4. Competition and AI

AI has the potential to fundamentally reshape how firms make decisions, in particular by generating predictive analytics, automating decision-making, and optimising business processes (OECD, 2017[1]). This transformation is a natural extension of trends that are already well underway: for example, two-thirds of EU e-commerce retailers use software to automatically adjust their prices to competitors (European Commission, 2017, p. 5[2]). As a result, competitive dynamics could change in ways that are difficult to predict. The speed and nature of decision-making with respect to prices, product design (including customisation to individual consumers), targeting marketing efforts, and managing costs and investments may all evolve.

The benefits to consumers (including business consumers) from this transformation are significant and wide-reaching. Consumers have access to an ever-growing range of digital products and services, for example online platforms that rely on searching or matching functionality. Established markets are being transformed by new entrants harnessing digital technologies, for example in the financial sector, where competition is being spurred by new and cheaper services (explored in Chapter 2). Further, even in traditional markets, significant supply-side efficiencies may be passed on in the form of lower prices or better responsiveness to consumer preferences, for example when supermarkets are able to better adjust their selection in response to trends in consumer demand. More broadly, one estimate suggests that AI technologies will increase labour productivity in developed economies by up to 40% by 2035 (Accenture, 2017[3]).

AI applications may also enable a faster detection and response to changes in consumer preferences, making markets responsive to the evolution of demand. They may also be used to detect unsafe products and fix them remotely through software updates. More broadly, the rapid development of AI technology has triggered rivalry on an important dimension of competition: innovation.

At the same time, AI can also provide new tools for consumers to make decisions. AI applications may use the ever-increasing amount of data available about products and consumers to develop personalised products and transactions that better fit individuals’ and businesses’ needs. AI can also guide consumers in markets with complex or uncertain prices and conditions, helping them select the best offer based on their needs and preferences. AI applications may also give rise to “algorithmic consumers” whose decision-making is at least partially automated through the application of algorithms (Gal and Elkin-Koren, 2017[4]).

These examples demonstrate the significant procompetitive potential of AI applications in markets, both on the supply and the demand side. However, as AI begins to play a greater role in decision-making, particularly with respect to pricing, it may also dampen competition in some markets. Predictability, transparency and frequent interactions between competitors’ algorithms can undermine competitive dynamics. For instance, while price transparency may facilitate consumer decision-making, it may also lead firms to compete less aggressively by facilitating co-ordination. Competition authorities may find that addressing these concerns will require new technical capacities to assess AI technologies and their effects in markets. Further, certain market outcomes caused by AI will be difficult to address with current enforcement tools. This chapter explores some of the potential competition risks stemming from the use of AI, namely collusion and abuses of dominance, and highlights the challenges this poses for competition policy. It also notes that research is still developing on the likelihood of these risks, as well as the potential for AI to play a destabilising role in collusive agreements.

Concerns about AI-related competition problems fit within a broader discussion about competition in digital-intensive markets. Data-intensive digital technologies can involve high fixed costs and low variable costs, economies of scale and scope, significant first-mover advantages, strong network effects,1 and switching costs or other consumer behaviours that lead to lock-in. These characteristics may contribute to market power that is (1) durable and (2) brought to bear in multiple markets. AI technologies can exemplify these market dynamics, particularly given the importance of data access and processing involved (OECD, 2016[5]).

Indicators of potentially durable market power, which suggest that markets are less contestable for new innovations and less open to competitive pressure, have been observed across OECD economies, and particularly in digital-intensive sectors. For example, the mark-ups firms charge over marginal costs are on the rise (Calligaris, Criscuolo and Marcolin, 2018[6]; Bajgar, Criscuolo and Timmis, 2021[7]), and the rate of new firm entry in digital-intensive sectors has declined (Calvino and Criscuolo, 2019[8]).

Digital markets may feature a significant degree of vertical integration (e.g. when the operator of an e-commerce platform offers its own products for sale on the platform) or conglomerate business models (when digital firms are active in multiple related markets featuring overlapping consumers). AI technologies can directly lead toward these types of business models, given that they can either be an important input in some markets, or be repurposed across multiple markets. One recent paper finds empirical support for this idea: specifically that AI investments in one market are associated with greater investment activity in other related markets (Babina et al., 2020[9]). These business models offer significant benefits to consumers in the form of economies of scope and convenience, but may also give rise to competition problems, including higher collusion risk due to multi-market contact of conglomerate firms (since multi-market contact can facilitate communication and enhance incentives for collusion) and the potential for abusive conduct by dominant firms (OECD, 2020[10]; Fletcher, 2020[11]).

In sum, AI, like many digital technologies, has the potential to produce wide-reaching benefits for consumers, but it may also contribute to durable market power in digitalised markets, and give rise to new forms of competition concerns. It can facilitate the implementation of collusive agreements and abusive conduct by dominant firms. It can also lead to new market dynamics that depress competitive pressures without clear anticompetitive conduct, creating challenges for the existing competition policy toolbox. These challenges are explored further below.

The use of AI to support and automate business decisions may usher in a new age for competitive dynamics in markets. With this new age may come competition harms stemming from collusion, abusive conduct on the part of dominant firms, or mergers, as described further below.

For competition policy purposes, the term collusion refers to “any form of co-ordination or agreement among competing firms with the objective of raising profits to a higher level than the non-cooperative equilibrium” (OECD, 2017, p. 19[1]). Collusion is not limited to agreements on prices; rather, it can include the allocation of different segments of a market among competitors, agreements regarding product quality or total output, and even harmonising the terms and conditions to be offered to consumers.

The widespread use of AI in a market can be associated with a higher risk of collusion, specifically market dynamics that make collusive outcomes more stable or rewarding. First, AI applications rely on data about consumers, competitors’ offers, and transactions in a market. When there is a high degree of market transparency due to the availability of data on competitor pricing or transactions, for example in online marketplaces, collusion is more likely. This is because it is easier for firms to communicate through pricing signals, detect any deviations from a collusive agreement, or deploy algorithms to enforce collusion (OECD, 2017, pp. 21-22[1]). Second, AI may be deployed in markets in which frequent interactions with competitors are feasible (through rapid price adjustments), which can make the implementation and monitoring of collusive agreements easier. Third, as noted above, since there are indications that AI applications are likely to be associated with conglomerate business models, firms active in AI technologies may be more likely to compete in multiple markets. Research suggests that, when competitors are in contact with one another across multiple markets, collusion becomes more likely because the gains from colluding increase (OECD, 2015[12]). Nonetheless, the UK Competition and Markets Authority has indicated that AI-facilitated collusion will be most likely in markets already susceptible to collusion (for example due to homogeneity of products and firms) (Competition and Markets Authority, 2018, p. 31[13]).

Beyond its deployment in markets whose conditions make collusion more likely, stable and profitable, AI may also directly lead to collusive outcomes, whether by design or not. Collusion can take two different forms, both of which can be implemented through AI (OECD, 2017, p. 19[1]):

Explicit collusion refers to anti-competitive conduct that is maintained with explicit agreements, whether they are written or oral. The most direct way for firms to achieve an explicit collusive outcome is to interact directly and agree on the optimal level of price or output.

Tacit collusion, on the contrary, refers to forms of anti-competitive co-ordination which can be achieved without any need for an explicit agreement, but which competitors are able to maintain by recognising their mutual interdependence. In a tacitly collusive context, the non-competitive outcome is achieved by each participant deciding its own profit-maximising strategy independently of its competitors. This typically occurs in transparent markets with few market players, where firms can benefit from their collective market power without entering into any explicit communication.

Explicit collusion (or forming a cartel) is considered one of the most serious breaches of competition law. Reflecting this seriousness, cartel formation is generally a by object or per se infringement – meaning that a cartel agreement is illegal regardless of its effectiveness. AI can be used as a tool for implementing and maintaining collusive agreements.

Collusive agreements can be difficult to maintain for a variety of reasons. First, communication between the parties will be limited, since they will want to avoid detection and proof of their agreement. Thus, the agreement’s stability may be undermined if the parties interpret its terms differently, for instance if costs or demand change. Second, there can be incentives for firms to deviate from an anticompetitive agreement, for example by slightly undercutting the agreed-upon price to earn more revenue. While a cartel can seek to punish deviators through targeted, aggressive competition, it can sometimes be difficult to detect deviations, and to co-ordinate a punishment response. Third, collusive agreements can be difficult to maintain as the number of participants increases, differences between participants or their products emerge, or innovations reshape the market.

AI can be a tool to overcome the challenges that threaten the stability of cartels. In particular, an algorithm can avoid misinterpretations or errors in implementing cartel agreements by executing pricing or other decisions according to pre-established parameters – particularly when a common flow of data is available to all parties. In addition, more sophisticated AI applications can be used to monitor implementation, uncover deviations from collusive agreements, and even implement punishment strategies. Finally, AI can help market participants respond to more fundamental changes in markets as well, for example if all participants use the same technology to make decisions when market conditions shift.

First, monitoring algorithms can take advantage of available data in order to monitor conditions in a market and determine whether any cartel participants have deviated from the agreement. When paired with technologies such as screen scraping, which allows data available to users (including prices or outcomes of search results) to be gathered automatically, these algorithms could make it easy to identify any deviations, and thus discourage them. These algorithms can also prevent the misinterpretation of firm behaviour, which could otherwise lead cartel members to believe, inaccurately, that deviations have taken place, undermining the stability of the agreement even if all members comply (Competition and Markets Authority, 2018, p. 24[13]).
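To make this concrete, the deviation-detection step can be sketched in a few lines. This is a purely illustrative sketch: the seller names, agreed price floor and tolerance are invented, and a real monitoring tool would ingest continuously scraped price data rather than a fixed dictionary.

```python
# Hypothetical sketch of a deviation-monitoring rule: flag any observed
# competitor price below an agreed floor. All values here are invented.
AGREED_FLOOR = 9.99   # hypothetical cartel price floor
TOLERANCE = 0.01      # ignore rounding noise in scraped prices

def detect_deviations(observed):
    """Return the (seller, price) pairs that undercut the agreed floor.

    `observed` maps seller names to their latest scraped price.
    """
    return {
        seller: price
        for seller, price in observed.items()
        if price < AGREED_FLOOR - TOLERANCE
    }

# Example: one seller quietly undercuts the floor.
scraped = {"seller_a": 9.99, "seller_b": 9.49, "seller_c": 10.25}
print(detect_deviations(scraped))  # {'seller_b': 9.49}
```

In practice, a rule this simple already suffices to discourage deviations: once undercutting is detected automatically and immediately, the expected gain from deviating shrinks.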

Monitoring algorithms have been used to implement a collusive agreement between poster sellers in the UK and US. The sellers agreed not to undercut each other’s prices, and used automated pricing software (the same software in the US case) to implement this agreement when setting prices on Amazon (US Department of Justice, 2015[14]; Competition and Markets Authority, 2016[15]). While the algorithm used here was relatively simple, it demonstrates the potential of AI as an instrument for facilitating collusive agreements.

Collusive outcomes can sometimes be facilitated by parties other than firms competing with one another, such as industry associations, upstream suppliers, or downstream sellers. Thus, artificial intelligence need not be employed by the competing firms to facilitate collusion – a central “hub” can be used to transmit information, execute collusive agreements, and monitor compliance (Competition and Markets Authority, 2021[16]). This could occur, for example, when all parties use a common provider for selling to consumers which limits price competition, as illustrated in the case in Box 4.1. This scenario is not new in competition enforcement, as traditional cartels have in the past used third parties to set terms, and monitor as well as enforce collusive agreements. However, AI may be able to do so more effectively and discreetly than ever before.

Competition authorities in several jurisdictions have determined that the imposition of fixed or minimum resale prices by manufacturers (also called “resale price maintenance”) could be a means of implementing a collusive agreement (OECD, 2019, p. 28[17]). This conduct may also be facilitated with algorithms. For example, the European Commission recently imposed fines totalling EUR 111 million on a manufacturer of consumer electronics for imposing resale price maintenance on retailers, and using monitoring algorithms to verify compliance (see Box 4.2).

Firms seeking to form a cartel or implement a cartel agreement may attempt to communicate with one another in indirect ways, known as “signalling”. Algorithms can help firms identify signals (for example, in relation to price or quantity), act on them according to pre-established decision rules, and even determine whether fellow cartel members have accepted a signalled proposal. This type of co-ordination can be difficult for competition authorities to tackle as explicit collusion. Among other challenges, it can sometimes be difficult to distinguish from procompetitive behaviour, such as certain types of information disclosure or algorithmic exploration (Autorité de la concurrence and Bundeskartellamt, 2019, pp. 53-55[18]). However, in a past case described in Box 4.3, the US Department of Justice argued that signalling through online airline reservation platforms constituted a per se infringement.

While the body of research in this area is growing, more needs to be done to understand whether AI can in fact play a role in undermining collusive outcomes, namely by allowing cartel participants to more effectively deviate from an agreement (Petit, 2017[19]). Greater research in this area would enable a more informed assessment of the underlying risks.

The role of AI in implementing explicit collusive agreements is relatively straightforward. However, many of the concerns about AI and collusion involve situations without an explicit agreement among competitors. While this type of collusion can still harm consumers, it is generally not covered under competition law, making the distinction particularly important.

AI may lead to tacitly collusive outcomes, in which firms make decisions that jointly maximise profits, without the need for any co-ordination or collective decision-making on the part of those firms. First, the use of AI in business decisions may create the conditions for tacit collusion to emerge. As firms invest in AI solutions to make pricing and other business decisions, the overall level of transparency and data availability can be expected to increase. In particular, competitive pressures may drive firms to collect and observe data in the market once at least one firm begins to do so (OECD, 2017, p. 22[1]). Thus, even in markets where tacit collusion may have been difficult in the past due to a lack of awareness of competitor decisions, AI and its associated technologies could make tacit collusion viable by making firms more predictable (Ezrachi and Stucke, 2017, pp. 1789-1790[20]).

Second, AI can affect the incentives of firms to tacitly collude. A firm will be less likely to take aggressive competitive decisions, such as a price cut, if it knows that competitors’ algorithms will be able to respond immediately, for example with price cuts of their own. This could render any such decisions unprofitable, and disincentivise aggressive price competition (Ezrachi and Stucke, 2017, pp. 1791-92[20]).

Third, AI, and particularly machine learning algorithms tasked with making business decisions independently, may arrive at a tacitly collusive outcome on their own. Take the example of a machine learning algorithm tasked with maximising profits by making pricing decisions using available data on demand and competitor prices. Without AI, the best profit-maximising strategy could be to compete and potentially out-innovate rivals, since a tacitly collusive outcome would be too difficult to achieve (for example due to analytical complexity, the propensity for human error or the lack of transparency), even if it would deliver greater profits. Machine learning algorithms, on the other hand, may reach a tacitly collusive outcome which does in fact maximise profits. This outcome could be the result of the repeated interactions of each firm’s pricing algorithms which, after a period of trial and error, avoid aggressive competition to protect profits (Ezrachi and Stucke, 2017, p. 1795[20]). The risk could be particularly pronounced if competing firms purchase the same algorithm or data stream from a third-party provider, resulting in a form of “hub and spoke” collusion (Competition and Markets Authority, 2018, p. 31[13]).
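A stylised simulation can illustrate why rapid, automated responses deter undercutting. The sketch below is not a machine learning model and reflects no real market: the prices, cost and demand figures are invented, and each firm follows a simple rule of matching the lowest price observed in the previous round.

```python
# Illustrative only: two pricing rules interact repeatedly. Each starts at
# a high price and matches the lowest price seen in the previous round, so
# a one-off price cut is matched immediately and the deviator's gain is
# short-lived. All parameter values are hypothetical.
HIGH, LOW = 10.0, 6.0   # hypothetical collusive and competitive prices
COST = 5.0
DEMAND = 100            # units sold per round, split equally at equal prices

def profits(p1, p2):
    """Lower-priced firm captures all demand; equal prices split it."""
    if p1 < p2:
        q1, q2 = DEMAND, 0
    elif p2 < p1:
        q1, q2 = 0, DEMAND
    else:
        q1 = q2 = DEMAND / 2
    return (p1 - COST) * q1, (p2 - COST) * q2

def simulate(rounds, deviate_at=None):
    """Return firm 1's total profit; it optionally undercuts once."""
    p1 = p2 = HIGH
    total1 = 0.0
    for t in range(rounds):
        if deviate_at == t:
            p1 = LOW
        pi1, _ = profits(p1, p2)
        total1 += pi1
        p1 = p2 = min(p1, p2)   # both match the lowest observed price
    return total1

print(simulate(10))                # steady high prices: 2500.0
print(simulate(10, deviate_at=0))  # one deviation, then matched: 550.0
```

Because the deviator’s gain lasts only one round while the matched low price persists, undercutting earns far less than maintaining the high price – a simple analogue of the incentive effect described above.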

Empirical research on the subject of tacit collusion engendered by AI is still developing. Findings remain mixed, and recent experiments suggest that the current state of AI technology may not lead to tacit collusion without a facilitating factor, such as common algorithm design (Deng, 2018[21]). However, one recent paper provides an example in which it appears to have occurred. Specifically, the authors investigated the adoption of algorithmic pricing software by German gasoline stations (Assad et al., 2020[22]). They found that the adoption of the software increased the margins of gasoline stations facing competition by 9% on average, but had no impact on the margins of stations that were monopolies in their area. Further, in markets with only two gas stations, the authors found that margins would only increase if both stations adopted the software, and that the nature of this margin change was consistent with the software gradually reaching a tacitly collusive outcome.

Tacit collusion, which generally arises in more concentrated markets with homogeneous products (for example commodity markets where firms compete primarily on price), is generally characterised by a lack of dynamism, stable prices, steady profit margins and few significant variations in market shares. When these outcomes are observed and firms in the market use algorithms to make pricing and other decisions, the precise cause may nonetheless be unclear. Machine learning algorithms can constitute a “black box”, since the process behind a given pricing decision, for example, may not be observable. Thus, it may not be possible to know whether competing algorithms have reached a joint profit maximisation outcome, for example by signalling and monitoring each other’s responses, or whether the cause of a lack of market dynamism is simply a high degree of transparency and simplistic algorithmic pricing rules. This opacity can compound challenges for competition policy makers, as explored further below.

In sum, AI, in the presence of transparent prices and other market data, can dampen competition by making collusion more durable, more feasible, or even the unintentional result of profit maximisation algorithms (especially when firms purchase the same algorithm from a single provider). However, there is only limited research available on the competitive impact of machine learning and other AI applications in firm decision-making. The precise effect of AI in a given market may well vary significantly based on the conditions of the market, the design of the algorithm, and data availability, among other characteristics. For example, it is not clear how the risk of algorithmic collusion will be affected by the market characteristics that normally make collusion more difficult, such as differentiated products, significant dynamism due to innovation, and substantial differences in competing firms (such as large discrepancies in firm size). On one hand, AI applications that rely on predictability and clear market patterns may be unable to reach a collusive outcome in rapidly changing and differentiated markets. On the other hand, sophisticated AI could be effective in implementing collusive strategies in complicated markets with highly differentiated products – collusion that would break down if implemented by humans due to the risk of error or limited capacity.

AI could also lead to more aggressive competition in some markets, in contrast to the concerns about collusion outlined above. AI could, for instance, become a major differentiator among firms in terms of how quickly they respond to changes in markets, how accurately they forecast and interpret data, and how well they harness AI to develop better, cheaper products. Aggressive profit-maximisation algorithms could even disrupt tacitly collusive outcomes in markets, depending on their design. In other words, rather than dampening competition, AI could spur it. The outcome may depend on the characteristics of specific markets: concentrated markets with homogeneous products and firms may be more prone to tacit algorithmic collusion, whereas in dynamic markets characterised by firm heterogeneity, AI could intensify competition. However, there are also risks associated with the latter scenario.

In digital markets exhibiting features that create a tendency toward concentration and market power, such as strong network effects, large discrepancies in access to data, and switching costs for consumers, the aggressive competitive strategies employed by AI may in fact cross into the category of abusive conduct.

Competition laws generally prohibit either certain types of abusive conduct by firms deemed to be “dominant” (i.e. to hold significant market power), or attempts to obtain or retain a monopoly position (for further discussion, see OECD (2020[23])). Enforcement of these prohibitions generally involves an assessment of the effects of the conduct in question, in contrast to by object or per se collusion cases, which do not require such an assessment. This reflects the economic theory associated with the conduct: certain strategies that are harmless or even procompetitive when employed by small firms could in fact be anticompetitive and harmful to consumers when employed by dominant firms. This logic would apply to any potentially abusive conduct associated with the use of AI.

Markets in which AI plays a significant role in competitive decision-making and product design may, like many digital markets, exhibit certain features that make dominant positions more common. AI investments involve significant economies of scale and scope, given the data and technical capacity required. When AI is a part of the product offered to consumers, it is also likely to exhibit network effects, since greater user numbers can improve the quality of the algorithms involved. Thus, AI applications as well as the data flows and intangible assets used to operate them can be a source of competitive advantage, and barriers to entry, enabling the emergence of market power and potentially dominance. Recent OECD work suggests that these effects have led to higher barriers to technology diffusion and declining business dynamism (Berlingieri et al., 2020[24]; Calvino, Criscuolo and Verlhac, 2020[25]).

When AI drives the competitive decision-making of a dominant firm on prices or other important dimensions of quality, it could choose anticompetitive strategies. For example, machine-learning algorithms with a sufficiently long-term perspective could choose to use predatory pricing2 or margin squeeze3 strategies without specific instructions to do so. These strategies can both lead to consumer harm, by excluding competitors from a market or hampering their access to either inputs or downstream distribution.

In fact, these strategies may be more effective when implemented by AI. This is because planning a predatory pricing strategy aimed at driving a competitor out of the market, for example, requires information on the competitor’s cost structure, available resources and capacity to withstand sustained price cuts. AI could be used to analyse available data more effectively to determine these characteristics, or to infer them from observable behaviour such as the competitor’s response to changes in the market (Dolmans, 2017, p. 8[26]).

When AI is part of a product provided to consumers (e.g. when a platform provides search results based on a search algorithm), rather than a mechanism for making business decisions, different competition concerns may arise. In particular, dominant firms may seek to leverage their position to exclude competitors in downstream or related markets. As noted above, AI applications can exhibit significant economies of scope, in that they can be used for multiple different products and generate useful insights across product markets. Thus, to the extent AI leads to more conglomerate business models, there may be a risk of leveraging through, for example, bundling and tying of products together (OECD, 2020[10]).

The most prominent concern, however, may stem from vertical relationships, for example when AI is used in online platforms that connect firms with final consumers. In particular, as a result of the market characteristics described above, the operators of online platforms which incorporate algorithms may become “gatekeepers” in certain markets (Crémer, de Montjoye and Schweitzer, 2019[27]). For instance, a dominant online marketplace platform can serve as an important gatekeeper between a seller and its consumers. The design of the AI used to display products to consumers that enter a search query would, in this case, have a significant impact on the prospects of the seller. Thus, it may be used to exclude firms from the market or to advantage firms that make commercial arrangements with the gatekeeper, affecting the experience of users without their awareness. In other cases, online platform firms may also compete downstream, for example offering products for sale on the platform they operate. This could create incentives for anticompetitive conduct that leverages platform market power into the downstream markets relying on that platform.

One example was highlighted in allegations set out in a Majority Staff Report from the US House of Representatives Subcommittee on Antitrust. The Report described how an online marketplace could use its gatekeeper status and preferential access to third-party seller data to identify popular products, copy their features and introduce its own version (Majority Staff, Subcommittee on Antitrust, Commercial and Administrative Law, 2020, pp. 273-274[28]). While this conduct may be a procompetitive strategy in some instances, it may constitute abusive conduct in others. Indeed, the European Commission has opened an investigation into Amazon for these practices (European Commission, 2019[29]). AI technologies may be used to implement these strategies by assessing market data and identifying opportunities for product launches, for instance.

Another example of competition concerns arising when a product involves the use of algorithms is “self-preferencing” – when a firm provides advantages to its own products in search results, for example – which could constitute an abuse of dominance (OECD, 2020, p. 54[23]). This was the theory of harm underlying the European Commission’s case involving Google Shopping, as described in Box 4.4.

More generally, there is growing interest in the role that changes to algorithm design and behavioural “nudges” can play in shaping consumer behaviour. For example, an online platform may take advantage of the tendency of consumers to consult only the first few results on a search query, or use prominent display features to try to encourage a given choice – all while consumers assume that they are obtaining neutral or unbiased results (see, for example, Costa and Halpern (2019[30])). More broadly, AI can be used to analyse consumer decision-making patterns and alter the “choice architecture” available to consumers in order to take advantage of consumer behavioural biases without the knowledge of consumers (Competition and Markets Authority, 2021[16]). The implications of such strategies go beyond competition policy, and have particular relevance for consumer protection authorities.

In some jurisdictions, abuse of dominance prohibitions extend beyond conduct aimed at harming competition by excluding competitors or narrowing their margins. Specifically, some jurisdictions prohibit dominant firms from imposing terms on consumers or suppliers that are deemed to be unfair or discriminatory (referred to as exploitative abuses of dominance) (OECD, 2020, p. 50[23]). These cases involve numerous conceptual as well as legal challenges, and they represent only a small proportion of abuse of dominance cases brought by competition authorities (OECD, 2018, p. 27[31]). However, a recent case by the German competition authority against Facebook regarding exploitative abuse of dominance (Bundeskartellamt, 2019[32]) suggests that this tool may be considered in some jurisdictions to address competition concerns in digital markets. Whether authorities opt to pursue concerns associated with AI strategies using provisions regarding exploitative abuses of dominance (for example when platforms impose on manufacturers the type of practices enabling product copying described above) remains to be seen, however.

Of particular relevance to AI is the question of whether personalised pricing, or price discrimination across consumers, could constitute an exploitative abuse of dominance. In contrast to the concerns about stable prices, uniformity, dampened competition and tacit collusion described above, AI may lead to highly dynamic and specialised markets (Dolmans, 2017, p. 8[26]).

Price discrimination occurs when consumers are charged different prices based on their characteristics or consumption patterns. For example, rather than offering a uniform service, many airlines offer different cabin types and services according to a consumer’s willingness to pay. This can be an effort to capture additional revenues from certain groups of consumers, for example those travelling for business versus leisure.

With AI, firms can process a growing amount of data on consumers and their characteristics. This allows firms to set prices not just for broad categories of consumers, but in some cases to offer individually tailored prices based on estimates of each consumer’s willingness to pay, as ascertained using multiple data points – a practice known as personalised pricing (OECD, 2018, p. 8[31]). In particular, AI applications may be able to generate inferred data (such as consumption preferences, brand loyalty, and purchasing behaviours) from data provided by and observed about consumers in a way that was not previously possible (OECD, 2018, p. 11[31]). While data can make some degree of personalisation possible in many digital markets, AI decision-making can enable more extensive, more accurate and more granular personalisation. However, it may not be able to overcome all challenges associated with implementing personalised pricing, such as consumer arbitrage, which could help some consumers avoid higher prices.
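To make the mechanism concrete, the following sketch shows how an inferred willingness-to-pay estimate could translate into an individually tailored price. Everything below (the behavioural signals, weights, and pricing rule) is invented for illustration and does not reflect any actual firm's model:

```python
# Hypothetical sketch: all signals, weights, and the pricing rule are
# invented for illustration, not any firm's actual method.

def estimate_wtp(consumer):
    """Infer willingness to pay (WTP) from behavioural signals."""
    base_price = 50.0
    # Illustrative weights: loyalty and purchase frequency raise the estimate.
    return base_price * (1 + 0.3 * consumer["brand_loyalty"]
                           + 0.2 * consumer["purchase_frequency"])

def personalised_price(consumer, floor=40.0):
    """Offer a price just below estimated WTP, never below a cost floor."""
    return max(floor, 0.95 * estimate_wtp(consumer))

consumers = [
    {"id": "A", "brand_loyalty": 0.9, "purchase_frequency": 0.8},  # high WTP
    {"id": "B", "brand_loyalty": 0.1, "purchase_frequency": 0.2},  # low WTP
]
for c in consumers:
    print(c["id"], round(personalised_price(c), 2))
```

In practice such estimates would come from models trained on many behavioural features; the point of the sketch is only that richer inferred data lets the offered price track each consumer's estimated willingness to pay.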

Beyond personalised pricing, AI can be used to personalise the functionality and information provided to consumers. In fact, personalisation may become significantly more widespread thanks to AI applications. For example, it can play a role in the ranking of products in a search query, or in the timing and content of notifications directed to consumers (Competition and Markets Authority, 2021[16]). Further, this personalisation may occur without a consumer’s knowledge.

Personalisation, including personalised pricing, is not automatically a cause for competition concern – in many cases it can improve overall efficiency in markets, and could even improve the accessibility of products. For example, personalised pricing could result in lower prices for some consumers who would not purchase a product if there were only a single, market-wide price. In particular, those with a lower willingness to pay could be offered a price below the previous market-wide price, potentially offset by higher prices for consumers with a higher willingness to pay (OECD, 2018[31]). It may also allow greater customisation to meet a consumer’s needs, and provide a strategy for new firms to develop a foothold in a market (Competition and Markets Authority, 2021[16]). However, significant concerns may arise in some cases. The line between human decision-making and algorithmic outcomes can be blurred in these situations, particularly when the decision-making process of “black box” algorithms tasked with a broad objective cannot be observed.

Some of these concerns can fall squarely within the realm of competition law. For example, a dominant firm may use algorithms to identify users of rival products and implement “selective pricing” or use behavioural nudges in order to lure them away (OECD, 2018, p. 28[31]). Such strategies could constitute an exclusionary abuse of dominance, depending on the circumstances.

Other concerns associated with personalised pricing may be raised in the context of the rarer exploitative abuse of dominance cases described above. For example, in some jurisdictions, complaints may arise about a dominant firm exploiting its market power by using an algorithm to impose excessive prices on some consumers. However, caution may be needed in these circumstances, particularly if intervention could threaten access for consumers who may only have access to a product thanks to cross-subsidisation and personalised pricing.

Finally, personalised pricing may give rise to broader societal concerns that are better addressed outside of competition law, even if it is being implemented by firms with market power. Consumer protection concerns may arise out of the opacity of personalisation algorithms, which can put consumers at an informational disadvantage and make shopping around more difficult. Broader concerns could also arise out of the potential for AI to set personalised prices that result in discrimination, including on the basis of an individual’s age, gender, location, or race (Competition and Markets Authority, 2021[16]). Concerns about this type of discrimination are more fully explored in Chapter 3.

Competition authorities may also need to assess the risks of market power stemming from AI when reviewing mergers. Many of these risks apply across a broad range of digital markets, and overlap with concerns stemming from data access as well as network effects. For example, a merger that alleviates the competitive pressure on a firm may also reduce its incentives to innovate, thus limiting the positive potential of AI technologies for consumers (OECD, 2018[33]). However, the dynamics associated with AI may also lead to unique concerns. For example, a merger that combines two firms’ datasets or AI capacity could result in market power that can be hard to contest given the substantial economies of scale and scope associated with data and AI applications (OECD, 2016[5]).

While these harms will be most straightforward with respect to mergers between two product market competitors, there has been growing interest in vertical mergers (i.e. mergers between firms and their suppliers or downstream distributors) in the digital sector (OECD, 2019[34]). Competition harms may result from vertical mergers if the post-merger firm can cut its competitors off from the supply of an essential input (such as data or AI technology), although the analysis in these cases can be complex.

Finally, conglomerate mergers (i.e. mergers between firms that are neither competitors nor in a supply relationship), may also give rise to harm in particular circumstances. These circumstances may be particularly common in digital markets, including those featuring AI technology. In particular, a digital firm with market power in one market may use a merger to enter another market with an overlapping user base. It may then leverage its market power in the original market to foreclose competition in the new market, for example by bundling products together in some situations (OECD, 2020[10]).

Merger control is a preventative measure that can also help address the risks of anticompetitive conduct before it occurs. Thus, in markets in which AI is used for competitive decision-making, or as a component of a product offered to consumers, the risks of the merger facilitating algorithmic collusion or abuses of dominance could be considered by authorities. For example, authorities could consider whether one of the merging parties used a different pricing strategy or algorithm from other firms in a market – meaning the merger could be depriving the market of a “maverick” that encourages competition. Further, if a merger risks significantly increasing transparency in a market (due, for example, to one party’s tendency to disclose significant detail on its prices and products beyond what is needed for consumers), it may also lead to collusive outcomes.

The discussion above identifies a range of potential competition problems associated with AI, whether as a decision-maker, an implementer of firm strategies, or a component of a product offered to consumers. Further, while limited in number, there have been some cases brought by competition authorities to address anticompetitive conduct associated with algorithms, as well as one initial empirical indication that algorithms could dampen competition. The question remains, however, whether competition policy and in particular enforcement frameworks are equipped to address the potentially significant impact of AI on competitive dynamics in markets. The OECD Competition Committee held an initial discussion on this subject in 2017, which served to identify some of the key challenges facing competition authorities as well as some proposed solutions. This section explores these challenges and the developments that have occurred since that discussion.

For the purposes of competition law, harms associated with AI can be divided into three categories: (1) the use of AI to implement anticompetitive agreements or strategies developed by humans; (2) the implementation of identifiable anticompetitive strategies by AI without explicit instructions by humans; and (3) AI coinciding with a reduction in competitive intensity without explicit evidence of anticompetitive strategies or agreements.

The first category involves the most straightforward application of competition law. Explicit cartel agreements among competitors are infringements of competition law regardless of whether they are executed through phone calls, meetings, or pricing algorithms designed to mirror one another’s behaviour. The collusion cases involving poster sellers in the US and UK described above illustrate this point.

Similarly, the use of algorithms to implement abuses of dominance (including the self-preferencing issues described above) would be subject to the same legal standards as the implementation of these strategies through other means. Recent reform proposals by the European Commission and UK Government have included additional rules and oversight regarding gatekeeper platforms, however.4 These reforms reflect concerns about the ability of existing competition laws to address concerns regarding self-preferencing, for example in terms of speed or coverage of the law.

The second category may apply in cases where an abuse of dominance has occurred as a result of the decisions made by an algorithm. Unlike hard-core collusive arrangements, which are assumed to be anticompetitive given past experience, abuses of dominance are assessed in terms of their effect, and rooted in economic theories of harm. Thus, even if an algorithm is not explicitly programmed to exclude competitors, for example, firms using “black box” algorithms that execute anticompetitive behaviour on their own are likely to be liable for the effects of this conduct. For example, when investigating a potential exploitative abuse of dominance case involving the possibility of excessive airfares, the President of the Bundeskartellamt stated (Bundeskartellamt, 2018[35]):

The use of an algorithm for pricing naturally does not relieve a company of its responsibility. The investigations in this case have also shown that the airlines specify the framework data and set the parameters for dynamic price adjustment separately for each flight. The airlines also actively manage changes to these framework data and enter unanticipated events manually, which are not automatically accounted for by the system.

The third category remains the most prominent legal challenge to enforcing competition laws in the presence of AI. In particular, competition law does not apply to tacit collusion outcomes reached without any co-ordination among the firms involved. This is because tacit collusion may in fact be the most rational response to conditions in certain markets, even if it is not ideal from an economic perspective, and thus it could be difficult to fashion an effective remedy that would improve market outcomes (OECD, 2017, p. 18[1]).

Collusion cases are generally prosecuted according to whether an agreement has taken place among firms. The precise definition of an agreement varies across jurisdictions, but proof is generally required that an allegedly collusive outcome was the result of direct or indirect communication rather than purely independent decision-making by the firms involved. Some jurisdictions supplement this by also considering other forms of collusion. Concerted practices are “a form of coordination between undertakings by which, without it having reached the stage where an actual agreement has been concluded, practical cooperation between them is substituted for the risks of competition” (European Commission, 2019, p. 4[36]). Facilitating practices are “positive, avoidable actions, engaged in by market players, which allow firms to easier and more effectively achieve coordination, by overcoming the impediments to coordination” (Gal, 2017, p. 18[37]).

The challenge posed by tacit collusion enabled by algorithms is thus similar to that posed by tacit collusion more broadly, particularly in oligopolistic markets (OECD, 2017, pp. 35-36[1]): in the presence of strong entry barriers and homogeneous products, there may be strong incentives for firms to avoid aggressive competition. Some have suggested that algorithmic collusion in these situations could be addressed as part of a broader competition policy debate about attempting to tackle tacit collusion through competition law (Gal, 2017, p. 21[37]). In particular, the reliance on an explicit agreement to prove an infringement may need to be revised.
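The difficulty can be illustrated with a deliberately trivial simulation: two firms each run the same one-line rule of matching the rival's last observed price. The rule and numbers below are hypothetical and far simpler than real pricing software:

```python
# Stylised toy model, not a real market: both firms run the identical,
# trivially simple rule of matching the rival's last observed price.

def matching_rule(rival_price):
    return rival_price

p1, p2 = 18.0, 19.0          # arbitrary starting prices
for _ in range(3):           # firms move in alternating turns
    p1 = matching_rule(p2)
    p2 = matching_rule(p1)
print(p1, p2)                # prices are now uniform and stable at 19.0

p1 = 15.0                    # a unilateral price cut...
p2 = matching_rule(p1)       # ...is matched within one period
print(p1, p2)                # 15.0 15.0: undercutting yields no lasting gain
```

Prices become uniform and stable without any communication, and a deviation is matched immediately, removing the payoff from undercutting. Yet nothing in this toy example would constitute an "agreement" in the traditional legal sense.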

In the absence of such a significant and controversial change, some alternative approaches tailored to algorithms could be considered. One such approach is to treat algorithms as facilitating practices, particularly if they enable tacitly collusive outcomes to be reached more efficiently, according to the analysis set out in Figure 4.1 (Gal, 2017, p. 18[37]). For example, the use of similar pricing algorithms could be considered a harmful facilitator of collusion. An equivalent practice in more traditional sectors would be a decision to publicise detailed price and product information, in a manner beyond what would be useful to consumers, to clearly signal to competitors and enable collusion. However, there may be a significant evidentiary burden for proving that an algorithm constitutes such a practice in some jurisdictions (Ezrachi and Stucke, 2017, pp. 20-21[38]).

Alternatively, there may be a case in some situations for considering that an agreement between firms may have been reached through a “meeting of algorithms” (OECD, 2017, p. 38[1]). In particular, if a collusive outcome has been reached through rapid algorithmic pricing decisions, it could be interpreted as a case of explicit collusion. Specifically, the algorithms could be deemed to have reached an indirect agreement by signalling to each other in order to reach a mutually acceptable price. However, competition authorities are still grappling with whether this interpretation may fit within current competition law, particularly given that this sort of algorithmic communication may still be limited in today’s markets (Autorité de la concurrence and Bundeskartellamt, 2019, p. 53[18]).

These theories have yet to be tested extensively in competition authority proceedings. Doing so will require addressing important questions, including questions of liability. For example, if a firm procures a black box machine-learning algorithm from a third-party developer, and the latter does not explicitly include collusion objectives in programming the algorithm, who should bear liability for any collusive outcomes that result? Some authorities have suggested a duty on the part of firms to avoid collusive outcomes, in the same way that a firm would not escape liability if an employee engaged in collusion without the knowledge of the firm’s owners (Autorité de la concurrence and Bundeskartellamt, 2019, p. 58[18]; Vestager, 2017[39]).

In the event the legal challenges to tackling tacit algorithmic collusion prove insurmountable, authorities may need to make use of the alternative measures described in Section 4.3.3 below.

Investigations into collusion and abusive conduct each involve their own unique challenges – challenges that may be exacerbated when algorithms are involved.

In the case of collusion investigations, which are a priority in many jurisdictions, detection is a particular challenge.5 Firms that collude generally seek to maintain the secrecy of their agreement, for instance by minimising communication, or using indirect or informal communication that does not leave behind evidence. In this sense, explicit cartels using AI tools involve the same detection challenges as many other cartels. However, AI may be an effective means of further limiting communication between cartel participants, for example by using signalling techniques to help co-ordinate a cartel’s response to changes in a market.

There are some techniques available to competition authorities to tackle detection challenges. First, competition authorities have developed leniency programs that allow a cartel member to come forward with information about the cartel in exchange for a lesser penalty (OECD, 2001[40]). A leniency application may be an attractive option for cartel participants that are not satisfied with the outcome of a cartel agreement, or that fear detection and prosecution by competition authorities (including the potential for another cartel member revealing the cartel and applying for leniency). While leniency is a primary detection method in many markets, its effectiveness will depend on the existence of a credible threat of detection through other means. Thus, authorities are continuing to explore alternative detection methods.

One such method of cartel detection is the use of screening tools by competition authorities (OECD, 2013[41]). These can include structural screens, which may identify markets where authorities may wish to pay particular attention given certain characteristics that might make collusion more likely (including product homogeneity and oligopolistic market structure). Authorities are also exploring the use of behavioural screens, which use available data on firm behaviour to flag potential collusion.6 This could include looking for patterns of unusual or unexplained behaviour (such as uniform price rises across a market not related to changes in demand or input prices), and identifying “structural breaks” in market data that could show the implementation of a cartel agreement or the adaptation of the cartel to market changes (OECD, 2013[41]).
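A behavioural screen of the kind described above can be sketched as follows. The data, thresholds, and variable names are hypothetical, and a real screen would control for many more cost and demand factors:

```python
# Illustrative behavioural screen (toy data and thresholds): flag periods
# in which every firm raises prices together while a demand proxy stays
# flat - the kind of unexplained uniform movement a screen might surface.

prices = {   # weekly average price per firm (hypothetical)
    "firm_a": [10.0, 10.1, 10.0, 12.0, 12.1, 12.0],
    "firm_b": [10.2, 10.1, 10.2, 12.2, 12.1, 12.2],
    "firm_c": [ 9.9, 10.0,  9.9, 11.9, 12.0, 11.9],
}
demand_index = [1.00, 1.01, 0.99, 1.00, 1.01, 1.00]   # roughly flat

def flag_uniform_rises(prices, demand, price_jump=0.10, demand_move=0.03):
    """Return weeks where every firm's price jumps but demand does not."""
    flags = []
    for t in range(1, len(demand)):
        all_jump = all(
            (series[t] - series[t - 1]) / series[t - 1] > price_jump
            for series in prices.values()
        )
        demand_flat = abs(demand[t] - demand[t - 1]) <= demand_move
        if all_jump and demand_flat:
            flags.append(t)
    return flags

print(flag_uniform_rises(prices, demand_index))   # -> [3]
```

Here the screen flags week 3, where all three firms raise prices by roughly 20% with no movement in demand. A flag is only a lead for further investigation, not evidence of collusion in itself.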

Screens are generally only helpful in providing indications of potential cartel activity – that is, in order for prosecution to occur, further investigation will be required. However, they can create disincentives for cartel formation, and encourage leniency applications, to the extent they create the threat of detection. Further, screens involve a range of technical and analytical challenges in their implementation, but AI may in fact help competition authorities in surmounting these challenges, as described in the following Chapter.

Beyond detection, competition authorities also face the challenge of investigating potential collusion facilitated by AI that straddles the line between explicit and tacit. Gal (2017, pp. 24-25[37]) proposes that authorities prioritise certain situations where competition harm may be more clear-cut, including: when firms consciously use similar algorithms; when they use similar data and make it easier for rivals to observe this data; or when firms reveal the content or design of their algorithms, making it easier for rivals to copy them.

Finally, competition authorities face a significant technical challenge in analysing AI-related competition concerns, whether they pertain to collusion or abusive conduct. While competition authority staff always face the need to become acquainted with an industry, its players and its dynamics when undertaking investigations, the assessment of AI can pose a particular challenge. This is due not only to the technical aspects of AI design and functioning, but also the opaque nature of AI, particularly when machine learning functionality is involved.

The UK Competition and Markets Authority (2021[16]) has recently published a report that identifies a range of investigative techniques allowing competition authorities to better assess AI. Specifically:

  • When authorities do not have direct access to an algorithm or the data it uses, they may nonetheless monitor the behaviour of market participants through, for example: “mystery shopping” (in which authority staff mimics a consumer), scraping techniques (which extract data from websites or applications), or access to application programming interfaces (APIs, which can facilitate access to data on an online platform).

  • When authorities have access to the data used by the AI, or data regarding its outputs, they may be able to undertake analysis on the nature and effects of the AI decision-making. For example, in its investigation regarding Google’s practice of promoting its own comparison shopping service, the European Commission used data on 1.7 billion search queries in order to estimate the effect of search result ranking on consumer decisions (European Commission, 2017[42]).

  • When authorities have access to the code underlying AI, they may attempt a review of the code (to determine, for example, whether there are explicit anticompetitive instructions included), although this may involve substantial technical and practical difficulty. Alternatively, authorities may engage in testing the algorithm in order to better understand its functioning and outputs.
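As a minimal illustration of the scraping technique mentioned in the first bullet, the following uses only the Python standard library to extract listed prices from a fabricated product page (the page content and CSS class names are invented for the example):

```python
# Minimal scraping sketch using only the standard library. The page
# content below is a hypothetical stand-in for a scraped platform page.

from html.parser import HTMLParser

class PriceExtractor(HTMLParser):
    """Collect numeric prices from elements tagged class="price"."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []
    def handle_starttag(self, tag, attrs):
        if ("class", "price") in attrs:
            self.in_price = True
    def handle_data(self, data):
        if self.in_price:
            self.prices.append(float(data.strip().lstrip("€")))
            self.in_price = False

page = """
<ul>
  <li><span class="name">Widget</span><span class="price">€19.99</span></li>
  <li><span class="name">Gadget</span><span class="price">€24.50</span></li>
</ul>
"""
parser = PriceExtractor()
parser.feed(page)
print(parser.prices)   # -> [19.99, 24.5]
```

Real monitoring would fetch live pages and cope with far messier markup; the sketch only shows how structured price data can be recovered from a page without any direct access to the platform's algorithm.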

Each of these approaches is likely to require that competition authorities have access to sufficient technical expertise. Several authorities are investing in this capacity, including the ability not only to assess algorithms but also to deploy them for their own purposes, as outlined in the following Chapter. At the same time, more traditional evidence gathering can help in some situations. For example, the collection of internal firm documents that help explain the business strategy associated with AI techniques can be valuable for investigations (Deng, 2018, p. 91[21]).

As the discussion above highlights, there are a range of competition and other policy concerns that may not be easily addressed through current competition enforcement tools. These range from tacit collusion, to the modification of consumer-facing algorithms in ways that may not qualify as an abuse under current standards, to wide-ranging consumer protection and even human rights concerns. There are several opportunities for competition policy makers to help address these concerns.

AI may affect competitive dynamics in a market without leading to explicit collusion or abuses of dominance. As noted above, tacit collusion facilitated by AI may depress market dynamics and make competitor decisions more predictable. AI may also be designed in a way that takes advantage of consumers’ behavioural biases or makes switching to other providers more difficult.

Many OECD competition authorities have the ability to conduct market studies, in cases where competition is not functioning well but an antitrust investigation is not warranted.7 A limited number of jurisdictions also have the power to order the implementation of remedies in response to any issues identified (generally through a complementary instrument called a market investigation).

Market studies may be of particular value in identifying the conditions that are leading to dampened competition in markets featuring AI decision-making, or in assessing the supply- and demand-side factors enabling other uncompetitive outcomes. One such example is the European Commission’s sector inquiry on e-commerce, which observed issues regarding price transparency and automated price adjustments, as summarised in Box 4.5. The findings of market studies can be used to support recommendations for regulatory change, measures to inform consumers, or follow-on investigations by competition or other regulators. For example, they can discuss the balance between promoting data access for procompetitive reasons and the risks of collusion. Market studies can also highlight risks associated with market concentration among AI service providers (which could lead to symmetry in pricing behaviour, for example).

In addition, market studies and other competition authority advocacy efforts can be used to identify ways to better equip consumers to face AI-related conduct generating competition concerns. Competition authorities are beginning to explore the greater use of measures focusing on the consumer side of the market, in recognition of the fact that competition problems may be enabled by certain demand-side characteristics (see, for example, the background note prepared by the UK Competition and Markets Authority to support an OECD discussion on this topic (2018[43])). Some potential measures of relevance here include leveraging AI to enable comparison tools and services acting as intermediaries between consumers and online platforms, transparency requirements, and data portability as well as interoperability (see, for example, Costa and Halpern (2019[30])). The OECD is exploring the application of these measures, for example with a discussion earlier this year on data portability, interoperability and competition in the Competition Committee.8

Beyond the importance of international co-operation among competition authorities given the borderless nature of many digital markets, addressing concerns regarding AI will require close and ongoing co-ordination with other regulators, including consumer protection and data protection authorities as well as sector regulators. The use of AI to take advantage of consumer behavioural biases, for example, may not be easily addressed using competition enforcement tools even if it does shape market dynamics, in which case competition authorities may wish to support further consumer protection or data protection interventions. In addition, data protection frameworks can have a significant impact on the competitive dynamics associated with AI in markets; for instance, poorly-designed data protection regulations may enhance the advantage of incumbents and reduce market contestability (see, for example, (OECD, 2020[44])). Further, as a greater range of regulators begins to respond to AI issues within their own mandates, competition authorities can also play a role in ensuring that these responses do not unduly harm competition and lead to unintended consequences for consumer welfare. For instance, authorities may need to highlight situations in which competition enforcement can more effectively address certain concerns than regulation – particularly when rapid innovation could render regulation obsolete, or when the concern can be addressed by leveraging the forces of competition.

In addition, the competition concerns outlined above present a range of novel conceptual and practical challenges for competition authorities. Authorities can benefit from co-operation with researchers when seeking to assess how AI is currently affecting competitive dynamics, and the ensuing effects on consumers. Further, this co-operation can help explore how widespread certain anticompetitive practices are, and develop tools for detecting and analysing them.

Policy makers may also need to consider the role of trade policy when seeking to promote the evolution of AI in a procompetitive manner. For instance, discussions at the World Trade Organization regarding e-commerce have covered potential competition distortions in trade policy that could affect AI applications. This includes requirements that firms transfer their software source code when seeking to operate in a given jurisdiction.9 In addition, data localisation policies could affect the evolution of AI applications, either by limiting the combination of datasets or creating competition distortions, and will thus need to be considered carefully.

Concerns about the role of algorithms in facilitating or reaching collusive outcomes have been on the radar of competition authorities for several years (as demonstrated by the OECD’s 2017 roundtable on the topic). Since that time, interest and analysis regarding competition issues in digital markets more broadly has grown significantly. Governments have commissioned expert reports,10 competition authorities have undertaken market studies,11 and elected officials have held hearings.12

These studies emphasise the importance of protecting and promoting competition in digital markets, given their growing role in the economy and their importance for future growth. They also point to changes needed to address growing concerns about competitive dynamics in the wake of digitalisation. In some cases, these changes relate to strengthening competition enforcement frameworks, such as enhancing merger control or enabling faster competition authority intervention in the case of abuses.

Further, a growing set of jurisdictions are also developing new legislation and new regulatory frameworks to address competition concerns that may not be easily addressed within existing competition frameworks, including proposals made by the European Commission13 and UK Government,14 among others. This reflects the view that current competition enforcement procedures may be too slow, too reactive instead of proactive, or may not capture all competition concerns that arise in digital markets.

The scope of these regulatory proposals extends well beyond addressing concerns regarding AI, covering competition risks stemming from the bundling of digital products, self-preferencing by vertically-integrated firms, and concerns about the bargaining relationship between large online platforms and the businesses using them. While the full extent of these proposals is beyond the focus of this Chapter, the OECD Competition Committee will be holding a roundtable on the topic of ex ante regulation in digital markets in December 2021.

Of particular relevance to the assessment of AI-related competition issues, both the EU and UK proposals referenced above seek to impose specific rules on online platforms acting as gatekeepers. These measures aim to rectify perceived gaps in competition enforcement frameworks, namely limits on their ability and speed in addressing concerns about self-preferencing and other issues of relevance to AI applications. The precise application of these measures is still being developed, but they suggest the need to consider asymmetric measures for particular dominant firms, including with respect to the design and operation of their algorithms.

Other policy measures are being developed to specifically address AI-related competition concerns. For instance, the European Commission has set out guidelines under the EU Platform-to-Business (P2B) regulation regarding the ranking of choices presented to consumers on platforms. While not legally binding, these guidelines can demonstrate how platforms can comply with their obligations under the P2B regulation. They include transparency with respect to the main parameters of algorithmic ranking, improving the information available to businesses that use platforms and encouraging fairness in the application of ranking algorithms (European Commission, 2020[45]).

This chapter highlights how AI may lead to competition problems, particularly collusion and abusive conduct. Further, the nature of these outcomes may not be easily addressed using existing competition enforcement tools. This is particularly the case with tacit collusion, which one commentator likened to a “crack” in enforcement frameworks that could be widened into a “chasm” due to AI (Mehra, 2016, p. 1340[46]).

The competition policy community’s understanding of exactly how AI will affect competitive dynamics is still developing, and remains rooted in theory. By enabling more efficient decision-making and new products and services, AI holds substantial procompetitive potential, and may even serve to undermine the stability of some collusive outcomes. At the same time, competition risks could emerge if AI dampens competition by making markets predictable, transparent and stagnant, or if it leads to the implementation of aggressive strategies that exclude competitors from markets. More broadly, AI gives rise to a range of concerns that cannot be neatly confined within a competition law context, and will require the attention of consumer protection regulators as well as policy makers to address more fundamental questions about discrimination.

Despite the significant legal and investigative challenges that these AI-related competition problems may pose, it is clear that competition authorities’ toolboxes are not empty. They can capture a wide range of conduct using existing tools, as demonstrated by some of the cases summarised above. They can manage risks of anticompetitive conduct through careful merger control, conduct market studies to identify procompetitive measures (including those aimed at supporting consumer information and decision-making), and engage in advocacy and co-operation with other regulators. However, additional legislative tools may be required to capture the full range of competition concerns, and competition authorities will need access to the technical capacity and knowledge needed to understand how AI works in markets. In fact, there are numerous opportunities for authorities to harness AI to improve their work, as will be explored in the following chapter.

In sum, it is still too early to say whether AI will deliver on its potential for significant procompetitive consumer benefits, or whether it will lead to widespread competition harm. However, it is clear that competition policy will have an important role to play in managing the potential dark sides of AI technology for consumers, businesses that may be harmed by AI-enabled anticompetitive conduct, and the economy more broadly. This will involve ensuring regulatory frameworks support innovation and procompetitive AI applications without unnecessary burdens or competition barriers, ensuring enforcers have the right tools to enforce competition laws, and co-ordinating across regulatory and policy disciplines.


[3] Accenture (2017), How AI Boosts Industry Profits and Innovation, https://www.accenture.com/fr-fr/_acnmedia/36dc7f76eab444cab6a7f44017cc3997.pdf.

[22] Assad, S. et al. (2020), “Algorithmic Pricing and Competition: Empirical Evidence from the German Retail Gasoline Market”, CESifo Working Paper No. 8521, https://www.econstor.eu/bitstream/10419/223593/1/cesifo1_wp8521.pdf.

[51] Australian Competition & Consumer Commission (2019), Digital Platforms Inquiry: Final Report, https://www.accc.gov.au/publications/digital-platforms-inquiry-final-report.

[18] Autorité de la concurrence and Bundeskartellamt (2019), Algorithms and Competition, https://www.autoritedelaconcurrence.fr/sites/default/files/algorithms-and-competition.pdf.

[49] Autorité de la concurrence and Bundeskartellamt (2016), Competition Law and Data, https://www.bundeskartellamt.de/SharedDocs/Publikation/DE/Berichte/Big%20Data%20Papier.pdf?__blob=publicationFile&v=2.

[9] Babina, T. et al. (2020), Artificial Intelligence, Firm Growth, and Industry Concentration, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3651052.

[7] Bajgar, M., C. Criscuolo and J. Timmis (2021), Intangibles and Industry Concentration: Supersize Me, OECD, https://one.oecd.org/document/DSTI/CIIE(2019)13/REV1/en/pdf.

[24] Berlingieri, G. et al. (2020), Last but not least: laggard firms, technology diffusion and its structural and policy determinants, OECD, https://www.oecd-ilibrary.org/science-and-technology/laggard-firms-technology-diffusion-and-its-structural-and-policy-determinants_281bd7a9-en.

[32] Bundeskartellamt (2019), Case Summary: Facebook, Exploitative business terms pursuant to Section 19(1) GWB for inadequate data processing, https://www.bundeskartellamt.de/SharedDocs/Entscheidung/EN/Fallberichte/Missbrauchsaufsicht/2019/B6-22-16.pdf?__blob=publicationFile&v=4.

[35] Bundeskartellamt (2018), Press Release: Lufthansa tickets 25-30 per cent more expensive after Air Berlin insolvency – “Price increase does not justify initiation of abuse proceeding”, https://www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2018/29_05_2018_Lufthansa.html.

[6] Calligaris, S., C. Criscuolo and L. Marcolin (2018), “Mark-ups in the digital era”, OECD Science, Technology and Industry Working Papers, No. 2018/10, https://www.oecd-ilibrary.org/industry-and-services/mark-ups-in-the-digital-era_4efe2d25-en.

[8] Calvino, F. and C. Criscuolo (2019), “Business dynamics and digitalisation”, OECD Science, Technology and Industry Policy Papers, No. 62, OECD Publishing, Paris, https://dx.doi.org/10.1787/6e0b011a-en.

[25] Calvino, F., C. Criscuolo and R. Verlhac (2020), Declining business dynamism: structural and policy determinants, OECD, https://www.oecd-ilibrary.org/science-and-technology/declining-business-dynamism_77b92072-en.

[16] Competition and Markets Authority (2021), Algorithms: How they can reduce competition and harm consumers, https://www.gov.uk/government/publications/algorithms-how-they-can-reduce-competition-and-harm-consumers/algorithms-how-they-can-reduce-competition-and-harm-consumers#theories-of-harm.

[50] Competition and Markets Authority (2020), Online platforms and digital advertising: Market study final report, https://assets.publishing.service.gov.uk/media/5efc57ed3a6f4023d242ed56/Final_report_1_July_2020_.pdf.

[43] Competition and Markets Authority (2018), Designing and Testing Effective Consumer-facing Remedies: Background Note for OECD Competition Committee Working Party No. 3, OECD, https://one.oecd.org/document/DAF/COMP/WP3(2018)2/en/pdf.

[13] Competition and Markets Authority (2018), Pricing algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/746353/Algorithms_econ_report.pdf.

[15] Competition and Markets Authority (2016), Decision of the Competition and Markets Authority: Online Sales of Posters and Frames - Case 50223, https://assets.publishing.service.gov.uk/media/57ee7c2740f0b606dc000018/case-50223-final-non-confidential-infringement-decision.pdf.

[30] Costa, E. and D. Halpern (2019), The behavioural science of online harm and manipulation, and what to do about it, The Behavioural Insights Team, https://www.bi.team/wp-content/uploads/2019/04/BIT_The-behavioural-science-of-online-harm-and-manipulation-and-what-to-do-about-it_Single.pdf.

[27] Crémer, J., Y. de Montjoye and H. Schweitzer (2019), Competition policy for the digital era, https://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf.

[21] Deng, A. (2018), “What Do We Know About Algorithmic Tacit Collusion?”, Antitrust, Vol. 33/1, https://awards.concurrences.com/IMG/pdf/fall18-denga-published.pdf?46593/5e118cee876a9c4a37e32bc8c9451a34c54e18d9.

[48] Digital Competition Expert Panel (2019), Unlocking digital competition: Report of the Digital Competition Expert Panel, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf.

[26] Dolmans, M. (2017), Artificial Intelligence and the future of competition law – further thoughts, Cleary Gottlieb Steen & Hamilton LLP, https://www.coleurope.eu/sites/default/files/uploads/event/dolmans.pdf.

[45] European Commission (2020), Press Release: European Commission publishes ranking guidelines under the P2B Regulation to increase transparency of online search results, https://ec.europa.eu/digital-single-market/en/news/european-commission-publishes-ranking-guidelines-under-p2b-regulation-increase-transparency.

[36] European Commission (2019), Hub-and-spoke arrangements – Note by the European Union for the OECD Competition Committee Roundtable Discussion, https://one.oecd.org/document/DAF/COMP/WD(2019)89/en/pdf.

[29] European Commission (2019), Press Release: Antitrust: Commission opens investigation into possible anti-competitive conduct of Amazon, https://ec.europa.eu/commission/presscorner/detail/en/ip_19_4291.

[2] European Commission (2017), Final report on the E-commerce Sector Inquiry, https://ec.europa.eu/competition/antitrust/sector_inquiry_final_report_en.pdf.

[42] European Commission (2017), Press Release: Antitrust: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service, https://ec.europa.eu/commission/presscorner/detail/en/IP_17_1784.

[38] Ezrachi, A. and M. Stucke (2017), Algorithmic Collusion: Problems and Counter-Measures: Note for the Competition Committee Roundtable on Algorithms and Collusion, OECD, https://one.oecd.org/document/DAF/COMP/WD(2017)25/en/pdf.

[20] Ezrachi, A. and M. Stucke (2017), “Artificial Intelligence & Collusion: When Computers Inhibit Competition”, University of Illinois Law Review, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2591874.

[11] Fletcher, A. (2020), Digital competition policy: Are ecosystems different?, https://one.oecd.org/document/DAF/COMP/WD(2020)96/en/pdf.

[37] Gal, M. (2017), Algorithmic-facilitated Coordination: Note for OECD Competition Committee Roundtable on Algorithms and Collusion, OECD, https://one.oecd.org/document/DAF/COMP/WD(2017)26/en/pdf.

[4] Gal, M. and N. Elkin-Koren (2017), “Algorithmic Consumers”, Harvard Journal of Law & Technology, Vol. 30, https://jolt.law.harvard.edu/assets/articlePDFs/v30/30HarvJLTech309.pdf.

[28] Majority Staff, Subcommittee on Antitrust, Commercial and Administrative Law (2020), Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations, https://judiciary.house.gov/uploadedfiles/competition_in_digital_markets.pdf?utm_campaign=4493-519.

[46] Mehra, S. (2016), “Antitrust and the Robo-Seller: Competition in the Time of Algorithms”, Minnesota Law Review, Vol. 100, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2576341.

[23] OECD (2020), Abuse of dominance in digital markets: Background note by the Secretariat, http://www.oecd.org/daf/competition/abuse-of-dominance-in-digital-markets-2020.pdf.

[44] OECD (2020), Consumer Data Rights and Competition: Background note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2020)1/en/pdf.

[10] OECD (2020), Roundtable on Conglomerate Effects of Mergers - Background note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2020)2/en/pdf.

[47] OECD (2019), Practical approaches to assessing digital platform markets for competition law enforcement: Background note by the Secretariat for the Latin American and Caribbean Competition Forum, https://one.oecd.org/document/DAF/COMP/LACF(2019)4/en/pdf.

[17] OECD (2019), Roundtable on Hub-and-Spoke Arrangements: Background Note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2019)14/en/pdf.

[34] OECD (2019), Vertical Mergers in the Technology, Media and Telecom Sector: Background note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2019)5/en/pdf.

[33] OECD (2018), Considering non-price effects in merger control: Background note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2018)2/en/pdf.

[31] OECD (2018), Personalised Pricing in the Digital Era: Background note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2018)13/en/pdf.

[1] OECD (2017), Algorithms and Collusion: Competition Policy in the Digital Age, https://www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm.

[5] OECD (2016), Big data: Bringing competition policy to the digital era: Background paper by the Secretariat, https://one.oecd.org/document/DAF/COMP(2016)14/en/pdf.

[12] OECD (2015), Serial offenders: Why some industries seem prone to endemic collusion - Background Paper by the Secretariat, https://one.oecd.org/document/DAF/COMP/GF(2015)4/en/pdf.

[41] OECD (2013), Roundtable on Ex Officio Cartel Investigations and the Use of Screens to Detect Cartels: Background note by the Secretariat, https://one.oecd.org/document/DAF/COMP(2013)14/en/pdf.

[40] OECD (2001), Policy Brief: Using Leniency to Fight Hard Core Cartels, http://www.oecd.org/daf/competition/1890449.pdf.

[19] Petit, N. (2017), Antitrust and Artificial Intelligence: A Research Agenda, https://orbi.uliege.be/bitstream/2268/235346/1/Artificial%20Intelligence%20Juin%202017.pdf.

[14] US Department of Justice (2015), Press Release: Former E-Commerce Executive Charged with Price Fixing in the Antitrust Division’s First Online Marketplace Prosecution, http://www.justice.gov/atr/public/press_releases/2015/313011.docx.

[39] Vestager, M. (2017), Speech at Bundeskartellamt 18th Conference on Competition - Algorithms and competition, https://wayback.archive-it.org/12090/20191130155750/https://ec.europa.eu/commission/commissioners/2014-2019/vestager/announcements/bundeskartellamt-18th-conference-competition-berlin-16-march-2017_en.


← 1. Gains enjoyed by consumers of a product when more consumers use that product (OECD, 2019, p. 6[47]).

← 2. Predatory pricing refers to a strategy in which a firm cuts prices in order to push competitors out of the market, and then raises prices using the market power it obtains thanks to barriers to entry that protect its position after the exit of competitors.

← 3. Margin squeeze strategies arise when a firm reduces its rivals’ margins, specifically when it has market power upstream (i.e. over an input) and competes with rivals downstream, or when it has market power downstream (e.g. over retail distribution) and competes with rivals upstream.

← 4. The proposed EU Digital Markets Act package imposes specific rules on digital gatekeepers: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en. In the UK, a Digital Markets Unit will enforce a code of conduct applicable to dominant digital firms: https://www.gov.uk/government/news/new-competition-regime-for-tech-giants-to-give-consumers-more-choice-and-control-over-their-data-and-ensure-businesses-are-fairly-treated.

← 5. This is less of an issue with respect to abuses of dominance where competitor and consumer complaints help with detection.

← 6. See, for instance, the OECD Workshop on cartel screening in the digital era, https://www.oecd.org/competition/workshop-on-cartel-screening-in-the-digital-era.htm.

← 7. Find additional resources at: https://www.oecd.org/daf/competition/market-studies-and-competition.htm

← 8. Find additional resources at: http://www.oecd.org/daf/competition/data-portability-interoperability-and-competition.htm.

← 9. See, for instance, proposals by the European Union and Singapore regarding WTO Disciplines and Commitments Relating to Electronic Commerce (https://docs.wto.org/dol2fe/Pages/FE_Search/FE_S_S009-DP.aspx?language=E&CatalogueIdList=253794,253801,253802,253751,253696,253697,253698,253699,253560,252791&CurrentCatalogueIdIndex=6&FullTextHash=&HasEnglishRecord=True&HasFrenchRecord=True&HasSpanishRecord=True and https://docs.wto.org/dol2fe/Pages/FE_Search/FE_S_S009-DP.aspx?language=E&CatalogueIdList=253794, respectively).

← 10. See, for instance, reports commissioned by the European Commission (Crémer, de Montjoye and Schweitzer, 2019[27]) and UK Government (Digital Competition Expert Panel, 2019[48]).

← 11. See, for instance, studies undertaken by the Australian (Australian Competition & Consumer Commission, 2019[51]), French and German (Autorité de la concurrence and Bundeskartellamt, 2016[49]), and UK (Competition and Markets Authority, 2020[50]) competition authorities.

← 12. (Majority Staff, Subcommittee on Antitrust, Commercial and Administrative Law, 2020[28])

← 13. The Digital Markets Act package, which imposes specific rules on digital gatekeepers: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/digital-markets-act-ensuring-fair-and-open-digital-markets_en

← 14. Including the establishment of a Digital Markets Unit to enforce a code of conduct applicable to dominant digital firms: https://www.gov.uk/government/news/new-competition-regime-for-tech-giants-to-give-consumers-more-choice-and-control-over-their-data-and-ensure-businesses-are-fairly-treated.

Metadata, Legal and Rights

This document, as well as any data and map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area. Extracts from publications may be subject to additional disclaimers, which are set out in the complete version of the publication, available at the link provided.

© OECD 2021

The use of this work, whether digital or print, is governed by the Terms and Conditions to be found at http://www.oecd.org/termsandconditions.