

Machine Learning and AI as business tools:

Threat or blessing for competition?

CCP 15th Annual Conference 6th – 7th June 2019

Welcome to this year’s CCP Conference on ‘Machine Learning and AI as business tools: Threat or blessing for competition?’

The competition and consumer policy questions surrounding the increasing digitisation of economic activity are reaching a new height of policy focus, as suggested by recent reports in the UK such as HM Treasury's Digital Competition Expert Panel report, as well as policy reviews in other jurisdictions such as Australia, France, Germany and the EU. Whatever policy changes may occur in the future, they should be informed by researchers, policymakers and business, and companies in all sectors need to stay informed to ensure that they are aware of future risks and can comply with the law. This conference will bring you to the leading edge of this research and policy discussion.

We hope this conference will contribute to the emerging debate at the interface of the social and computer sciences, ultimately helping us better understand the potential blessings and threats that AI may pose for competition.


Session 1: Algorithms, AI and Cartel Risks

Vincenzo Denicolò, Professor of Economics at the University of Bologna, used his talk, 'Artificial Intelligence, Algorithmic Pricing and Collusion', to explain how pricing algorithms are increasingly supplanting human decision making in real marketplaces. The key issue, according to him, is the risk of collusion among AI pricing algorithms. He presented different views on the risk of algorithmic collusion, commenting that AI outperforms humans in most areas and may also be able to do so in price setting.

He explored the competition policy debate on the possible consequences of this development through his experimental work on AI-powered pricing algorithms in computer-simulated marketplaces, studying the interaction among a number of Q-learning algorithms in a workhorse oligopoly model of price competition with Logit demand and constant marginal costs.
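
For reference, the workhorse Logit specification in this literature (the exact parameterisation used in the simulations described here is an assumption on our part) gives firm $i$'s demand as

$$q_i = \frac{e^{(a_i - p_i)/\mu}}{\sum_{j=1}^{n} e^{(a_j - p_j)/\mu} + e^{a_0/\mu}},$$

where $a_i$ captures product quality, $a_0$ the value of the outside option, and $\mu$ the degree of horizontal differentiation; with constant marginal cost $c$, firm $i$'s per-period profit is $(p_i - c)\,q_i$.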

He further stated that in this setting, the algorithms consistently learn to charge supra-competitive prices, without communicating with one another. The high prices are sustained by classical collusive strategies with a finite phase of punishment, followed by a gradual return to cooperation. This finding is robust to asymmetries in cost or demand and to changes in the number of players.


Can machine learning algorithms learn to collude by themselves? Timo Klein claims that this might be the case. In a very classical duopoly setting, two firms compete sequentially in prices. What is new is that it is now the artificial intelligence itself, that is, the algorithm, that sets prices, in this case through a Q-learning process. This is not a state-of-the-art machine learning method, quite the contrary. Yet it could be argued that if a simple algorithm such as this can produce coordination, a more complex one might be even better at it.

The simulation works as follows: firms set prices and generate a given level of profits. After that, they update their experience and move to the next period. The more they play, the more they learn. As in classical collusion games, firms learn that there are benefits to undercutting rivals, but also high profits from sustaining high prices: the collusive trade-off.

Various specifications and parameters can be adjusted in the learning process, but the results are clear: firms rapidly learn to play best responses, and average prices are far above competitive ones. That is, without any programmed collusion or coordination, machine learning is able, to some extent, to collude.
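
To make the mechanics concrete, below is a minimal Q-learning pricing duopoly in Python: a toy sketch in the spirit of the simulations discussed in this session, not the authors' actual code. The price grid, demand function and learning parameters are all illustrative assumptions, and whether the agents settle above the competitive price is sensitive to these choices.

```python
import numpy as np

# Toy Q-learning pricing duopoly. Price grid, demand and learning
# parameters are illustrative assumptions, not any paper's calibration.
rng = np.random.default_rng(1)
PRICES = np.linspace(0.0, 1.0, 6)   # discrete price grid
N = len(PRICES)
ALPHA, GAMMA = 0.1, 0.95            # learning rate, discount factor

def profits(i, j):
    """Homogeneous-good duopoly: the cheaper firm serves demand 1 - p; ties split it."""
    pi, pj = PRICES[i], PRICES[j]
    demand = max(0.0, 1.0 - min(pi, pj))
    if pi < pj:
        return pi * demand, 0.0
    if pi > pj:
        return 0.0, pj * demand
    return pi * demand / 2, pj * demand / 2

# One Q-table per firm: rows index the state (last period's price pair),
# columns index this period's price choice.
Q = [np.zeros((N * N, N)) for _ in range(2)]
state = 0
for t in range(200_000):
    eps = np.exp(-1e-4 * t)  # exploration probability, decaying over time
    acts = [int(rng.integers(N)) if rng.random() < eps else int(np.argmax(Q[k][state]))
            for k in range(2)]
    rewards = profits(acts[0], acts[1])
    new_state = acts[0] * N + acts[1]
    for k in range(2):   # standard Q-learning update, one table per firm
        target = rewards[k] + GAMMA * Q[k][new_state].max()
        Q[k][state, acts[k]] += ALPHA * (target - Q[k][state, acts[k]])
    state = new_state

print("Greedy prices after learning:",
      [float(PRICES[int(np.argmax(Q[k][state]))]) for k in range(2)])
```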


Despite some recent literature claiming that algorithmic pricing could be a threat to competition, Professor Kuhn argues that we in fact have very little evidence to sustain this hypothesis. Most of the literature has focused on tacit collusion, that is, when firms set prices above competitive levels without explicitly agreeing on them. Lab experiments with human subjects have provided evidence supporting this idea, but it is well known that tacit collusion is very hard to maintain in repeated games. Even in a lab setting, tacit collusion becomes virtually impossible with more than two individuals in the absence of communication. So the relevant question we should be asking is: to what extent is machine learning different from human learning? Would algorithms become more sophisticated than us at coordinating without explicit communication? We have seen in this session some simple simulations in which machine learning algorithms do set supra-competitive prices without any explicit coordination. These are very simple settings using far-from-state-of-the-art learning processes, and so it could be argued that more sophisticated algorithms could be even better at colluding. However, Professor Kuhn disagrees, arguing that recent theoretical literature has pointed out precisely that rationality constraints facilitate collusion.


Ariel Ezrachi, Professor of Competition Law at the University of Oxford, took this opportunity to explain how algorithmic collusion has the potential to transform future markets, leading to higher prices and harm to consumer welfare.

He explained that algorithmic collusion may remain undetected and unchallenged, in particular when it is used to facilitate conscious parallelism. The risks posed by such undetected collusion have been debated within antitrust circles in Europe, the US and beyond.

He further explored the rise of algorithmic tacit collusion, responding to those who downplay it by pointing to the gap between the law and this particular economic theory. He highlighted the limitations of the legal tools, which generally assume tacit collusion to be possible, without communication, under certain market conditions, and therefore not prohibited under Article 101 TFEU.

The importance of communication in proving collusion was highlighted, and the prospect of companies manipulating algorithms to benefit themselves was brought to light, stressing the need to devise rules to prevent such actions. He felt that hub-and-spoke arrangements, in which competitors rely on a common algorithm provider, will be the next area under competition law scrutiny. Lastly, he commented that the algorithm alone will not be able to lead to tacit collusion and that the debate starts only where the other conditions for collusion are also present.


Session 2: Algorithmic Personalisation

We all know that companies are using big data to personalise their products and services for their users. But what about government? Could policy decisions use big data as well? Think of an extreme case: car manufacturers collect real-time driving data, which could potentially be used to design personalised braking systems. But the government could also use this data to design better speed limits (maybe even personalised speed limits). What are the implications of this data sharing?

In this paper, Professor Michal Gal focuses on the interplay between different types of personalisation and argues that personalised laws could create chilling effects that might distort data collection and data-driven innovation. It is well known that users experience a 'privacy paradox': when asked how much we value our privacy, we tend to answer that we value it a lot, yet our decisions regarding privacy do not match these answers. Now, enter the government into the equation. Would users become more reluctant to share their data? And if so, is this going to compromise data-driven innovation? The potential to use big data for better policy is of course promising, but it must be noted that governance by data will create inherent tensions with data markets.


Wynne Lam asks about the consequences of companies being able to profile consumers. On the one hand, the more information they have about a user, the more they can personalise that user's experience and offer more suitable services or products. On the other hand, in the limit, companies could perfectly price discriminate against each one of us.

This paper tries to shed some light on this debate by constructing a theoretical model in which firms are able to profile consumers. The setting is as follows: two firms producing a homogeneous good are able to identify the valuations of a given group of consumers. Their profiling abilities are asymmetric, meaning that one firm can profile more consumers than the other. Profiling is imperfect, and the firms compete in a Bertrand setting. In equilibrium, firms charge uniform prices to non-profiled consumers and price discriminate among profiled consumers. Firms also enjoy market power in equilibrium.

Regarding the effect of profiling on consumer welfare, the model cleanly identifies the personalisation trade-off. If firms become better at profiling consumers, this is likely to increase uniform prices, and some consumers will lose. Yet some consumers will win as a result of the increased profiling: for example, people with low valuations will start buying thanks to personalised pricing, and the market will expand.


We have all used comparison websites at some point: to check flight prices, to change our insurance plan, or to compare the specifications of a mobile phone. And of course, we would like to think that these platforms are objective. Once we enter our search, we obtain a ranking of options, which could be ordered by price, user reviews, and so on. Yet sometimes the default ranking, that is, the first one to appear after the search, does not follow any objective measure of classification. The question that arises is therefore: are these platforms biasing default rankings? Amelia Fletcher's work looking at one of the main hotel booking websites revealed that the platform's margins were taken into consideration by the algorithm that created the default list (together with other factors such as price and user reviews). That is, one of the reasons a given hotel appeared near the top of your search list was not its quality relative to price, location, etc., but the fact that the platform was making a bigger margin on that hotel than on others.
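
As a stylised illustration of this mechanism, the scoring rule below blends consumer-facing factors with the platform's own commission. The hotels, weights and formula are entirely hypothetical: a sketch of the incentive at work, not any platform's actual algorithm.

```python
# Hypothetical default-ranking score blending consumer-facing factors with
# the platform's own commission; hotels, weights and formula are made up
# purely to illustrate the incentive described above.
hotels = [
    # (name, user review 0-10, price per night, platform commission rate)
    ("Hotel A", 9.1, 120.0, 0.10),
    ("Hotel B", 8.2, 100.0, 0.25),
    ("Hotel C", 8.9, 110.0, 0.18),
]

def default_score(review, price, margin, w_review=1.0, w_price=0.02, w_margin=8.0):
    """Higher is better; a positive w_margin lets the platform's cut tilt the list."""
    return w_review * review - w_price * price + w_margin * margin

for name, review, price, margin in sorted(
        hotels, key=lambda h: default_score(*h[1:]), reverse=True):
    print(name, round(default_score(review, price, margin), 2))
# Hotel B tops the list despite the lowest review score, because it pays
# the platform the highest commission.
```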

Obtaining an objective ranking is not easy. Yet we would like to know that the factors deciding ranking positions are, at least, aligned with the interests of consumers. And so the relevant policy question is: to what extent should we regulate default rankings? On the one hand, we know that consumers tend to choose default options and might need to be protected. On the other hand, there are potential benefits to small players and new entrants who might otherwise struggle to gain good rankings.


Session 3: Privacy and Competition

Paul Bernal, Senior Lecturer in IT, IP and Media Law at the University of East Anglia, outlined that both governments and individuals misunderstand the importance and collective nature of privacy and, as a result, often underestimate its value.

Paul began by outlining the concept of privacy outside a competition context, presenting 'privacy' not only as an individual human right but also as a collective one. He drew on examples of profiling, targeting and other uses of collected data to illustrate how consumers comprehend their privacy rights and their value, finding that the public are more concerned with privacy when there is a challenge to their individual right.

In his consideration of the implications, Paul then deconstructed the recent open letter by Mark Zuckerberg in support of the regulation of platforms such as Facebook. Paul argued this letter was an attempt to maintain the status quo of underestimating the value of privacy.

Finally, Paul suggested competition policy could be used to break down digital platform conglomerates to protect the public, as both citizens and consumers, against societal harms such as price discrimination, the undermining of democracy and the spreading of disinformation.


Gregor Langus, Senior Vice President at Compass Lexecon, outlined how competition in online markets requires digital platforms to improve their services by profiling their users in order to provide a more personalised experience with targeted advertisements.

Gregor outlined the economic assessment of the effects of data collection on consumer welfare. To do so, he illustrated the trade-offs calculated by consumers and platforms in their decisions to strategically release and collect data, respectively. He also drew on a wide range of surveys with similar findings, indicating that consumers are increasingly concerned about their privacy and its regulation, and for good reason.

By examining how digital platforms use data (and the vast revenue collected from that use), Gregor assessed the belief that competition in this market requires more data collection about internet users, which, in turn, necessitates a greater invasion of privacy. On the question of whether competition in a digital era therefore results in a 'race to the bottom' for privacy, he highlighted the external factors restricting platforms' use of data, encompassing concerns such as consumer backlash, consumers refusing to share their data, and the risk of breaches.

Gregor concluded that privacy may be a parameter of competition if it is verifiable and, as a result, competition may be able to mitigate the risk of a 'race to the bottom'.


Elias Deutscher, Lecturer in Competition Law and IP at the University of East Anglia, examined the role of privacy in digital merger control, stating that privacy, as a non-pecuniary parameter of competition, is often overlooked when the adverse effects of mergers are determined. Elias first questioned how privacy-related harms can be integrated into merger analysis, presenting privacy's potential role as an element of product quality, an element of consumer choice, and a non-monetary price in zero-price markets.

Turning to how we can measure privacy in monetary terms, Elias pointed to 'conjoint analysis' as a way to confer a pecuniary valuation on privacy. He considered that willingness-to-pay studies seeking to measure privacy in monetary terms can be valuable, in particular because they better address the 'privacy paradox' by collecting observed preferences rather than stated preferences.
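
To illustrate the idea, here is a minimal sketch of how a willingness-to-pay figure can be backed out of choice data with a logit model. The data are simulated and the attribute coding is our own assumption, not the studies Elias discussed.

```python
import numpy as np
import statsmodels.api as sm

# Simulate hypothetical conjoint-style choice data: each profile has a
# monthly price and a binary privacy feature (1 = data not shared).
rng = np.random.default_rng(0)
n = 500
price = rng.uniform(0, 10, n)        # monthly price in GBP
privacy = rng.integers(0, 2, n)      # privacy-friendly feature present?
# True preferences: dislike price, value privacy (true WTP = 2.5/0.8 = 3.125)
utility = -0.8 * price + 2.5 * privacy + rng.logistic(size=n)
chosen = (utility > 0).astype(int)   # 1 if the respondent accepts the profile

X = sm.add_constant(np.column_stack([price, privacy]))
model = sm.Logit(chosen, X).fit(disp=0)
b_price, b_privacy = model.params[1], model.params[2]

# Willingness to pay for privacy: the marginal rate of substitution
# between the privacy attribute and price.
wtp = -b_privacy / b_price
print(f"Estimated WTP for privacy: about GBP {wtp:.2f} per month")
```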

Elias concluded that these studies can be relied upon in order to facilitate privacy’s use as a parameter in competition analyses, in particular merger assessments.


Session 4: AI for Enforcement, Monitoring and Evaluation

In his talk, Stefan Hunt, Chief Data and Technology Insights Officer at the Competition and Markets Authority, discussed how both the private and, more recently, the public sector have seen increased engagement with machine learning and artificial intelligence tools. Digitisation in this way, he emphasised, has enhanced the ability of authorities to identify problem areas effectively as well as to enforce regulations. He provided examples, including in merger analysis and web scraping, that help extract greater insights for decision making. Data science projects can help develop the capability to analyse harms arising with new data, improve efficiency by automating internal processes, build richer evidence for cases, create visualisations and communications to aid decision makers, and understand firms' technology. Stefan considers that while both domestic and international bodies are affected by digitisation, different regulatory agencies within the UK can be expected to be affected differently, depending on their staffing and budgets, their data needs, the sectors they work with, and so on.
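
By way of illustration, a price-monitoring scraper of the kind mentioned can be only a few lines of Python; the URL and page structure below are placeholders, not any site an authority actually monitors.

```python
import requests
from bs4 import BeautifulSoup

# Illustrative price-monitoring scraper; the URL and CSS classes are
# hypothetical placeholders, not a real monitored site.
resp = requests.get("https://www.example.com/products", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

for item in soup.select("div.product"):
    name = item.select_one("span.name")
    price = item.select_one("span.price")
    if name and price:
        print(name.get_text(strip=True), price.get_text(strip=True))
```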

In his concluding remarks, he restated the immense opportunity this presents: it can strengthen the analytical capabilities of regulatory agencies and enable collaborative initiatives across national and international agencies.


Peter Ormosi, Professor of Competition Economics and CCP member, took the opportunity to introduce the audience to the evolution in the scope of machine learning (ML) over the past five years: from the need to build everything from scratch to today's ready access to deep learning frameworks for designing neural networks, pre-trained word embeddings, and more.

Going further, he spoke about the role of supervised learning in improving the quality of data collection and subsequent analysis. He provided examples where modern language processing can open up under-researched themes, such as transforming unstructured text reflecting public opinion on the internet into material useful in economic and legal research. Working through the steps involved in data collection, he emphasised how much human work ML has automated. He next mentioned another ML tool, data augmentation, which is useful when not enough data is available, for example synonym replacement in incomplete text data. Finally, he discussed some limitations and issues surrounding the interpretability of ML models.
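
As a concrete example of the augmentation technique mentioned, the sketch below swaps words for synonyms from a small hand-made dictionary; in practice one might draw synonyms from WordNet or embedding neighbours, and the dictionary here is purely illustrative.

```python
import random

# Minimal synonym-replacement data augmentation for text. The synonym
# dictionary is a toy stand-in for a real lexical resource.
SYNONYMS = {
    "price": ["cost", "charge"],
    "increase": ["rise", "growth"],
    "firm": ["company", "business"],
}

def augment(sentence: str, n_replacements: int = 2, seed: int = 0) -> str:
    """Return a new sentence with up to n_replacements words swapped for synonyms."""
    rng = random.Random(seed)
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if w.lower() in SYNONYMS]
    for i in rng.sample(candidates, min(n_replacements, len(candidates))):
        words[i] = rng.choice(SYNONYMS[words[i].lower()])
    return " ".join(words)

print(augment("The firm announced a price increase"))
# e.g. "The company announced a price rise"
```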

In his final remarks, he reaffirmed that, in recognition of the potential that data-driven research holds, the application of ML in social science research continues to expand.


Session 5: Apps

Michael Kummer, Lecturer in Economics at the University of East Anglia, discussed recent research with his co-authors in Germany on the relationship between market power and personal data collection in the mobile app market. As app user data are typically obtained by app developers and then shared with other parties, policymakers and regulators have expressed growing concern about privacy. Moreover, the app market is considered highly concentrated. The main purpose of the research is therefore to examine whether developers with higher market power tend to collect more data. By collecting and analysing a dataset of more than 1.5 million applications in 5,000 submarkets over two years, they found both cross-sectional and panel evidence that higher market concentration leads to more data collection. Michael concluded by raising the potential policy concern that data-driven mergers may deter entry, enhance market power over data, and raise privacy risks for consumers.


Franco Mariuzzo, Lecturer in Econometrics at the University of East Anglia, presented a joint paper that highlights the impact of mobile application quality on platform competition in the tablet PC market. The tablet PC market is peculiar in that its two dominant platforms have different structures: Apple is vertically integrated, controlling both device production and the app store under iOS, while Google only manages the Android app store, with devices produced by many other manufacturers. The paper develops both a theoretical and an empirical framework to study competition between the two platforms in the presence of the indirect externality generated by application quality, which is assumed to be exogenous. The key findings emphasised by Franco are that app quality has a larger indirect effect on Apple's profit than on Android manufacturers' profits, and that the former enjoys more of the benefits than the latter when consumers' incomes increase. Franco suggested, as a potential direction for future research, studying developers' strategic decisions over application quality and how app stores can enhance it.


Day Two

Keynote Speaker, Tommaso Valletti

Competition Policy in a Digital Era: A view from Europe

DG Competition Chief Economist, Tommaso Valletti, opened the second day by addressing the core topic of this 15th CCP conference: what are the challenges of competition policy in the digital economy? We are facing extremely concentrated markets (with some companies' market shares above 90%), no entry, network effects, and behaviourally biased consumers. As economists, we know that these elements do not characterise a competitive market, but rather the contrary: they are all part of a perfect cocktail of market failure. According to Dr. Valletti, the question is not whether we intervene in these markets, but how. From a competition authority's perspective, resources are limited, and traditional areas of intervention, like merger control, have become irrelevant by design: there is no way to monitor every time a big tech firm acqui-hires a ten-people-in-a-garage company. Moreover, one of the main areas in which competition authorities will have to intervene in the future, according to Dr. Valletti, will be privacy, and traditional antitrust tools might be completely inadequate for this purpose. Having already worked on various cases in the sector, such as the recent Google Shopping case, he finished his talk by encouraging the audience to identify market failures within digital markets and, beyond that, to design remedies for them.


Session 6: AI in Practice (Presentations and Panel)

The session began with Amelia Fletcher introducing the context for the discussion by talking about the importance of understanding AI in practice.

Imran Gulamhuseinwala, Trustee, Open Banking Implementation Entity (OBIE), introduced the concept of open banking, which refers to the use of Application Programming Interfaces (APIs) that enable third-party developers to build applications and services around financial institutions, with an emphasis on the increasing value of data. He commented that AI in financial services is constantly changing and will develop considerably. He concluded by saying that such a framework could enable better price optimisation, with credit scores computed in the best possible manner, and that open banking has huge potential to succeed.
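
For a sense of what this looks like in practice, here is a minimal sketch of a third-party app reading account data through an open-banking-style REST API; the base URL, endpoint, token and response shape are hypothetical placeholders rather than the actual OBIE specification.

```python
import requests

# Illustrative third-party call to an open-banking-style account API.
# The base URL, endpoint, token and response shape are hypothetical
# placeholders, not the actual OBIE Read/Write specification.
API_BASE = "https://bank.example.com/open-banking/v3.1"
ACCESS_TOKEN = "user-consented-oauth-token"  # would come from an OAuth2 consent flow

def list_accounts():
    resp = requests.get(
        f"{API_BASE}/aisp/accounts",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

for account in list_accounts().get("Data", {}).get("Account", []):
    print(account.get("AccountId"), account.get("Currency"))
```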


Peter Wells, Director of Public Policy, Open Data Institute, introduced the concept of open data (data that anyone can use) and described the Open Data Institute's role in working with companies and governments to build an open, trustworthy data ecosystem. He explained how increased access to data creates value for business, and that there are new approaches to increasing that access.


Derek McAuley, Director of Horizon at the University of Nottingham, took the opportunity to explain the different dimensions of AI and data. He talked about data mobility and facilitating the movement of data, with emphasis on why the Furman Report focuses on it. He considered data interoperability not a burden but a requirement. He also commented that forcing companies to share personal data might do more harm than good, and highlighted the role of competition with respect to data.


Danilo Montesi, Full Professor of Database and Information Systems at the Department of Computer Science and Engineering of the University of Bologna, explained the different facets of digital platforms and the various ways they use data to conduct their business. He felt that AI compliance has become increasingly difficult, and that it depends on, and evolves with, how learning from data occurs.


Sebastian Wismer, General Policy Division, Unit for Digital Economy, Bundeskartellamt, provided a German perspective on competition enforcement in a digitalised world. He suggested that both the pro- and anti-competitive effects of algorithm use need to be considered carefully when regulating it. On collusion, he commented that the study of collusion emerging through algorithms is still at a nascent stage, and he discussed three scenarios of algorithm-driven collusion in his presentation.

The panel provided great insights into AI from the perspective of tech experts and economists.


Session 7: Online Advertising

Gabriele Rovigatti, Research Fellow at the Bank of Italy, considered sponsored search (the sale of ad space on search engines through online auctions), in particular the impact of intermediaries' concentration on the allocation of revenues. Gabriele highlighted how advertisers bid through a handful of specialised intermediaries, a practice that enhances automated bidding and data pooling. However, the concentration of intermediaries results in one intermediary representing competing advertisers and, therefore, reduced competition. Gabriele first explored issues with defining the market, with 'advertising industries' being too wide a definition but individual 'keywords' being too narrow. Instead, Gabriele suggested a meaningful thematic categorisation of competing keywords in order to identify relevant markets.

Gabriele then explored the effects of intermediary concentration in these markets and, using data from keyword bidding, demonstrated a sizeable negative relationship between platform revenue and intermediaries' HHI. This poses a risk of abuse by the platforms in an attempt to mitigate losses and raises questions about whether increased buyer power is desirable.
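
For readers unfamiliar with the measure, the Herfindahl-Hirschman Index is simply the sum of squared market shares; a quick sketch with hypothetical intermediary shares:

```python
# Herfindahl-Hirschman Index (HHI): sum of squared market shares in
# percentage points, so a monopoly scores 10,000.
def hhi(shares: list[float]) -> float:
    """shares: market shares summing to 1 (e.g. intermediaries' bid volumes)."""
    return sum((100 * s) ** 2 for s in shares)

# Hypothetical example: four intermediaries bidding on one keyword theme.
print(hhi([0.4, 0.3, 0.2, 0.1]))   # 3000.0 -> highly concentrated
print(hhi([0.25] * 4))             # 2500.0 -> less concentrated
```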


Tommaso Valletti, Chief Competition Economist at the European Commission, examined merger assessment considerations in relation to digital platforms.

After outlining how platform markets have been defined too narrowly in previous merger assessments, Tommaso put forward digital platforms as 'attention brokers': platforms that learn about their individual users and are able to send targeted advertisements only to those users. Tommaso considered that a concentration among attention brokers tightens the bottleneck, leading to higher ad prices, fewer ads being sold to entrants, and lower consumer welfare in the upstream industries. Using a model of a market in which an entrant advertiser requires advertising space in order to become known, Tommaso considered how an incumbent might purchase advertising space not to gain recognition but rather to keep out the competing entrant. Tommaso portrayed 'attention brokers' as the gatekeepers to advertising space and further explored the implications of merging digital platforms and the potential to facilitate the incumbent's ability to foreclose entrants.


Sally Broughton Micova, Lecturer in Communications, Policy and Politics at the University of East Anglia, examined the market for video advertising, in which audiovisual media service providers, broadcasters and video-sharing platforms compete for advertising budgets. Sally outlined attempts by European policymakers to 'level the playing field' among competitors by evening out the qualitative rules and consumer protections for advertising.

Sally showed a comprehensive map of the players in the relevant market and illustrated three examples in which data can be used: in targeting, in strategy, and in reporting KPI progress. Drawing on quotes from interviews with players in the market, Sally illustrated that demand-side rebates and discounts based on the size and longevity of contracts, as seen in traditional media advertising, are very much still common practice; in some digital markets, they may even be bigger and less transparent.

As a result, Sally concluded that a number of sources of unevenness have not been addressed by the previous policy changes, in particular those that involve relationships mirroring those in traditional media as well as the control and ability to use data.


Session 8: How to Unlock Digital Competition (panel)

There were several insights into digital competition in this session. There were suggestions to move to an ex ante approach to assessing competition law cases, with comments both for and against, prompted by a general concurrence that the current ex post mode of regulation could be modified. There was discussion of producing a digital report for Australia along the lines of the Furman Report, with all panellists highly commending the approach taken in the Furman Report. It was felt that data has changed the way competition regulation takes place, and issues such as foreclosure, economic dependence and theories of harm were discussed.


Session 9: Regulatory Compliance Over AI (panel)

The last session was critical of the previous one, where it was felt that the law and the practical aspects of competition law had not been considered. There were concerns about the fear of technology presented earlier. It was felt that the legal aspects of AI need to be set out in a structured manner so that regulators know exactly how to regulate. Issues regarding competition law's applicability to privacy were also discussed.

