Digital convergence has been under way virtually since digital technology was invented. But the idea that networks, hardware, data and content would all come together into single entities has reached a new level with the rise of the seemingly ubiquitous ‘platform’. Originally the term described a base technology on which other software applications could run. It is the expansion into multi-platform technologies that has enabled these companies to move from discrete markets into many at once, challenging traditional actors across a range of sectors. One of the components of their success has also been the source of many of their problems: allowing competitor applications to operate on their platforms expanded their reach, but required them to treat competitors’ content in the same way as their own, much as a company that owns the train tracks must allow competitors’ trains to run on equal terms with its own. Inevitably, there have been complaints that their behaviour has been anti-competitive.
The political and social environment has also changed dramatically. Once, technology companies were everybody’s darlings. Founders were perceived to be smart, visionary young people with ambitions for a better world and values such as ‘don’t be evil’; they were the antithesis of the faceless, greedy corporation. Yet now they take the blame for everything from the decline of the high street to teenage suicides, privacy intrusions and undue electoral influence.
A number of countries around the world are looking at ‘platform power’ with the implicit aim of finding ways to curb it, and many see competition law as a means of doing so. Competition regulators have a whole series of powers at their disposal and have not been afraid to use them – Google alone has been fined €8.4 billion by the EU in the last two years, with further investigations ongoing. Traditional regulation has always been about balancing the incentive to invest and innovate against the enablement and protection of consumers. In just the last year the EU has introduced the GDPR, legislation on copyright, and framework guidance on the development of ‘ethical AI’, which is expected, at a future point, to become legislation.
The current climate is certainly one that favours greater intervention. Arguably, this can be traced back to the financial crisis of 2007/8. Public anger and a loss of confidence in the functioning of markets gave rise to more interventionist instincts, both political and regulatory. There is a paradox here. Research by the Pew Research Center and others repeatedly shows that, despite disquiet in areas like privacy, the public are broadly positive about the impact of technology on their own lives, but much less positive about its impact on wider society. Concerns about specific harms should be dealt with by a combination of pressing platforms to make changes that reduce those harms – we are already seeing, for example, commitments on the rapid take-down of harmful material – and appropriate regulation. However, in an environment that favours ‘taming the platforms’ there is a danger of losing sight of the benefits that technology has brought, and will continue to bring. What matters is that any new regulation is both targeted and effective at addressing harms, without disproportionately damaging innovation and investment.
It is the multi-faceted nature of platforms that has blurred the hitherto clear boundaries of sectoral regulation. Platforms can compete in multiple industries, and their control of data and their pricing power are not always visible to regulators focused on an individual sector. Two recent reviews in the United Kingdom have sought to address the issue. The Furman Review into digital competition proposed the creation of a Digital Markets Unit, with a remit to create a code of conduct and promote data mobility and openness. At the same time a parliamentary committee published a report entitled ‘Regulating in a digital world’. The report asserted that ‘regulation of the digital environment is fragmented, with gaps and overlaps’, and that ‘big tech companies have failed to adequately tackle online harms’. Its conclusion was that a new ‘Digital Authority’ was required to oversee regulation ‘in the digital world’. There has yet to be a response from the UK government, but it is hard to see a Digital Authority resulting in anything other than more regulation. It is striking, however, that while both reviews identified the need for new oversight bodies, and implied that new competition regulation was required, neither identified what form that regulation should take.
Whether it is required turns on two central questions: do platforms act as an enduring natural monopoly, and are the current legal and regulatory instruments sufficient to deter them from anti-competitive behaviours?
Competition law concepts can be difficult to apply in the case of platforms. In the first instance, when it comes to algorithms, the influence isn’t just commercial, but social and political. Concerns have been raised about, for example, algorithmic bias in areas such as healthcare, insurance and even housing. (‘Humans are biased, so algorithms programmed by humans will be biased.’) Campaigners have expressed concerns that algorithms designed to reduce fake news could result in the erosion of free speech. These are highly charged debates which cannot be resolved by a focus on competition regulation.
And the commercial considerations themselves are complex. At the heart of the platform lie algorithms and data, tools that are generally not well understood by policymakers. Where data is gathered through an algorithm, to what extent should the information be open to everyone, and for how long? (This is the same debate that rages over intellectual property in the life sciences sector.) Is there genuine intellectual property in the algorithm, or is it just a smokescreen to exclude fair competition? It is easy to become obsessed with ‘big data’, but the relevant question is whether the company has both data of sufficient quality and the ability to process and manipulate it into a valuable commodity. Different datasets pose different questions; some are unique and key to innovation. Others, such as those used by credit agencies, are shared, and unlikely to present a competition issue. Further complications arise when proprietary data is merged with data harvested from third-party websites linked to the platform. Each of these cases needs to be treated differently, making any overarching regulation all but impossible to conceive.
Anti-trust liability can only arise from conduct that is committed ‘intentionally’ or ‘negligently’, but how does this apply to decisions made by self-learning machines? In what proportion does responsibility lie with the programmers, the users, or the beneficiaries? The EU has made clear how it sees things: ‘an algorithm remains under a firm’s direction and control and therefore the firm is liable for the actions taken by the algorithm’. However, this has not yet been tested in a legal process. On pricing, Margrethe Vestager, the EU’s Competition Commissioner, has said that what businesses ‘can – and must – do is to ensure anti-trust compliance by design’; to build in, for example, code that prevents algorithms from automatically learning to match competitors’ price increases. Competition law, then, wields significant power but with manifest limitations.
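To illustrate what ‘compliance by design’ might mean in practice, the sketch below is a purely hypothetical example, not drawn from any regulator’s guidance or any platform’s actual systems: the compliance_guard function, the 2% threshold and the data structure are illustrative assumptions. It shows one way a pricing system could refuse to apply an automated price rise that merely tracks a competitor’s increase, referring the decision to a human instead.

```python
# Hypothetical illustration of 'anti-trust compliance by design':
# a guard that stops a pricing algorithm from automatically matching
# a competitor's price increase, escalating the decision for human review.
# The names and the 2% threshold are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class PricingDecision:
    applied_price: float   # price the system will actually use
    approved: bool         # False means the automated change was blocked
    reason: str            # audit-trail explanation of the decision


def compliance_guard(current_price: float,
                     algorithm_price: float,
                     competitor_price: float,
                     follow_threshold: float = 0.02) -> PricingDecision:
    """Block automated price rises that simply shadow a competitor's price."""
    is_rise = algorithm_price > current_price
    tracks_competitor = (
        abs(algorithm_price - competitor_price) / competitor_price
        <= follow_threshold
    )

    if is_rise and tracks_competitor:
        # Keep the existing price and leave a record for human review.
        return PricingDecision(
            current_price, False,
            "Automated increase matches competitor pricing; human review required",
        )
    return PricingDecision(algorithm_price, True, "Within compliance policy")


if __name__ == "__main__":
    # Example: the algorithm proposes 11.49 just as a competitor moves to 11.50.
    decision = compliance_guard(current_price=9.99,
                                algorithm_price=11.49,
                                competitor_price=11.50)
    print(decision)
```

The specific rule matters less than the design pattern: the constraint on what the algorithm may do is written into the system itself and leaves an auditable record of why a given price change was, or was not, applied.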
While there are examples of possibly anti-competitive behaviour by platforms, it is debatable whether the current evidence shows that they are acting as an enduring natural monopoly. One can point to companies like IBM in personal computing and Nokia in mobile telephony as once indomitable ‘natural monopolies’ laid low not by regulation but by innovation. Moreover, while new regulation may be required in the future, many argue that existing tools have so far proved sufficient to deal with the alleged transgressions. The case for substantial additional powers, or additional regulators, is hard to make on the examples seen to date. Establishing new regulatory institutions is a major step that should only be undertaken once their purpose is clearly defined.
A stronger argument is that the expertise of sectoral regulators remains critical to good regulation, and that collaboration between sectoral and competition regulators is the ideal approach. Bringing together the best qualified experts in areas where action is mutually agreed has been shown to be highly effective and un-bureaucratic. In any case, many competition authorities are in the process of improving their sectoral competences.
Competition regulation has been seen as a ‘ready-to-go’ solution to much of the criticism directed at technology firms. Arguably, it has been used too quickly and too heavily. Many of the complaints about platforms come from their competitors. Some of what social media companies do, for example, is analogous to a traditional media organisation, but much of it is different. That calls for a nuanced rather than a homogeneous approach, in a world where nuance is not in fashion.
The demands for change and accountability have been aimed principally at the ‘big 5’ technology companies, all of which are American. It is surely pertinent to ask, as some already are, why this is so and what might be done about it. As the world moves into the era of 5G and AI, the US carries a much lighter regulatory burden than Europe. China is already regarded as the world leader in many aspects of AI, and Chinese companies act with little constraint beyond the requirements of the state. Is it realistic to assume that highly regulated European companies will be able to compete on equal terms? Establishing the right environment for investment and innovation is critical, and over-regulation tends to favour larger incumbents over newer, smaller companies.
Platform regulation is widely accepted as both necessary and inevitable by much of the technology community, and by many of the platforms themselves. But it needs to have a clear purpose and clear outcomes, to be balanced and proportionate, and to allow the widest possible space for innovation and ideas. Before embarking on another round of regulation directed specifically at the technology industry, it might be wise to reflect on the adage that, as in much creative enterprise, less may well result in more.
Sources: Pew Research Center; Select Committee on Communications, ‘Regulating in a digital world’; Directorate for Financial and Enterprise Affairs, Competition Committee; Furman Review.
Article by Russell Seekins of Re:Strategy with thanks to Stephen Unger of Flint Global and Francesco Liberatore and Sam Hare of Squire Patton Boggs for their contributions.