Is much of current policy based on received wisdom and not rigorous evidence? Roslyn Layton sets out several areas where this may well be so
Many arguments for regulatory intervention in telecoms markets rest on untested assumptions. These are often ideas that make superficial or intuitive sense – and have great political potency – but don’t necessarily stand up to critical analysis. Sometimes these premises seem so obvious that we don’t bother to test them. Yet if an assumption is correct, finding the evidence to support it should be easy.
Strangely enough, some of the most important regulatory initiatives in communications are undertaken without carrying out regulatory impact assessments (RIAs). RIAs provide evidentiary support for policymaking through a defined series of steps, and they have been part of the regulatory process of most OECD countries for at least a decade.
“Testing assumptions helps address policy questions with greater care.”
It seems that regulatory professionals are so busy that they don’t have time to do impact assessments, the point of which is to gather evidence, review alternatives, define success criteria and weigh the options before implementing regulation. Because of the rushed, last-minute nature of telecoms regulation, political energy is directed toward a particular proposal, regardless of whether another option might achieve the goal more effectively.
Testing assumptions helps address policy questions with greater care and ultimately leads to better policymaking.
Admittedly, this rests on an assumption of my own: that economic and technical analysis should inform policy decisions. Seemingly consumer-friendly policies that don’t take into account the complexities of economics and engineering can have effects opposite to those intended.
So the untested assumptions are: that everyone needs low-cost access to high-speed broadband; that wireless technologies can’t compete with wireline; that innovation requires an open or neutral internet; that a ‘virtuous circle’ drives internet innovation; and that things are better somewhere else.
Debates over telecoms policy are necessary to the wellbeing and prosperity of any country. Sound telecoms policy can benefit users tremendously while bad ideas can be terribly costly. At its best, telecoms policy can help lift the poorest and least fortunate among us to an improved quality of life, afford unparalleled access to education, health and other essential services, and create platforms for expression and enterprise. Few, if any, other technologies or industries have the potential to create so much good for so many.
It is not surprising that these assumptions tap into deep currents in the popular psyche. The questions at issue in telecoms policy reflect values at the core of democracy, social commitments to equality and universal access, and concerns about the control of information. The intuitive appeal of these arguments ensures that they find substantial support among well-intentioned legislators, regulators, and much of the public. But intuitive appeal often leads analysis astray.
Policymakers need the intellectual courage and fortitude of a scientist (or a Sherlock Holmes) when it comes to testing assumptions.
The first premise is that everyone needs low-cost access to high-speed broadband to take advantage of essential applications for education, health, government, and other social services. This assumption gives rise to several related policy prescriptions: ensuring the availability of service everywhere (universal service); ensuring that service is either low-cost or subsidised for those who may not be able to afford access; ensuring that at least one carrier offering such service is available to every consumer (a ‘carrier of last resort’); and imposing various service-level guarantees and quality of service requirements on every carrier.
The idea of universal service may have grown out of ensuring basic telephone access, but it is worth questioning whether it is necessary, or even desirable, that every broadband technology – whether fibre, coaxial cable, wireless voice, fixed and mobile wireless data, satellite, or even copper – must comply with such requirements. Indeed, some of these technologies are better suited to voice, others to video, to downloading large amounts of data, or to playing video games.
Some of these technologies also serve social commitments better or worse than others: mobile wireless, for instance, is great in that you can bring your connection to emergency services wherever you go; but it is problematic in that it can be difficult for those emergency services to determine your location should you need them to find you.
Moreover, emergency, employment, health, government and e-commerce applications don’t require high speeds. Indeed, ensuring that an application and its content are designed efficiently not only improves user experience, it also increases the chances that they can be accessed today on whatever kind of network is available. Speed is not the only important aspect of broadband: for certain health and education applications that require real-time communications, minimising latency, jitter and packet loss matters more than raw speed.
An alternative approach to mandating high speeds at low cost is to require that essential services be developed to run at low speeds. Another pro-consumer policy would be to move away from defining broadband in terms of speed (Mbps) and instead offer categories of service depending on application – e.g. a basic services package for health, education, government and employment applications versus a streaming video package. This would make it easier to enforce remedies ensuring that providers fulfil their obligations for a particular package, rather than attempting to deliver everything at a given speed.
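To make the package idea concrete, here is a minimal sketch – with entirely hypothetical package names and thresholds – of how service tiers defined by application requirements, rather than a single headline speed, might be tested against a measured connection:

```python
# Illustrative sketch only: hypothetical packages and thresholds showing how
# broadband service could be defined by application requirements rather than
# by a single headline speed in Mbps.

from dataclasses import dataclass

@dataclass
class LinkQuality:
    throughput_mbps: float
    latency_ms: float
    jitter_ms: float
    packet_loss_pct: float

# Hypothetical package definitions: the basic services package cares more
# about responsiveness (latency, jitter, loss) than raw speed; the streaming
# video package is the reverse.
PACKAGES = {
    "basic_services":  {"throughput_mbps": 2,  "latency_ms": 150,
                        "jitter_ms": 30,  "packet_loss_pct": 1.0},
    "streaming_video": {"throughput_mbps": 25, "latency_ms": 400,
                        "jitter_ms": 100, "packet_loss_pct": 2.0},
}

def supported_packages(link: LinkQuality) -> list[str]:
    """Return every package whose requirements this link satisfies."""
    return [
        name for name, req in PACKAGES.items()
        if link.throughput_mbps >= req["throughput_mbps"]
        and link.latency_ms <= req["latency_ms"]
        and link.jitter_ms <= req["jitter_ms"]
        and link.packet_loss_pct <= req["packet_loss_pct"]
    ]

# A modest rural wireless link: too slow for streaming video, yet fully
# adequate for the socially essential basic services package.
print(supported_packages(LinkQuality(5, 80, 10, 0.5)))  # ['basic_services']
```

The point of the sketch is that a modest link can fail a speed-only test while comfortably supporting everything in a socially essential package.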
The question remains whether high-speed video should be part of the basic set of essential services when its primary goal is to enable entertainment. Rich media is not necessarily driven by consumer demand, but rather by the bandwidth and technology that make it available. Furthermore, rich multimedia is not accessible to deaf and blind users, so a key group is already marginalised by the insistence that video is an essential service. In our race to leverage the latest and greatest technologies for various (legitimately important) services, we too often forget that not everyone can avail themselves of those technologies.
Certain users place a high value on streaming video, but its social value compared with other applications, whether emergency communications, government, education, health or ecommerce, may be smaller. So we must address the trade-off between resource-intensive networks serving high private value services versus modest networks that support socially valuable services.
“The question is whether high-speed video should be part of the basic set of essential services.”
WIRELINE VS WIRELESS
Another assumption is that wireless technologies can’t compete with wireline. While wireless may have certain limitations in the short term, its portability makes it the preferred broadband connection for an increasing number of people. In the mid to long term, as wireless moves into millimetre-wave bands accessing many gigahertz of capacity, it may well supplant cable in terms of throughput.
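A rough Shannon-capacity calculation (C = B·log2(1 + SNR)) illustrates why access to gigahertz of spectrum changes the throughput picture. The channel widths and signal-to-noise ratios below are assumptions chosen only to show the scaling, not measurements of any real network:

```python
# Back-of-envelope Shannon capacity, C = B * log2(1 + SNR). The channel
# widths and SNRs are illustrative assumptions, not real measurements.

import math

def capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    snr = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    return bandwidth_hz * math.log2(1 + snr) / 1e9

# A 20 MHz LTE-style channel vs a 2 GHz millimetre-wave allocation, with
# the mmWave link assumed to have a considerably worse SNR.
print(f"20 MHz @ 20 dB: {capacity_gbps(20e6, 20):.2f} Gbps")   # ~0.13 Gbps
print(f"2 GHz  @ 10 dB: {capacity_gbps(2e9, 10):.2f} Gbps")    # ~6.92 Gbps
```

Even with a markedly worse signal-to-noise ratio, the far wider channel delivers roughly fifty times the capacity in this toy comparison.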
The value of wireless is underscored by the fact that many cable providers are exploring 4G/LTE over unlicensed spectrum. In any case, it’s important to recognise that different users may value the technologies differently, and it is by no means a foregone conclusion that a basic set of services can only be realised on one kind of technology.
The next premise is that innovation requires an open or neutral internet. In current telecoms debates the idea that openness and neutrality are prerequisites for innovation borders on religious dogma, but this assumption too is not necessarily true. Indeed, openness and neutrality are not unambiguously good or bad. Openness may facilitate some kinds of innovation but inhibit others. There are a variety of open and closed business models from which consumers benefit.
It is unquestionably the case that open access can facilitate certain types of innovation. It reduces R&D and other transaction costs (especially the search and negotiation costs of getting permission or access to use existing infrastructure) and reduces opportunities for rent extraction by those who otherwise control an infrastructure. On the other hand, it makes some forms of innovation more expensive or difficult to implement. And yet consumers love one closed innovation platform above all: Apple.
Apple’s hardware and software designs are part of a tightly controlled, vertically integrated, closed product ecosystem. Apple would not exist if we had the equivalent of network neutrality for computer hardware or software. This does not mean that either an open or a closed model is necessarily better in any given case; it does mean that we want a more nuanced approach than one that mandates either approach in every situation.
There is almost no empirical evidence about openness and net neutrality with regard to internet innovation. The literature of net neutrality comprises some 7,000 articles and is almost entirely theoretical. Even the top ten most cited articles, each with a few hundred citations, conflict dramatically about whether net neutrality is even needed, or suggest ambiguous outcomes for the policy. The policy arguments for net neutrality and internet openness typically rely on assertions of the ‘end-to-end principle’ and more recently the ‘virtuous circle of innovation’, two notions which are surprisingly under-theorised in the academic literature given their popularity in the media and net neutrality debates.
There is no doubt that the theory for preserving innovation proposed by Mark Lemley and Lawrence Lessig in 2000 is a potent one. They called it the ‘end-to-end principle’, appropriating the term from a 1984 paper by engineers Saltzer, Reed and Clark. The original proposition is:
“The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with the cost of providing them at that low level.”
In a speech at the FCC’s Open Internet Access Committee in 2010, the original end-to-end principle co-author David Clark noted that his paper was not about ‘openness’ – in fact, the word ‘open’ did not even appear in it. Instead the paper was about ‘correctness’ and where it is appropriate to place functionality in the network, depending on the benefits to be delivered. As such, it could be interpreted as permitting prioritisation within the core of the network, rather than at the ends, when the benefits warrant it.
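Clark’s ‘correctness’ framing is easiest to see in the original paper’s example of careful file transfer. The sketch below, with an invented per-hop corruption model standing in for any imperfect link, shows why reliability added inside the network can reduce errors but cannot certify the transfer; only a check performed at the ends can:

```python
# A sketch of 'careful file transfer' from the end-to-end argument. The
# corruption model is invented for illustration: each hop occasionally
# flips a byte, standing in for any imperfect link or router.

import hashlib
import random

def lossy_hop(data: bytes, corruption_rate: float = 0.01) -> bytes:
    """Simulate one network hop that occasionally corrupts a byte."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < corruption_rate:
            out[i] ^= 0xFF
    return bytes(out)

def transfer(data: bytes, hops: int = 3) -> bytes:
    for _ in range(hops):
        data = lossy_hop(data)
    return data

# The end-to-end check: the sender publishes a digest and the receiving
# application verifies it. Detecting failure and retransmitting lives at
# the ends, however reliable (or not) each individual hop may be.
payload = b"essential-services data " * 100
digest = hashlib.sha256(payload).hexdigest()

received = transfer(payload)
if hashlib.sha256(received).hexdigest() == digest:
    print("transfer verified correct end-to-end")
else:
    print("corruption detected at the endpoint: retransmit")
```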
Lemley and Lessig suggest that the end-to-end principle explains the virtues of internet architecture and its openness: the ‘ends’ of the network, where users and applications reside, should be ‘intelligent’, while the protocols and pipes should be as simple and general as possible. They also decried the injustice that telephone and cable companies were regulated differently – telephone companies were required to unbundle their networks while cable companies were not – and predicted that unless similar restrictions were placed on cable, prices and innovation would be harmed and the end-to-end principle that had ‘governed the internet since inception’ would be compromised.
It may be difficult to tell whether internet innovation has been compromised because cable was not unbundled in the US. Indeed, a number of application innovations have emerged since 2000 – including Skype, Facebook, WhatsApp and the streaming version of Netflix – all without net neutrality rules in place. The difficulty with crediting the end-to-end principle for internet innovation is that any network is, by definition, an end-to-end system.
The authors observe that there are other important features of the network’s design beyond the end-to-end principle: “As we have said, no one fully understands the dynamics that have made the innovation of the internet possible.” As such, it may be worth revisiting proposed policies to see whether they have oversimplified Lemley and Lessig’s notion, or at least to allow for a more robust understanding of the internet and innovation than simply the end-to-end principle.
“Apple would not exist if we had the equivalent of network neutrality for computers.”
VIRTUOUS CIRCLE OF INNOVATION
Another assumption is the ‘virtuous circle’ of internet innovation, a theory the FCC proposed in its Open Internet Report & Order of 2010. The virtuous circle insists on a logical progression of events: content and applications emerge from a state of net neutrality; they stimulate demand for internet subscriptions; demand generates revenue for operators, which they then invest in infrastructure. However, in the same proceeding a group of internet engineers suggested that “both openness and investment generate innovation”. The FCC’s theory insists on a clockwise progression, but the engineers say the circle can also run counterclockwise. The two views have very different implications for policy.
The end-to-end principle and the virtuous circle are frequently dramatised by the folklore of the hacker in the garage or dorm room who becomes a billionaire. However compelling this image, it is a romanticised view of internet innovation that is the exception, not the rule. The internet we know today would not be possible without fundamental innovations in computers, chips, servers and storage – all of which required massive investment, market experimentation, government grants and, more often than not, closed laboratories and environments. This is not to say that innovation in networks is more important than innovation in applications, but policy need not make false choices that favour one type over another.
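The difference between the two readings can be made concrete with a toy model. The coefficients below are invented and carry no empirical weight; the sketch simply shows that if applications and infrastructure reinforce each other, the circle can be entered from either side:

```python
# A toy rendering of the virtuous circle. The 0.5 coefficients are invented
# and carry no empirical weight; the model only shows that when applications
# and infrastructure reinforce each other, growth follows whichever side the
# loop is entered from.

def simulate(apps: float, infra: float, steps: int = 4) -> None:
    for t in range(1, steps + 1):
        # Infrastructure enables new applications; application demand funds
        # new infrastructure investment (simultaneous update).
        apps, infra = apps + 0.5 * infra, infra + 0.5 * apps
        print(f"  t={t}: applications={apps:.2f}, infrastructure={infra:.2f}")

print("application-led entry (the FCC's clockwise story):")
simulate(apps=1.0, infra=0.0)
print("infrastructure-led entry (the engineers' counterpoint):")
simulate(apps=0.0, infra=1.0)
```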
There are at least one million academic articles about using the internet itself (or internet-enabled platforms) as a form of innovation for industry and society, but precious few suggest that the internet must be one way or the other, whether open or closed. Moreover, ‘innovation’ is a broad and popular term, the subject of millions of articles in the academic press and even more in the mainstream press. On the technical side, most historical perspectives on internet architecture make clear that, while it has long had an ‘open’ character, this character is at least in part accidental and does not equate with ‘neutrality’.
A review of the innovation literature suggests a number of theories to explain innovation, for example the joining of complementary assets through partnerships and the need to look ‘outside the box’ for new ideas. Ironically, proposed open internet policies may prohibit the very things the literature suggests promote innovation, namely partnerships such as zero rating. The key takeaways from the literature are nuanced – different price structures ‘can’ or ‘may’ benefit or harm consumers. In some cases, ‘non-neutral’ price structures may benefit consumers; in others they may harm them.
But this does not mean that we should prescribe ex-ante prophylactic pricing rules on every activity and business model – rather, we should monitor conduct and pricing in the internet ecosystem and be ready to bring ex-post actions against pricing decisions that are demonstrably harmful to consumers. Given the nuance and ambiguity of the literature, it is all the more important that impact assessments be undertaken as part of policymaking.
Another assumption deployed in telecoms debates is that things are better somewhere else. It is often expressed as: “We are falling behind in ____” (fill in the blank). The assumption reduces a region or country to a single measure, but it leaves open the questions: better for what, for whom, and to what end? These pronouncements are frequently and opportunistically justified with rankings produced by a variety of marketing organisations. While rankings can be helpful, they do not in themselves constitute appropriate and sufficient evidence for decision making.
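The weakness of rankings as evidence is easy to demonstrate. The sketch below uses invented figures for three hypothetical countries; changing nothing but the index weights changes which country ‘wins’:

```python
# Invented figures for three hypothetical countries, scored on three common
# broadband metrics. A higher price_index here means greater affordability.

COUNTRIES = {
    #             (speed_mbps, price_index, adoption_pct)
    "Country A": (120, 40, 70),
    "Country B": (60,  90, 85),
    "Country C": (90,  70, 95),
}

def rank(weights: tuple[float, float, float]) -> list[str]:
    """Order countries by a weighted sum of their metrics."""
    score = lambda metrics: sum(w * v for w, v in zip(weights, metrics))
    return sorted(COUNTRIES, key=lambda c: score(COUNTRIES[c]), reverse=True)

print(rank((1, 0, 0)))  # weight speed only    -> Country A is 'best'
print(rank((0, 1, 0)))  # weight price only    -> Country B is 'best'
print(rank((0, 0, 1)))  # weight adoption only -> Country C is 'best'
```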
The ‘falling behind’ assumption is a common refrain for the policy crise du jour where emotion and fear overrule analysis and rigour. The myopic focus on broadband as simply the sum of discrete measures such as speed or price misses important nuances about how broadband creates economic and social value. Simply put, broadband is not an end in itself but an enabler. There is no value in being the ‘best’ in any broadband metric if it does not deliver social benefit.
We need a more comprehensive, holistic view of broadband that encompasses not just networks and their characteristics, but adoption, applications, digital readiness, market development, and so on. Indeed the OECD Council’s principles for internet policy embrace a range of outcomes, and not one metric of speed or network type.
“We should resist temptation to make binary interpretations of the world.”
Broadband rankings can be created to ‘prove’ that almost any country is the best or the worst. They are a tool of political grandstanding that releases political leaders from responsibility. It is relatively easy to improve the numbers on discrete, isolated measures; the greater challenge and responsibility is to ensure that broadband is put to productive use in society, which is far harder to achieve.
This notion underlies a shift away from the regulatory state toward the developmental state. Leaders realise that expert communications regulation professionals may be better deployed across a range of agencies than siloed in the telecoms authority. Indeed, such professionals add limited value by micromanaging networks compared with sharing their expertise to help health, education, transport and other sectors take advantage of broadband technologies. When there is broadband in everything, there is little need for a specialised agency for it.
Such conclusions underpinned, in part, the decision to dismantle the Danish telecoms regulator in 2011 – a near-overnight and largely unchallenged move by the new centre-left government. Telecoms regulations remain in force, but the episode demonstrates that there are a variety of ways to achieve policy objectives, and telecoms regulatory authorities need not be the delivery mechanism.
There is no doubt that conducting regulatory impact assessments increases decision-making time and can make the case for any one particular outcome less clear-cut. However, it guards against emotional argumentation and the manipulative insistence that certain actions must be taken regardless of cost.
Good telecoms policy is rarely simple. As such we should resist the temptation to make binary interpretations of the world where more nuanced views can ultimately deliver better social outcomes.