The EU’s principles will set the highest standards for AI in the world, but is the balance right?
Draft AI guidelines were published last month by the “High-Level Expert Group on Artificial Intelligence”, a group of 51 academics, scientists, policymakers and industry experts appointed by the European Commission. The guidelines introduce a new term into the AI debate. Combining the issue of ethics – how we ensure that AI is used for good – with that of AI safety – how we ensure that it does no harm, accidentally or otherwise – the group establishes the concept of “Trustworthy AI”, a field in which it argues the EU should aim to lead the world.

It is in the reference to trust, and in the “human-centric” approach, that the principles bear some comparison with this year’s GDPR legislation. But where GDPR essentially legislated for existing, known uses and misuses of data, and for established global companies, the HLEG faced the altogether trickier problem of considering applications and uses of AI that are still largely in development or as yet unknown.

And it is here that the group acknowledges disagreements. The guidelines suggest, for example, that humans must always be informed when they are dealing with an AI entity. This seems reasonable, but will it always be practical? As chatbots become increasingly intuitive and heuristic, will they need cautionary warnings? And what about blended applications, in which human and AI responses are combined – for example, a single controller responsible for the responses of a number of different bots?
In the continuing debate about whether regulation should lead or follow technology developments, the HLEG places the EU firmly in the former camp. Will this provide the basis for “responsible competitiveness”, as the group hopes, or will the European AI industry be put at a disadvantage to its American and Asian competitors? Will clear regulation establish the trust needed to encourage innovation, or will the investment dollars find their way to less constrained markets? The guidelines are out for consultation, with final recommendations due in March 2019. While there is much debate still to be had, it seems unlikely that everybody will end up happy.
The issue of trust was discussed at length at the Washington DC Telecommunications & Media Forum, and the different facets of the AI debate will be a central part of the IIC programme in 2019.