"Build a trustworthy framework, and they will come."
The EU’s principles will set the highest standards for AI in the world, but is the balance right?
Draft AI guidelines were published last month by the High-Level Expert Group on Artificial Intelligence (HLEG), a group of 51 academics, scientists, policymakers, and industry experts appointed by the European Commission. The guidelines introduce a new term into the AI debate. Combining the issue of ethics – how we ensure that AI is used for good – with AI safety – how we ensure that it doesn’t do harm, accidentally or otherwise – the group establishes the concept of “Trustworthy AI”, in which the EU should aim to lead the world.

It is in the reference to trust, and the “human-centric” approach, that the principles bear some comparison to 2018’s GDPR legislation. But where GDPR was essentially legislating for existing, known uses and misuses of data, and for established global companies, the HLEG faced the altogether trickier problem of considering applications and uses of AI that are still largely in development or unknown. And it is here that the group acknowledges disagreements.

The guidelines suggest, for example, that humans must always be informed when they are dealing with an AI entity. This seems reasonable, but is it always going to be practical? As chatbots become increasingly intuitive and heuristic, are they going to need cautionary warnings? And what about blended applications, where human and AI responses are combined – with, for example, a controller responsible for the responses of a number of different bots?
In the continuing debate about whether regulation should lead or follow technology developments, the HLEG is placing the EU firmly in the former camp. Will this provide the basis for “responsible competitiveness”, as the group hopes, or will the European AI industry be put at a disadvantage to its American and Asian competitors? Will clear regulation establish the trust needed to encourage innovation, or will the investment dollars find their way to less constrained markets? The guidelines are out for consultation, with final recommendations due in March 2019. While there is much debate still to be had, it seems unlikely that everybody will end up happy.
The issue of trust was discussed at length at the Washington DC Telecommunications & Media Forum, and the different facets of the AI debate will be a central part of the IIC programme in 2019.
Andrea Millwood Hargrave,
Director General, International Institute of Communications
- Thursday, 03 January 2019