EU sets out artificial intelligence plan amid concerns
The European Commission (EC) has published a plan, prepared with member states, to foster the development and use of artificial intelligence (AI) in Europe. It focuses on four areas: increasing investment, making more data available, fostering talent and ensuring trust. The EC recognises that investment levels for AI in the EU are low and fragmented compared with other parts of the world, such as the US and China, and has set out various European funding models and a network of European AI excellence centres. It also proposes to create cross-border “data spaces” for AI and to support advanced degrees in AI through scholarships. Meanwhile, a group of experts representing academia, business and civil society is working on ethics guidelines for the development and use of AI. A first version of the guidelines will be published soon, and the experts will present their final version to the EC in March 2019 after consultation through the European AI Alliance. The ambition is then to bring Europe’s ethical approach to the global stage, and cooperation is invited with all non-EU countries “that are willing to share the same values”.
Meanwhile, the third annual AI report from the AI Now Institute at New York University notes a year of “cascading AI scandals” that have raised questions of accountability, saying “existing regulatory frameworks fall well short of what’s needed”. Among the report’s recommendations: governments need to regulate AI by expanding the powers of sector-specific agencies to audit and monitor these technologies by domain; facial recognition and “affect recognition” (a subclass of facial recognition technology) need stringent regulation to protect the public interest; and the AI industry urgently needs new approaches to governance, with AI companies waiving trade secrecy and other legal claims that stand in the way of accountability in the public sector. It also recommends that technology companies provide protections for conscientious objectors, employee unions and ethical whistleblowers, and that consumer protection agencies apply “truth-in-advertising” laws to AI products and services.
The European plan and the AI Now report are available online. Also issued is the Montreal Declaration for Responsible Development of AI (in French).
- Tuesday, 18 December 2018