‘Bias bounty’ suggested to combat AI discrimination: researchers draw on ideas used in software development

‘Bug bounties’ are a commonly used tool for spotting errors in software. A report from a group of prominent AI researchers has proposed a similar approach as part of a ‘robust toolbox of mechanisms’ for verifying claims about AI, the Financial Times reports. ‘Bias bounty hunters’ could include researchers, members of the public and journalists who find apparent bias when using AI-driven systems. The report aims to move on from ‘abstract ethical concerns’ towards actionable solutions, the newspaper says. Institutions involved in the research include OpenAI, Google, the Alan Turing Institute and Cambridge University. Read more (£)