
Preferring humans to AI

There will be many jobs AI can do better than humans. We might want humans to carry on doing them anyway.

Ask a lawyer about their biggest frustrations and, as long as the conversation’s private, one of them will be the inconsistency of judges; or, more specifically, the inconsistency between judges. Their individual rulings, perhaps driven by unconscious or other kinds of bias, may be sufficiently predictable for a law firm to decide its strategy – including whether to drop the case entirely – the moment a presiding judge is appointed. So predictable, in fact, that technology companies are already on to it. Algorithms now exist that can review witness statements for “sentiment” (principally language patterns and the use of key words) and match these against the judge’s previous rulings to produce a predicted outcome and, by extension, a decision on whether the case should be taken to court, settled, or dropped. This is good business for the law firm, which can provide the analysis to its client. And clients, in general, prefer analysis to opinion.
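To make the mechanism concrete, here is a minimal sketch, in Python with scikit-learn, of how such a prediction tool might work. Everything in it is an illustrative assumption rather than a description of any real product: the toy statements, the labels, and the use of TF-IDF word patterns as a crude stand-in for “sentiment”.

```python
# Hypothetical sketch: predict a judge's likely ruling from the language of
# witness statements, using that judge's past cases as training data.
# All data and model choices here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: witness-statement text from past cases heard by one
# judge, labelled with how that judge ruled (1 = claimant won, 0 = lost).
past_statements = [
    "The defendant clearly ignored repeated written warnings.",
    "Both parties acted in good faith throughout the negotiation.",
    "The claimant failed to disclose the relevant documents on time.",
    "Witnesses consistently described the driver as reckless and aggressive.",
]
past_rulings = [1, 0, 0, 1]

# "Sentiment" is approximated here as word and phrase patterns
# (TF-IDF over unigrams and bigrams) fed to a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(),
)
model.fit(past_statements, past_rulings)

# Predicted probability that this judge rules for the claimant in a new case;
# a firm might apply a threshold like this to decide whether to settle.
new_statement = ["The defendant ignored every warning and acted recklessly."]
p_win = model.predict_proba(new_statement)[0][1]
print(f"Estimated chance of winning before this judge: {p_win:.0%}")
print("Recommendation:", "proceed" if p_win > 0.6 else "consider settling")
```

A real system would rest on far richer case features and much more data, but the basic shape is the same: the judge’s past behaviour becomes the training signal.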

All of which feels just a little depressing. Instead of using AI (or in this case its younger sibling, machine learning) to predict bias in the judicial system, what if it were deployed to improve it? An equivalent analysis could be offered to judges themselves, reviewing comparable cases and rulings by other judges to provide a judgment “recommendation”. In this way greater consistency could be assured. And in any case, in areas such as sentencing, aren’t guidelines already so confined that the opinions of judges are largely removed from the equation?
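As a thought experiment, such a recommendation aid might look something like the sketch below: retrieve the most comparable past cases across many judges and show how they were decided. The corpus, the case summaries, and the choice of TF-IDF cosine similarity as the matching measure are all assumptions made for illustration, not a real system.

```python
# Hypothetical sketch: instead of predicting one judge's bias, surface how
# comparable cases were decided across many judges, as a consistency aid.
# The corpus, similarity measure, and outputs are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of past cases from many judges: (case summary, ruling).
corpus = [
    ("first-time offender, guilty plea, minor theft", "community order"),
    ("repeat offender, theft, breach of previous order", "custodial sentence"),
    ("first-time offender, guilty plea, shoplifting", "community order"),
    ("assault causing injury, no previous convictions", "suspended sentence"),
]
summaries = [summary for summary, _ in corpus]

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(summaries)

# For a new case, rank past cases by similarity and report their outcomes,
# so a judge can see how comparable cases were decided elsewhere.
new_case = "first-time offender pleading guilty to minor shop theft"
scores = cosine_similarity(vectoriser.transform([new_case]), matrix)[0]
ranked = sorted(zip(scores, corpus), key=lambda pair: pair[0], reverse=True)

print("Most comparable past cases:")
for score, (summary, ruling) in ranked[:3]:
    print(f"  {score:.2f}  {summary}  ->  {ruling}")
```

Note that this version recommends nothing by itself; it simply puts the distribution of comparable outcomes in front of the human making the decision, which is precisely where the argument below begins.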

This is causing major palpitations in the legal profession. How could such a system be accountable? Should there be a right to sue an algorithm? And if all judges were to follow algorithmic guidance, would this not itself create bias over time, as judgments “revert to the mean”?

The short answer is that AI can be used to inform, help, and guide humans, and perhaps protect them, but it cannot be allowed to make decisions over them, even if those decisions are likely to be better or fairer. AI will certainly have a major role to play in the legal system of the future, but the judgment of one fallible human being over another is one that shouldn’t be handed to robots.

Russell Seekins
Partner, Re:Strategy

Tuesday, 28 August 2018
