Guilty or not guilty – can the computer decide?

 

Algorithms and artificial intelligence (AI) have become ubiquitous: they track our movements through the GPS in our smartphones; in healthcare they help to diagnose and treat disease; they shape our finances by determining our credit ratings; and they have beaten mankind’s greatest chess and Go players.

Until relatively recently, the justice system was one of the few areas of public life that seemed impervious to the advance of technology: witness the barristers’ clerks wheeling trolleys laden with legal documents to court, or the lack of Skype facilities in courtrooms.

This is all changing, however, and even the justice system is slowly integrating AI and algorithms into its practices.

What is an algorithm? And is the introduction of algorithms into the justice system the end of the world as we know it?

An algorithm, in the simplest terms, is a set of rules and criteria for solving a problem: a sequence of instructions telling a computer what to do. The best-known example of algorithms in the justice system is probably predictive policing. Anyone who has watched The Wire or Minority Report will know what predictive policing entails – anticipating where crime will take place, enabling the police to stop it before, or as, it happens. It depends on sophisticated computer analysis of historical crime data, examining the time and location of past offences to predict where and when future crimes will occur.
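The core idea can be sketched very simply. The short Python example below counts invented past incidents by grid cell and hour of day and flags the most frequent combinations as likely future hotspots; real systems draw on far richer data and more elaborate statistical models, but the underlying principle is the same.

```python
from collections import Counter

# Invented historical crime records: (grid_cell, hour_of_day) pairs.
# A real system would draw on years of recorded-crime data.
past_incidents = [
    ("cell_17", 22), ("cell_17", 23), ("cell_17", 22),
    ("cell_04", 14), ("cell_09", 2), ("cell_17", 21),
]

def predict_hotspots(incidents, top_n=3):
    """Rank (cell, hour) combinations by how often crime has occurred there before.

    The assumption, as described above, is that future offending follows the
    time-and-place pattern of past offences.
    """
    counts = Counter(incidents)
    return [cell_hour for cell_hour, _ in counts.most_common(top_n)]

print(predict_hotspots(past_incidents))
# -> [('cell_17', 22), ('cell_17', 23), ('cell_04', 14)]
```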

This is not vastly different from the way a police officer anticipates crime, deducing, for instance, that burglars tend to target businesses at night, when they are unoccupied, and homes during the day, when residents are out at work. This kind of analysis enables the police to focus resources on the areas most likely to suffer from crime.

Police forces are currently the chief users of algorithms in the justice system in England and Wales.

Kent Police has been using predictive policing software for several years. A Google map on a computer screen shows little red dots where the algorithm predicts crime is likely to occur, based on analysis of previous crime patterns. This works especially well for recurring, low-level crime. Kent Police estimates that low-level street crime, such as common assault and anti-social behaviour, has dropped by 7% since the system was rolled out in 2013.

Durham Police uses an algorithm called HART (Harm Assessment Risk Tool), which helps custody officers decide whether a suspect should be kept in custody, released, or directed to a rehabilitation programme called Checkpoint. HART uses data from 34 different categories – covering a person’s age, gender and offending history – to rate people as a low, moderate or high risk.
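HART itself is reported to be a machine-learning model trained on custody data; its categories and internal weightings are not public. Purely to illustrate the general shape of such a tool, here is a hypothetical Python sketch in which invented features, weights and thresholds produce a low/moderate/high banding; it is not the real system.

```python
# Illustrative only: HART is reported to be a machine-learning model trained on
# 34 categories of custody data. The features, weights and thresholds below are
# invented to show the general shape of a risk-banding tool, not the real system.

def risk_band(person: dict) -> str:
    score = 0
    score += 2 * person.get("previous_offences", 0)        # offending history
    score += 1 if person.get("age", 99) < 25 else 0        # age
    score += 1 if person.get("prior_custody", False) else 0
    if score >= 5:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

print(risk_band({"previous_offences": 3, "age": 22}))  # -> "high"
```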

The London Metropolitan Police (the Met) and South Wales Police both use facial recognition technology (FRT). The software scans faces in real time and matches them against an existing database of faces. It has been used in city centres and at political demonstrations, sporting events and festivals, with worrying results.
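At its core, live facial recognition reduces each face to a numerical “embedding” and looks for the closest entry on a watchlist, declaring a match if the similarity clears a threshold. The Python sketch below uses invented vectors and a simple cosine-similarity comparison purely to illustrate the mechanism, and why a poorly chosen threshold produces false matches; deployed systems use deep neural networks and proprietary matching logic.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented "face embeddings": deployed systems derive these from camera images
# with deep neural networks; here they are toy vectors for illustration.
watchlist = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.2, 0.8, 0.5],
}

def match_face(probe_embedding, threshold=0.95):
    """Return the closest watchlist entry if its similarity clears the threshold.

    Set the threshold too low and the system generates many false matches --
    the kind of inaccuracy described in the figures that follow.
    """
    best_name, best_score = max(
        ((name, cosine_similarity(probe_embedding, emb)) for name, emb in watchlist.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else None

print(match_face([0.88, 0.12, 0.31]))  # -> "person_A"
```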

According to Big Brother Watch, the Met’s facial recognition matches are 98% inaccurate, while South Wales Police’s matches are 91% inaccurate. There are also serious problems with the fact that these police forces retain huge databases of faces, including those of innocent individuals incorrectly matched by facial recognition.

In a number of US states, algorithms are used to help judges make sentencing decisions. The Level of Service Inventory – Revised (LSI-R) is a widely used commercial risk assessment tool that has been adapted to assess a person’s risk of recidivism and the most appropriate sentencing options. An algorithm processes data from ten key areas: criminal history; leisure/recreation; companions; family/marital; accommodation; education/employment; financial; alcohol/drug problems; emotional/personal; and attitudes/orientation.
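Again purely by way of illustration, a tool of this kind can be pictured as a weighted sum of scores across those ten areas, mapped to a risk level. The domain names in the sketch below follow the list above, but every score, weight and cut-off is invented, since the real LSI-R scoring is proprietary.

```python
# Hypothetical illustration of an LSI-R-style assessment: the domains follow the
# ten areas listed above, but all scores, weights and cut-offs are invented.
DOMAIN_WEIGHTS = {
    "criminal_history": 3, "leisure_recreation": 1, "companions": 2,
    "family_marital": 1, "accommodation": 1, "education_employment": 2,
    "financial": 1, "alcohol_drug_problems": 2, "emotional_personal": 1,
    "attitudes_orientation": 2,
}

def recidivism_risk(domain_scores: dict) -> str:
    """Combine per-domain scores (0-3 each, hypothetical) into a risk level."""
    total = sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items())
    if total >= 20:
        return "high"
    if total >= 10:
        return "medium"
    return "low"

print(recidivism_risk({"criminal_history": 3, "companions": 2, "financial": 1}))
# -> "medium" (3*3 + 2*2 + 1*1 = 14)
```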

The use of algorithms in the justice system may be attractive from an efficiency perspective, but there are ethical and human rights concerns that merit close examination.

Predictive policing, for example, relies on existing police data. If that data contains biases, as is virtually unavoidable, they will be fed into the algorithm, reinforcing those biases while giving them a veneer of objectivity. To take a concrete example: if people from ethnic minorities are disproportionately stopped and searched under discretionary powers, their profiles will feature more prominently in police data, and an algorithm will therefore identify them as more likely to commit crime. An algorithm may be just a set of instructions, but it would be a mistake to assume it therefore operates without bias or prejudice.

One of the criteria built into the Durham custody algorithm by its designers, when rating whether a suspect was at low, moderate or high risk of reoffending, was the postcode of their home address. This is problematic: including location and socio-demographic data can reinforce existing biases in policing decisions and the judicial system. Further, if police respond to forecasts of high risk in particular postcode areas, the additional law enforcement activity may amplify existing patterns of reported offending. In other words, the more resources you focus on a particular area, the more crimes you will detect there (see the toy model below).
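That feedback loop can be made concrete with a toy model. In the hypothetical Python sketch below, two areas have exactly the same underlying level of offending, but the historical records start slightly skewed; because patrols follow the records and detections follow the patrols, the gap in recorded crime keeps widening. All of the numbers are invented.

```python
# Toy model: two areas with IDENTICAL underlying offending. Patrols are allocated
# in proportion to previously recorded crime, and only patrolled offences get
# recorded -- so an initial skew in the data perpetuates and widens itself.
true_offences_per_round = 10                    # the same in both areas
recorded = {"area_X": 6.0, "area_Y": 4.0}       # historical records start slightly skewed

for _ in range(10):
    total = recorded["area_X"] + recorded["area_Y"]
    for area in recorded:
        patrol_share = recorded[area] / total   # forecast drives patrol allocation
        recorded[area] += true_offences_per_round * patrol_share  # detections follow patrols

print(recorded)
# -> {'area_X': 66.0, 'area_Y': 44.0}: a persistent 60/40 split in recorded crime,
#    and a widening absolute gap, despite identical offending in both areas.
```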

Both the Met and South Wales Police face legal challenges to their use of facial recognition technology in public spaces on the grounds that this violates rights to privacy, free expression and protest. Civil rights group Liberty asserts that it is not “authorised by any law, guided by any official policy or scrutinised by any independent body.”

Another concern arises from the involvement of private, commercial companies, which in most cases build the algorithm and, for commercial reasons, do not allow the “black box” at the heart of the mechanism to be opened. This impenetrability means the decision-making process cannot be thoroughly reviewed or investigated. For example, the LSI-R was developed by the Canadian company Multi-Health Systems. The proprietary nature of the tool means that detailed information on how it works is not publicly available. This raises issues of transparency and the right to due process.

I chair the Law Society’s Technology and the Law Policy Commission which is examining ethical, rule of law, and human rights issues relating to the use of algorithms in the justice system. The commissioners are taking evidence from a wide range of stakeholders to inform recommendations. We are interrogating the accuracy, validity and reliability of the data used to train machines; the design, review and sale of software; the lack of remedies available to those subject to algorithmic decisions; and the viability and shape of any future regulation of the use of algorithms in the justice system.

Christina Blacklaws, Law Society President.

For more information, to submit evidence, or to attend a session of the Law Society’s Technology and the Law Policy Commission, please go to: www.lawsociety.org.uk/tlc

 
