Policing by machine

Police forces across the UK are using computer programs to predict where and when crime will happen – and even who will commit it. These dangerous practices entrench biased approaches to policing and threaten our human rights.  

At a time of swingeing cuts to public services and rising concern around serious youth violence, technology and data are being heralded as the panacea – from the discriminatory Gangs Matrix through to the roundly criticised Prevent programme. The Government is seeking to harness vast troves of data to categorise people, make predictions about their behaviour and ultimately exert control.

At the centre of these approaches is the notion of “pre-criminality” – that some people or communities are predisposed to offending behaviour because they display “risky” characteristics, justifying increased surveillance. This offensive idea lies at the heart of the new predictive policing programs being used by police forces across the UK.

WHAT IS PREDICTIVE POLICING?

Predictive policing computer programs use algorithms to analyse masses of police data, identifying patterns in order to make predictions about crime. These programs can often “learn” over time, becoming more autonomous without being explicitly programmed to do so.

In 2018, Liberty sent Freedom of Information requests to every police force in the UK to build up a picture of how predictive policing was being used. Liberty’s ‘Policing by Machine’ report collates the results and outlines the risks that the use of these programs presents to our rights.

The report focuses on two key types of predictive policing program: predictive mapping programs and individual risk assessment programs.

Predictive mapping programs

Predictive mapping programs use historical police data to identify “hot spots” of high crime risk on a map. Police officers are then directed to patrol these areas, which are often already subject to disproportionate over-policing.
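
To make this concrete, here is a minimal, purely illustrative sketch – not the workings of any particular force’s product – of the basic hot-spot logic: historical incident records are binned into grid cells, and the cells with the most past records are flagged for patrol. All coordinates, cell sizes and figures below are hypothetical.

```python
from collections import Counter

# Hypothetical historical records: (easting, northing) of past recorded incidents.
# A real system would draw these from force crime or arrest databases.
incidents = [
    (523100, 179400), (523150, 179420), (523900, 180100),
    (523120, 179480), (525000, 181200), (523160, 179410),
]

CELL_SIZE = 250  # metres per grid cell – an arbitrary choice for this sketch

def to_cell(easting, northing, size=CELL_SIZE):
    """Snap a coordinate to the grid cell that contains it."""
    return (easting // size, northing // size)

# Count past recorded incidents per grid cell.
counts = Counter(to_cell(e, n) for e, n in incidents)

# Flag the most frequently recorded cells as "hot spots" to direct patrols to.
hot_spots = [cell for cell, _ in counts.most_common(2)]
print(hot_spots)  # the cells with the most historical records, not necessarily the most crime
```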

The following police forces have used or are planning to use predictive mapping programs:

Avon and Somerset, Cheshire, Dyfed Powys, Greater Manchester, Kent, Lancashire, Merseyside, Metropolitan Police, Northamptonshire, Warwickshire & West Mercia, West Midlands and West Yorkshire.

Individual risk assessment programs

Individual risk assessment programs use data, including personal characteristics, to predict how an individual person will behave, including whether they are likely to commit – or even be victims of – certain crimes. This encourages an approach to policing based on discriminatory profiling.

Individual risk assessment programs are being used by: Avon and Somerset, Durham and West Midlands.

DISCRIMINATION

The data used by predictive policing programs inevitably reflects pre-existing patterns of discrimination. For example, predictive mapping programs utilise police arrest data, which does not present an accurate picture of where crime is happening – instead, it reflects how different communities are already being policed and how crime is currently reported or goes unreported for reasons including mistrust and fear. Mapping programs are likely to spark “feedback loops” – a process which sees officers sent back to patrol communities which are already experiencing over-policing. The use of these programs puts a “neutral” technological veneer on biased policing and further embeds this approach into policing practice.
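
A toy simulation illustrates the feedback loop. In the hedged sketch below – every rate and figure is invented, and no real force’s allocation logic is claimed – two areas have identical underlying offending, but the area that starts with more recorded incidents, because it was already more heavily policed, keeps attracting patrols and its recorded count pulls further ahead.

```python
import random

random.seed(0)

# Two hypothetical areas with the SAME underlying level of offending.
true_offences_per_week = {"A": 10, "B": 10}

# Area A starts with more *recorded* incidents simply because it was policed more heavily.
recorded = {"A": 30, "B": 10}

DETECTION_IF_PATROLLED = 0.8  # assumed chance an offence is recorded where patrols are sent
DETECTION_OTHERWISE = 0.2     # assumed chance elsewhere

for week in range(10):
    # "Predictive" allocation: send patrols to the area with the most recorded incidents.
    patrolled = max(recorded, key=recorded.get)
    for area, offences in true_offences_per_week.items():
        p = DETECTION_IF_PATROLLED if area == patrolled else DETECTION_OTHERWISE
        recorded[area] += sum(random.random() < p for _ in range(offences))

# Area A's recorded total pulls further ahead despite identical underlying offending.
print(recorded)
```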

Discriminatory approaches are also entrenched in individual risk assessment programs, which encourage an approach to policing based on offensive stereotyping. For example, Durham Police have used a program called the Harm Assessment Risk Tool (HART) since 2016. The program uses machine learning to decide how likely a person is to commit an offence over the next two years. It analyses data including a person’s age, gender and part of their postcode – which may act as a proxy for race and encourage dangerous profiling.

The HART program has previously used data provided by Experian to link people’s names to stereotypes: for example, people called Stacey are likely to fall under “families with needs” who receive “a range of benefits”, while Terrence and Denise are “low income workers” with “few qualifications”. Running this kind of data through individual risk assessment programs inevitably encourages discriminatory associations between factors such as family circumstances, income, class and the propensity to commit crime.
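
The sketch below is emphatically not the HART model itself, but a generic illustration on entirely synthetic data of how the proxy problem arises: when a field such as a postcode prefix is correlated with skewed past outcomes, a model can learn to treat it as a stand-in for a protected characteristic, so that two people identical in every other recorded respect receive different “risk” labels.

```python
from sklearn.tree import DecisionTreeClassifier

# Entirely synthetic records: [age, gender (0 = F, 1 = M), postcode prefix (0 = "X1", 1 = "X2")].
# The labels stand in for past police outcomes, which in this toy data are skewed towards
# postcode "X1" because that area was already more heavily policed – they do not measure behaviour.
X = [
    [22, 1, 0], [34, 0, 0], [29, 1, 0], [41, 1, 0],
    [23, 1, 1], [35, 0, 1], [30, 1, 1], [40, 1, 1],
]
y = [1, 1, 1, 0, 0, 0, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two people identical in every recorded respect except postcode prefix
# can come out with different "risk" labels.
print(model.predict([[29, 1, 0], [29, 1, 1]]))  # e.g. [1 0] on this toy data
```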

Meanwhile, a new risk assessment tool being developed for use by West Midlands Police will draw upon stop and search data when assessing someone’s risk – even though this police power is disproportionately deployed against people from BAME communities. Government statistics show that black people are around nine and a half times as likely to be stopped as white people.[1]

A UBIQUITOUS WEB OF SURVEILLANCE

The potential for predictive policing to be used alongside other deeply invasive tools gives rise to a sinister web of surveillance, drawing together biometric, surveillance and communications data to create an unprecedented intrusion into our daily lives. When these programs are combined with the roll-out of facial recognition, mobile phone data extraction tools and body-worn video, we move towards a model of policing which sees a significant shift in the balance of power between the state and the individual, with people forced to justify their entitlement to privacy rather than the state justifying its intrusion.

As we become more conscious of the ways in which our data can be used against us, there will be a chilling effect on what we say, where we go and who we associate with.

And the threat to our free expression and free association will be most keenly felt by those communities which are already subject to disproportionate interventions by the police.

TRANSPARENCY

The calculations performed by predictive policing programs are opaque, and their recommendations cannot be adequately explained, interrogated or challenged. While this is particularly the case where predictive policing programs have been purchased from private companies – because the algorithms are likely to be treated as trade secrets – it is also true of internally developed programs. Even the scientists who create the algorithms will be unable to fully explain how a program arrives at its decision. This means that we cannot interrogate these programs for bias or challenge their predictions in the same way we could challenge a human officer’s decision.

While many police forces will rely on the fact that a human officer will always be involved in the decision-making process, this raises significant concerns about automation bias – where a human decision-maker defers to the machine and accepts its recommendation. Enabling humans to work alongside algorithms requires in-depth and long-term research, analysis and testing – and there is no evidence, as yet, that automation bias can be sufficiently mitigated.

OVERSIGHT AND ACCOUNTABILITY

A multitude of oversight committees and boards have been introduced over recent months, focused on everything from algorithmic bias to the ethical concerns presented by individual technologies. While critical engagement is to be welcomed, we should be cautious about how far these bodies are equipped to push back against the introduction of these programs and practices. These technologies should be considered using a human rights framework rather than one of unaccountable ethics – and before we ask about fairness, accountability and transparency, we should be asking what the risks to our rights are and whether these technologies should play a role in policing at all.

THE FUTURE

Liberty calls for an end to the use of predictive mapping programs and individual risk assessment programs as they currently exist. At the very least, police forces in the UK should fully disclose information about their use of predictive policing programs so that there can be a well-informed public and parliamentary debate about the way we want to be policed.

The Government should now focus on meaningful responses to underlying causes of crime. Technology cannot provide a quick fix to problems which require a holistic consideration of, and investment in, areas such as education, housing, employment, mental health support and social care.

Computer programs which can assist in preventing crime before it even happens may sound laudable, but they perpetuate approaches to policing predicated on dangerous ideas about who is inherently risky and likely to offend. The impact of these approaches on our privacy and freedom of expression is significant – but they also entrench the discrimination which is deeply embedded in our society’s data.

By Hannah Couchman, Policy and Campaigns Officer, Liberty

Liberty’s report, Policing by Machine, is available at: www.libertyhumanrights.org.uk/pbm

[1] https://www.ethnicity-facts-figures.service.gov.uk/crime-justice-and-the-law/policing/stop-and-search/latest
