Unlocking the Potential of AI for English Law

 

Artificial intelligence (AI) is attracting enormous hype in the media and public discourse. Well-publicised recent successes have included self-teaching board game champions and leaps towards self-driving cars. Economists see AI as a nascent general purpose technology, capable of transforming working patterns in professional sectors, including law, in a way that some liken to the impact of the industrial revolution on manual labour.

We are collaborating with an interdisciplinary team of academics at Oxford and several private sector partners on an ambitious programme of research entitled Unlocking the Potential of AI for English Law, which investigates the opportunities for, and barriers to, the application of AI in law. The project is funded by UK Research and Innovation as part of its Next Generation Services investment programme, an Industrial Strategy Challenge Fund intended to stimulate research partnerships between academia and the private sector in areas of importance to the UK economy. Our research questions include the impact of AI on business models in legal services; the role AI might play in dispute resolution, and its impact on the quality of, and access to, justice; the new skills needed to make the most effective use of AI; and ways of delivering training and education to meet these needs. In this short piece, we focus on the issues relating to dispute resolution.

The Rise of AI: this time, it’s different?

Recent advances in AI have mainly been based on machine learning (ML). This relies on applying computing power to very large amounts of data, the availability of which has blossomed in recent years. Within specific domains, current ML systems have achieved (super)human performance on many narrow tasks, including image recognition, language translation and information retrieval. However, there is little potential for transfer of understanding between applications. A system that delivers human-level performance across the full range of tasks (artificial general intelligence or ‘AGI’) is, according to experts, anywhere between a decade and two centuries away.

This is not to say that lawyers shouldn’t pay attention: one context in which ML techniques have already achieved superhuman performance is in identifying relevant documents from amongst very large bodies of material. In contentious matters this is known as “technology-assisted review” (TAR), a practice spurred by the rapid growth in electronically stored information relevant to litigation. Similarly, large numbers of documents must be navigated in the context of transactional due diligence, again making it economic to apply supervised learning techniques. A number of platforms applying this kind of technology are now in use by large UK law firms in both contexts.
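To make the underlying technique concrete, the sketch below shows a minimal supervised relevance classifier of the kind that sits at the heart of TAR tools: documents already coded by a human reviewer are used to train a model, which then ranks the unreviewed collection by predicted relevance. It is an illustration only; the documents and labels are invented, and commercial platforms use far larger training sets and more sophisticated workflows (including iterative rounds of review and retraining).

```python
# A minimal sketch of supervised document relevance classification, the core
# technique behind technology-assisted review (TAR). The documents and labels
# below are invented placeholders, not real matter data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents a human reviewer has already coded as relevant (1) or not (0)
reviewed_docs = [
    "email discussing the disputed supply contract and delivery dates",
    "invoice for office stationery",
    "board minutes approving the contract variation",
    "newsletter about the staff summer party",
]
reviewer_labels = [1, 0, 1, 0]

# Bag-of-words (TF-IDF) features plus a simple linear classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviewed_docs, reviewer_labels)

# Rank the unreviewed collection by predicted probability of relevance,
# so reviewers look at the most promising documents first
unreviewed_docs = [
    "draft amendment to the supply contract",
    "canteen menu for next week",
]
scores = model.predict_proba(unreviewed_docs)[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```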

Another fast-growing application, which may be of particular concern to lawyers worried about being replaced by robots (albeit perhaps not in the immediate future), is the use of technology to predict case outcomes. A range of tools that mine and aggregate data from prior disputes promise to give parties information about the prior record of particular judges and lawyers. These data can then be fed into an ML model to predict outcomes. Early work has achieved accuracy in excess of 70 per cent in predicting win/lose outcomes for disputes in the European Court of Human Rights and the US Supreme Court. However, the enormous variety of disputes complicates the analysis, and early commercially available versions of this type of application focus on particular dispute types – from challenging parking fines to predicting employment status – to achieve better results.
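As a rough illustration of how such a predictor might be built, the sketch below feeds structured features of prior disputes (the judge, the type of claim, whether the claimant was represented) into a simple model and asks it for the probability that a claimant wins a new, similar dispute. The rows are invented placeholders; real systems are trained on thousands of prior cases and typically combine structured data with the text of submissions and judgments.

```python
# A minimal sketch of outcome prediction from structured features of prior
# disputes. All rows below are invented purely for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

prior_disputes = [
    {"judge": "A", "claim_type": "unfair_dismissal", "claimant_represented": True},
    {"judge": "A", "claim_type": "unfair_dismissal", "claimant_represented": False},
    {"judge": "B", "claim_type": "parking_fine", "claimant_represented": False},
    {"judge": "B", "claim_type": "unfair_dismissal", "claimant_represented": True},
]
outcomes = [1, 0, 1, 1]  # 1 = claimant won, 0 = claimant lost

# One-hot encode the categorical features, then fit a simple linear model
model = make_pipeline(DictVectorizer(), LogisticRegression())
model.fit(prior_disputes, outcomes)

# Estimated probability that the claimant wins a new, similar dispute
new_dispute = {"judge": "A", "claim_type": "unfair_dismissal", "claimant_represented": True}
print(f"Predicted probability of claimant success: {model.predict_proba([new_dispute])[0, 1]:.2f}")
```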

 

Mapping the opportunities

It is important to understand that in these applications the AI system is not actually “applying” the law. Rather, it is modelling statistical relationships between the language and outcomes of prior disputes in order to draw inferences about likely outcomes in other matters. Even so, predicting outcomes with a sufficient level of accuracy is likely to be very useful for many commercial parties: a reliable prediction allows them to determine an appropriate settlement value and avoid the costs of litigation.
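A very simplified, purely illustrative calculation shows how a predicted probability of success might feed into that settlement decision. The figures below are invented, and the calculation ignores risk aversion, costs-shifting rules and much else.

```python
# Illustrative arithmetic only: how a predicted probability of success
# might inform a settlement decision. All figures are invented.
p_win = 0.70               # model's predicted probability that the claim succeeds
damages = 100_000          # damages expected if the claim succeeds (£)
litigation_costs = 30_000  # irrecoverable costs of fighting the case (£, simplified)

expected_value_of_trial = p_win * damages - litigation_costs
print(f"Expected value of going to trial: £{expected_value_of_trial:,.0f}")

# On this crude expected-value view, any settlement offer above that figure
# looks attractive to the claimant (and vice versa for the defendant).
```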

The application of AI models to precedent data raises many important questions. From a practical standpoint, could an appropriately trained AI model be embedded within an arbitral or other dispute resolution mechanism, to provide a cheap means of resolving disputes? Even if an AI model’s predictions of how cases “ought” to be resolved contained a degree of random error (as compared with the application of the law by a human judge or arbitrator), it could still be attractive for parties who are repeat players in commercial disputes, across which individual errors could average out. From a normative perspective, automation of dispute resolution raises further questions about the circumstances in which such a process might be subject to (human) judicial review.

Access to Justice

More fundamentally, by lowering costs automation also holds out the promise of facilitating access to justice for many parties, including vulnerable individuals and small businesses. Yet while automation may readily promote access, it will also present challenges regarding the nature of the justice thereby engaged. The constitutional guarantees underpinning access to justice invoke long-standing principles including the central role of adversarial litigation and the right to a trial at common law, as highlighted by Lord Reed in the UK Supreme Court’s decision in R (on the application of Unison) v Lord Chancellor [2017] UKSC 51 at [66]-[85].

In this regard, a significant limitation on the use of ML-based AI in legal applications is the lack of transparency concerning the factors relevant to a prediction. ML can give an expected outcome (and perhaps even suggest the appropriate quantum of damages) but generally cannot provide any readily interpretable explanation of how that outcome was reached. For the time being, at least, this seems a fundamental obstacle to automation within our existing procedural rules.

A “front end” for an ML-based system could be framed in terms of lay questions: the user provides answers, and an outcome is generated automatically. Indeed, early experimentation as part of HMCTS’s digitalisation agenda is beginning to develop case triage on that basis. Simple matters such as conveyancing, lease agreements and wills can readily be automated; personal injury could be turned into a liability estimation mechanism, and so forth. As trust in such systems grows, parties might begin to feel comfortable waiving their right to a trial before a judge.

The Need for Data

The automation of legal decision-making, finally, is not just a question of automating analytical processes through ML and related techniques: the best algorithm is of little use unless it can be ‘trained’ on relevant data sets. Data availability is thus crucial: whereas significant proportions of US and European case law have been made available in machine-processable formats, English case law (as well as justice system data more broadly) is yet to become similarly available.

The challenge does not end with the provision of data. A further, related risk inherent in the application of ML to existing datasets of human decisions is that the data may reflect some element of bias in prior decisions against persons with (now) protected characteristics. Given changing attitudes, and changing law, over time, it seems plausible that such bias is more likely to be present in older decisions. ML applications trained on such data may simply replicate this bias. Because ML models cannot explain how their results are reached, it is not possible simply to examine the process of reasoning. Instead, it is necessary to explore other mechanisms for ensuring that decisions are free from discrimination.
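One such mechanism, sketched below purely as an illustration, is a simple audit of the model’s outputs: comparing the rate of favourable predictions across groups defined by a protected characteristic. The data are invented placeholders, and a genuine fairness audit would use established statistical tests and a range of fairness metrics rather than this single, crude comparison.

```python
# A minimal sketch of auditing model outputs for group-level disparities,
# one possible check when the model's reasoning cannot be inspected directly.
# The predictions and group labels below are invented placeholders.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = model predicts a favourable outcome
protected_group = ["a", "a", "a", "a", "b", "b", "b", "b"]  # hypothetical characteristic

def favourable_rate(group: str) -> float:
    """Share of cases in the group that receive a favourable prediction."""
    group_predictions = [p for p, g in zip(predictions, protected_group) if g == group]
    return sum(group_predictions) / len(group_predictions)

rate_a, rate_b = favourable_rate("a"), favourable_rate("b")
print(f"Favourable-outcome rate, group a: {rate_a:.0%}")
print(f"Favourable-outcome rate, group b: {rate_b:.0%}")
print(f"Disparity (a minus b): {rate_a - rate_b:+.0%}")
```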

Conclusion

As even this short overview demonstrates, the application of AI to law raises many interesting and challenging questions.  Our research project will explore these over the course of the next two years: if you are interested to learn more or become involved, please get in touch!

 

john.armour@law.ox.ac.uk

John Armour, Hogan Lovells Professor of Law and Finance at the University of Oxford, and a Fellow of Oriel College

jeremias.prassl@law.ox.ac.uk

Jeremias Prassl is Associate Professor in the Faculty of Law, University of Oxford

