There have been enormous advances in the scale and availability of data, the raw material on which machine learning depends.

This has led to Artificial Intelligence (AI) being adopted ever more widely in the private sector. But can AI improve decision-making in an area where personal freedom is at stake?

That’s what I recently discussed with Professor Richard Berk, Professor of Criminology and Statistics at the University of Pennsylvania. He told me that a variety of applications are currently being trialled: “One is called ‘predictive policing’, which allocates police resources, let’s say patrol cars, on the basis of forecasting where and when there’s a risk of crime. The problem is it’s very hard to do that better than police departments already do. The other application is called ‘risk assessment’. Before a judge releases an individual on probation, or before a parole board releases somebody from prison, they commonly assess ‘future dangerousness’. That’s a forecast, and the algorithms do that quite well. There’s no question that we can be fairer, more accurate and more transparent than a judge who’s deciding how a particular offender should be treated based on personal experience or background.”

Protecting trust and transparency

The criminal justice system relies on the transfer of information and data. The ability to access, curate and manage data assets is central to investigating crime and bringing people to justice. However, that data is siloed and owned by different organisations. To be more effective, we need to bridge those silos and enable the police, courts and social services to share data and improve their decision-making. Yet using AI to draw intelligent conclusions from this data poses ethical issues. How do we re-create human intelligence using machines and data, and reconcile that with the intimacy of our personal and professional lives? It raises complex questions about truth, transparency and the transformation of the data itself. So I think the jury’s still out on who actually benefits from this.

Professor Berk agreed that while there’s lots of evidence from the private sector that AI calculations can be useful in financial markets, advertising and medical applications, there’s some way to go before we see it embedded in the criminal justice system: “In areas like criminal justice, we still haven’t demonstrated that we can do a whole lot better than current practice. Then there are a whole range of difficult, ethical issues that are, at this point, unresolved.”

It’s important to remember that AI is not being used in the criminal justice system to replace human decision-making. Instead, the aim is to amplify and improve the quality of the decisions made at different stages of the justice process.

Richard explained that in the US, where AI is used to allocate police resources and to supervise people after their release from prison, the controversy it has caused stems not from the technology itself but from the human decisions that feed it: “A common example is that we often use prior record as a key indicator of future risk, but prior record depends on previous contacts with the criminal justice system, such as arrests. The point is often made that, because of racial disparities in how police are allocated, African Americans are going to have longer prior records: there are more police in their neighbourhoods, which exposes them to more arrests. Some people argue that’s an indication of bias; others argue that, in contrast, we’re allocating police to where the crime is, driven primarily by emergency calls, so the police are actually being responsive to folks in those communities.”
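To make the mechanism Richard describes concrete, here is a toy sketch, with every number invented for illustration: two groups with the same underlying offending rate but different levels of police exposure end up with different recorded prior records, and any model that uses prior record as a feature inherits that gap.

```python
# Toy illustration only: all rates below are invented, not real data.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Both groups have the same true offending behaviour...
offences = rng.poisson(1.0, size=(2, n))
# ...but group B is policed twice as heavily, so more offences are detected.
detection_rate = np.array([0.3, 0.6])
recorded = rng.binomial(offences, detection_rate[:, None])  # arrests on record

for name, rec in zip(["group A", "group B"], recorded):
    print(f"{name}: mean recorded prior arrests = {rec.mean():.2f}")
# Any model that uses "prior record" as a feature inherits this gap,
# even though the underlying behaviour of the two groups is identical.
```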

There are also issues with using AI for facial recognition. Societies around the world have responded differently to its use, and to the question of whether it erodes personal privacy. Professor Berk believes it can have a practical use: “Facial recognition is an interesting example of something that is just calculations but looks like intelligence. I mean, it’s ‘human vision’ after all. Facial recognition is just analysing pictures. You lay a grid over a picture and you get pixels, and those pixels have properties like colour and brightness. After looking at thousands of pictures, computers can learn which features of those pixels are associated with one person versus another.”
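As a rough sketch of what Richard is describing, and nothing like a production system, the snippet below flattens each face image into its grid of pixel values and lets a standard classifier learn which pixel features are associated with which person. The bundled scikit-learn faces dataset and the choice of logistic regression are my own illustrative assumptions.

```python
# A minimal sketch: images as pixel grids, a classifier learning which
# pixel features distinguish one person from another. Illustrative only.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

faces = fetch_olivetti_faces()                    # 400 greyscale faces, 40 people
X = faces.images.reshape(len(faces.images), -1)   # flatten each 64x64 pixel grid
y = faces.target                                  # identity label for each image

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Each flattened pixel grid is a feature vector; the model learns which
# pixel patterns (brightness values) go with which person.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out identification accuracy: {clf.score(X_test, y_test):.2f}")
```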

Developing a partnership with AI

The application of AI in the criminal justice system aims to reduce uncertainty about whether a person is going to offend or reoffend. I can see how AI can be used as a trusted partner, as another voice or another input, but not as the key decision-maker. Professor Berk agreed that, just as with human decision-making, the use of AI can’t be 100% accurate all of the time: “With AI, we can do a pretty good job of predicting which individuals, when released from prison, are going to be rearrested. That accuracy is pretty high. But experienced criminal justice decision-makers can do much the same thing, maybe not as well, and that’s not a surprise. It’s only ever a probability, not a certainty. So we can predict with high confidence, but not complete confidence.”
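A minimal sketch of that point, using hypothetical features and synthetic data rather than the variables any real risk-assessment tool uses: the model’s output is a probability that informs a human decision-maker, never a yes/no verdict.

```python
# Hypothetical sketch: the features and data below are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000
prior_arrests = rng.poisson(2.0, n)          # hypothetical feature
age_at_release = rng.integers(18, 65, n)     # hypothetical feature
# Synthetic outcome: rearrest more likely with more priors and younger age.
logit = 0.4 * prior_arrests - 0.05 * (age_at_release - 18) - 1.0
rearrested = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([prior_arrests, age_at_release])
model = GradientBoostingClassifier().fit(X, rearrested)

# The output is a probability to weigh alongside professional judgement,
# not a verdict.
p = model.predict_proba([[5, 24]])[0, 1]
print(f"estimated probability of rearrest: {p:.0%}")
```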

I believe that AI should be advancing the use of data for good: for society, for consumers and for citizens. It can amplify human brilliance, the ability of people to make decisions based on their own professional experience, and both aid and challenge that decision-making. In justice and crime, certainly, the use of AI needs to be approached as a partnership and a collaboration. Ultimately, humans still need to make the decisions.

Written by

Doug Brown

AI and Data Lead for Capita Consulting

As the Chief Data Scientist and Partner in Capita’s new consultancy practice, Doug brings more than 26 years of practical experience delivering award-winning digital and Big Data transformation projects at leading advisory firms and start-ups in Europe and the US.
