Can police ethically implement complex algorithms?

8 February 2024


Article Written By

Dr Ruth Spence, Dr Tamara Polajnar, Hazel Sayer

AI can ‘transform’ policing, but concerns about bias in its use across the criminal justice system, the quality of data and a lack of transparency must be addressed

Police resources are stretched by the demands of responding to increasingly complex crimes. As a result, forces are turning to ‘data-driven policing’, using data analytics and machine learning algorithms to assist with tasks such as crime prediction and resource allocation. This has led to a range of frameworks, such as ALGO-CARE and the Algorithmic Transparency Recording Standard, to help guide police through the process. However, legal scholars and human rights groups have continued to raise concerns about the use of algorithms in policing, prompting the question of whether they can truly be introduced ethically.

One of the issues these groups have drawn attention to is the persistence and exacerbation of bias. Several studies have found that models used in the criminal justice system are biased and susceptible to feedback loops, where a model’s outputs influence the data it is later trained on, which can entrench the problem: if predicted ‘hotspots’ receive more patrols, more crime is recorded there, which in turn reinforces the prediction. Function creep is another concern, where the aim of a model is expanded or changed so that it is no longer used in the way originally intended, making it more susceptible to abuse and inaccuracies. On top of this, there are potential problems with the limited data available, the quality of that data and a lack of transparency around how the models work. Activists argue that, taken together, these issues can undermine human rights. For instance, the right to a fair trial rests partly on being able to contest the evidence, but if individuals cannot understand the reasoning behind an algorithmic output, it is hard to argue against it.
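To make the feedback-loop mechanism concrete, here is a minimal simulation sketch. All numbers are hypothetical and the model is deliberately simplistic; it is not any force’s actual system, only an illustration of how allocating patrols in proportion to recorded crime can amplify an initial recording disparity even when the underlying crime rates are identical:

```python
import random

# Hypothetical feedback-loop illustration (not based on any real dataset).
# Two areas share the SAME true offence rate, but area A starts with more
# recorded crime, e.g. due to historical over-policing.
TRUE_RATE = 0.1                  # identical underlying offence rate
recorded = {"A": 60, "B": 50}    # biased starting point in the records
PATROLS_PER_DAY = 100

random.seed(42)
for day in range(365):
    total_recorded = sum(recorded.values())
    for area in recorded:
        # The 'predictive' model allocates patrols in proportion to
        # recorded crime...
        patrols = round(PATROLS_PER_DAY * recorded[area] / total_recorded)
        # ...and each patrol detects offences at the true rate, so more
        # patrols in an area mean more offences entering the records there.
        detections = sum(random.random() < TRUE_RATE for _ in range(patrols))
        recorded[area] += detections

print(recorded)  # area A's recorded share keeps growing despite equal true rates
```

Over a simulated year, area A’s recorded total pulls steadily further ahead of area B’s even though neither area was ever more criminal than the other; it is this kind of self-reinforcing drift that the frameworks above aim to guard against.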

And there are practical and organisational barriers. The development of complex algorithms depends on data science expertise, but domain experts and data scientists often work in silos, pursuing the same objective without communicating. Without knowing what is involved, police may embark on data modelling projects that waste limited resources, end up being ineffective or, worse, introduce some of the problems described above. Conversely, data scientists may spend time on projects that are irrelevant to current operational needs and are therefore never properly implemented. Equally, the current data landscape is fragmented: police data is managed across multiple systems that are not necessarily compatible, and forces often lack the machine learning expertise to take advantage of advances in this area.

Yet none of these problems is insurmountable, and if done right, AI in policing could deliver substantial benefits. We were funded to explore how predictive modelling can be used to improve the accuracy of prioritisation and prediction, and to evaluate how this can be done ethically. Several factors became apparent over the course of our year-long project. First, there were no practical guidelines setting out how police could develop and implement complex algorithms. Second, the police view data science as a way of increasing their capacity to respond to crime, but models like these can only indicate who or where to target; further investment in staff is required to act effectively on that output. Third, multi-disciplinary teams that include data scientists and domain experts (including frontline staff) are needed to ensure models address the right problems. Lastly, the machine learning and data science expertise needed to develop these models properly is often lacking.

To help address some of these issues, we developed RUDI – Rationale, Unification, Development and Implementation – a practical ‘how-to’ guide that walks police through the process of developing and implementing complex algorithms. This includes describing the sort of machine learning expertise needed and highlighting capacity and team-working issues. Like the frameworks that came before it, RUDI cannot fix the issues, but it does point police towards how to fix them.

So, to come back to the initial question of whether police can ethically implement complex algorithms, we believe the answer is yes. The drive towards digital transformation demonstrates the police’s continuing motivation to implement data modelling approaches, and the combination of national guidance, frameworks and now practical advice means solutions are available to ensure the benefits of AI outweigh the risks. However, there are still challenges to address, and it remains to be seen whether the strategic motivation will be matched by the enthusiasm to do the groundwork.

About the authors

Hazel Sayer is a Research Fellow at Bournemouth University. Hazel draws on qualitative and quantitative research methods to investigate transformative policing of Violence Against Women and Girls (VAWG).

Dr Ruth Spence is a Senior Research Fellow at the Centre for Child Abuse and Trauma Studies (CATS) at Middlesex University. Ruth uses quantitative and online methodologies to research online harm, trauma and its sequelae, working with partners in the third sector, police and industry.

Dr Tamara Polajnar is a freelance Machine Learning Scientist consulting through Middlesex University on the ethical use of AI in policing. Tamara specialises in ML applications, particularly Natural Language Processing.