PROBabLE Futures

PROBabLE Futures (Probabilistic AI Systems in Law Enforcement Futures) is a four-year, £3.4M RAi-funded project led by Northumbria University, in partnership with the Universities of Glasgow, Northampton, Leicester, Aberdeen and Cambridge. It aims to deliver an ‘operational-ready’ blueprint for an evidence-based, contextually informed and future-oriented framework for responsible AI in law enforcement and criminal justice.

The project team draws on the expertise of the Alan Turing Institute, the National Police Chiefs’ Council, police forces, the Home Office, the legal charity JUSTICE, and three industry partners, including Microsoft. Focusing on probabilistic AI across policing and the wider criminal justice and law enforcement system, the project will deliver tested and trusted advice and frameworks to commercial system developers and public sector bodies on when and how to deploy AI effectively, with the long-term trust of communities in mind. Activities will include mock trials using AI outputs as evidence, and applying lessons from prior technological developments to the study of ‘Probabilistic AI’ in policing, intelligence, probation and wider criminal justice contexts.

Probabilistic systems supported by AI, such as facial recognition, predictive tools, large language models (LLMs) and network analysis, are being introduced at pace into law enforcement. Whilst these systems offer potential benefits, decision-making based on ‘Probabilistic AI’ has serious implications for individuals. The key problem for responsible AI is that the uncertain or probabilistic nature of outputs is often obscured or misinterpreted, and the underlying data is sensitive and of varying quality. If AI systems are to be used responsibly, attention must also be paid to the chaining of systems: the cumulative effects of AI systems feeding into one another. PROBabLE Futures will review, question and evaluate the use of probabilistic AI across different stages of the criminal justice system.

Inspiration

Decision-making based on Probabilistic AI in the law enforcement process can have serious consequences for individuals, especially where the uncertain or probable nature of outputs is obscured or misinterpreted.

Ambition

To build a holistic, rights-respecting framework to steer the deployment of Probabilistic AI within law enforcement, creating a coherent system, with justice and responsibility at its heart.

Impact

Law enforcement bodies, policy-makers and law-makers will apply the factors and requirements set out in our framework, contributing to the development of a system-based approach to AI in law enforcement.

Objectives

  • Mapping the probabilistic AI ecosystem in law enforcement
  • Learning from the past
  • Scoping for the future, including evaluation of contested technologies such as remote weapons scanning
  • Focusing upon practical use of AI & the interaction of multiple systems (chaining)
  • Using an XAI taxonomy and novel methods, including storytelling and mock trials using AI evidence
  • Establishing an experimental oversight body including members representing under-represented groups



PROBabLE Futures is a Keystone Project funded by Responsible AI UK.

