Introduction

On 6 May 2025, I attended a webinar, “AI and Machine Learning in Criminal Investigations”, presented by Katharine Pearson. The webinar covered the historical development of AI, its current applications in law enforcement, and the ethical issues it raises. This report examines the potential uses of AI in policing, the challenges of bias and privacy, and the future of AI-driven investigations. Drawing on insights from the webinar, it explores how AI can improve the criminal justice system while weighing the associated risks.

History of AI and Alan Turing

The history of artificial intelligence (AI) began with Alan Turing’s 1950 question, “Can machines think?”. Turing introduced the Turing Test, in which a judge tries to tell the difference between a person and a machine by asking questions. In 2014, a chatbot named Eugene Goostman was claimed to have passed the test by convincing 10 of 30 judges (33%) that it was human. However, this claim was criticised, as the chatbot was presented as a 13-year-old Ukrainian boy, which may have lowered the judges’ expectations regarding language proficiency. More recently, OpenAI’s GPT-4.5 was judged to be human 73% of the time, demonstrating significant progress in AI’s ability to mimic human conversation.

Before Turing, asking “Can machines think?” was a philosophical question with no clear way to resolve it. However, by rephrasing it as “Can a machine fool a human into thinking it’s another human?”, Turing gave researchers a concrete, behaviour-based benchmark rather than an abstract, untestable one.

Specific Applications of AI in Criminal Investigations

AI is used in criminal investigations in various ways:

  • UK Policing: AI is used to predict peak demand and automate the redaction of sensitive data, enabling officers to focus on critical tasks (Policing and AI).
  • Global Examples:
    • In China, AI optimises traffic lights to reduce police response times.
    • In Spain, the VioGen system predicts domestic violence incidents, with a 95% acceptance rate by police. However, failures such as the murder of Catalina, whose case was misclassified as medium risk, demand re-evaluation of the need for human involvement.
  • Forensic Tools: AI can enhance DNA analysis, fingerprint matching, and crime scene image analysis. A 2025 study in the Journal of Forensic Sciences demonstrated the potential of AI in forensic image analysis, supporting human experts (AI in Forensic Analysis).
  • Video Analytics: AI can be used for facial recognition and activity detection, though there are concerns about misidentification (AI in Criminal Investigations).
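The VioGen example above turns on how a numeric risk score is mapped to a discrete risk level: a case sitting just below a cutoff receives the lower label. A minimal sketch (all thresholds, scores, and function names here are hypothetical, not VioGen’s actual model) shows why borderline classifications warrant human review:

```python
# Hypothetical risk classifier: maps a 0-100 score to a discrete level.
# Thresholds are illustrative only, not those used by any real system.
THRESHOLDS = [(75, "high"), (50, "medium"), (25, "low")]

def classify(score: float) -> str:
    for cutoff, level in THRESHOLDS:
        if score >= cutoff:
            return level
    return "negligible"

def needs_human_review(score: float, margin: float = 5.0) -> bool:
    """Flag cases within `margin` points of any threshold for review."""
    return any(abs(score - cutoff) < margin for cutoff, _ in THRESHOLDS)

case_score = 73.0                      # just under the "high" cutoff
print(classify(case_score))            # medium
print(needs_human_review(case_score))  # True
```

A score of 73 is labelled “medium” even though it is almost indistinguishable from a “high”-risk case; flagging near-threshold scores for an officer’s judgement is one simple safeguard against exactly this failure mode.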

Challenges and Biases in AI Systems

The use of algorithmic tools in policing could lead to reduced pressure on resources, improved public safety and more consistent outcomes (UK Parliament). However, integrating AI into policing raises ethical concerns. For example, facial recognition tools are at risk of being racially biased because they are trained using historically biased crime data.
The Brennan Center for Justice likewise warns that predictive policing risks reproducing existing racial biases (Predictive Policing Explained). The VioGen case in Spain, where a misclassified risk led to a tragic outcome, illustrates the dangers of over-reliance on AI.

The future of AI in policing looks promising. Quantum computers could allow faster data analysis and more complex simulations, improving predictive models (Your Quick Guide to Quantum and AI: The Future of Computing or Just Hype?). Advances in natural language processing (NLP) could enable threat detection through real-time analysis of communication data.

However, these advancements require ethical considerations to prevent misuse.

Criminals’ Use of AI

Criminals are increasingly exploiting AI:

  • Deepfakes: Used for blackmail, fraud, or disinformation (Criminals Use AI).
  • Phishing: AI generates convincing phishing emails at a scale previously impossible for criminals, increasing scam success rates.
  • Voice Cloning: Enables impersonation.
  • Automation: AI can automate cyberattacks and efficiently identify vulnerabilities.

Law enforcement must develop AI-based countermeasures.

Human and AI Collaboration

AI should be used to augment, not to replace, human capabilities. Although AI is excellent at processing data, human judgement is crucial for understanding context and making critical decisions. For example, automated redaction requires human verification to ensure accuracy, and predictive policing depends on officers interpreting AI outputs (AI in Criminal Justice).
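The automated-redaction workflow described above can be sketched in a few lines. The example below uses simple regular expressions to redact likely personal data and logs every automated change for human verification; the patterns and names are illustrative assumptions, not the rules used by any police force:

```python
import re

# Illustrative patterns for sensitive data; real redaction systems use far
# richer models, and every automated match still needs human sign-off.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK_PHONE": re.compile(r"\b0\d{10}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with [REDACTED:<type>] and log them for review."""
    review_log = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            review_log.append(f"{label}: {match}")
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, review_log

statement = "Contact the witness on 01234567890 or jo@example.com."
redacted, log = redact(statement)
print(redacted)  # a human reviewer then checks each entry in `log`
```

The design choice mirrors the point made above: the machine does the repetitive pattern-matching, while the review log keeps a human in the loop to catch false positives and, more importantly, sensitive data the patterns missed.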

Data Privacy

AI’s reliance on large datasets raises privacy risks, particularly when data is stored in the cloud. As noted in the webinar, offline AI solutions offer a way to keep data secure, although these solutions have serious limitations.

Public Trust

Webinar attendees raised concerns about AI echo chambers, where AI-generated content could reinforce biases and harm trust in policing. Transparent communication about the use, limitations and oversight of AI is extremely important for public confidence.

Conclusion

AI has the potential to transform criminal investigations, but it carries risks of bias, privacy violations and public distrust. By prioritising human oversight and transparency, AI can support justice and safety.