PSNI Sees Surge in AI‑Related Reports, Deepfakes Drive Concerns
Belfast, Northern Ireland (UK) — The Police Service of Northern Ireland (PSNI) has reported a significant rise in the number of AI‑related incidents being brought to its attention — a trend that underscores how artificial intelligence tools are increasingly impacting public safety and law enforcement. The jump in reports adds to broader concerns about misuse of generative AI technologies and the challenges police face in responding.
Sharp Increase in Complaints Linked to Generative AI
According to figures obtained through a Freedom of Information request, the PSNI recorded 63 reports mentioning the term “AI” in 2025 — up from 48 in 2024. The majority of these reports involve deepfake content, particularly of a sexual nature, including some cases involving children.
A smaller portion of reports, around 13, related to mental health concerns in which AI was mentioned but was not the central issue.
Deepfakes Trigger Serious Safety Worries
Recent incidents that have drawn public and police attention include:
- Blackmail attempts involving AI‑generated sexualised images, prompting community organisations to warn young people and parents after victims were targeted.
- Circulation on social media of intimate deepfakes created with AI tools such as Grok, a platform linked to Elon Musk's X, prompting calls from campaigners for stronger moderation.
Experts warn that such misuse of AI technologies may be difficult to curtail because sophisticated tools are widely available and easy to use, making both policing and prevention more complex.
Calls for Broader Action and Awareness
Academic and policing voices emphasise that the rise in AI‑related harms — particularly those involving synthetic media — reflects a societal issue as well as a technological one. They argue that a mix of regulation, public education and accountability measures is needed to help mitigate misuse, alongside law enforcement efforts.
Context: AI and Policing Challenges
Northern Ireland’s police force is already under pressure to adapt to digital crime and resource constraints. Recent oversight reports have highlighted gaps in capacity to tackle online abuse and digital threats — areas where AI misuse is increasingly intersecting with traditional crime investigation challenges.
Looking Ahead
As AI tools continue to evolve and spread across platforms, law enforcement agencies like the PSNI are grappling with how to identify, investigate and prevent harms without stifling legitimate innovation. Experts say ongoing training, legislative updates and partnerships with technology platforms will be crucial to keeping pace with AI‑linked threats.