Artificial Intelligence and Terrorism: A Global Warning
Experts warn of AI risks linked to terrorism

AI has already revolutionized industries like healthcare, education, and logistics. However, its dual-use nature — meaning the same technology can be used for good or harm — is at the heart of expert warnings. A recent report by the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the UN Counter-Terrorism Centre highlights how AI could fall into the hands of extremist groups worldwide, lowering the technical barriers to planning and executing attacks.

Such technologies can automate complex tasks, generate convincing fake text, audio, and video, and accelerate planning or reconnaissance, work that would previously have required teams of skilled operatives.


Propaganda, Recruitment, and Radicalisation

Security agencies and analysts have already observed terrorist groups experimenting with AI tools to produce fake images, videos, and other content that spreads extremist messages and recruits supporters online. These AI-generated materials can be highly convincing and spread rapidly across social media, making it harder for authorities to trace the origin and intent of such messages.

In some cases, generative AI systems have been used to produce multilingual propaganda that crosses cultural and linguistic barriers, reaching new audiences with radical ideologies.


Cyber Threats and Wider Security Risks

Beyond propaganda, AI also contributes to the rise of cyber threats. Experts note a surge in cyberattacks, deepfake scams, and other malicious activities that harness AI’s automation and pattern-recognition abilities. These threats intersect with national security concerns — from information warfare to digital sabotage.

Meanwhile, United Nations counterterrorism initiatives are now integrating AI tools into early-warning networks aimed at detecting and preventing extremist activity — a sign that governments see both the opportunities and risks of these technologies.


Calls for Regulation and Proactive Safeguards

Experts stress that legislation, international cooperation, and technology governance must keep pace with AI innovation. Without safeguards, the technology's accessibility could enable groups previously constrained by limited skills and resources to carry out malicious operations with minimal effort.

Some global security leaders also warn that unchecked AI might further inflame online hate speech and social divisions — factors that extremists exploit to attract recruits and justify violence.


A Complex Global Challenge

The threat posed by AI is not limited to traditional forms of terrorism; it extends to information manipulation, cyber disruption, and the erosion of trust in digital spaces. Experts emphasize that while AI can be a force for good, preparing for its misuse is essential to protect societies from future threats.

Governments, international organisations, tech companies, and civil society all play a role in shaping ethical AI development — balancing innovation with responsibility to keep technology out of the hands of those who would use it to harm others.