AI-powered online romance scams
How artificial intelligence is driving romance fraud at industrial scale

🤖 AI-Driven Conversations

Modern romance scams use advanced language models to automatically craft fluent, personalised messages that mimic natural conversation. This removes many of the old red flags — such as awkward phrasing or mismatched replies — that used to give away fake profiles.

📸 Deepfake Images and Identities

AI can generate photorealistic profile pictures and simulate voices or even live video interactions. Scammers use these tools to create convincing online personas that appear to be genuine people, often portraying attractive “matches” who don’t exist.

📈 Industrial-Scale Operations

Rather than single scammers working manually, many operations now resemble small call centres, with teams and automated tools handling thousands of fake interactions at once. Experts say this shift has turned online romance fraud into a big-business problem, with global losses running into the billions of dollars.


Impact on Victims and Regions

In India, research shows that a large number of people have encountered fake profiles or AI-generated bots on dating apps and social platforms, with many reporting financial loss or deception during interactions.

Globally, similar trends exist: authorities in countries like the Philippines have issued warnings about AI-enhanced love scams because emotional grooming paired with financial pressure makes victims more likely to send money or share sensitive personal details.


How Scams Typically Work

These scams usually start like a normal online relationship:

  1. A scammer’s AI-generated persona reaches out on a dating app or social media.

  2. They build emotional trust with the victim over time.

  3. Once trust is established, they introduce requests involving money transfers, QR payments, cryptocurrency investments, medical emergencies, or fake business deals.

  4. Victims send funds or sensitive data, only to discover the entire relationship was fake.


Authorities and Expert Advice

Cybercrime agencies and consumer protection groups urge users to:

  - Verify who they are talking to, for example by requesting a live video call or running a reverse image search on profile photos.

  - Never send money, gift cards, or cryptocurrency to someone they have not met in person, regardless of the story.

  - Be cautious about sharing personal or financial details with online contacts.

  - Slow down when pressured: urgency and secrecy are classic warning signs of a scam.

  - Report suspicious profiles to the platform and to national cybercrime authorities.

Looking Ahead

As AI technology continues to improve, law enforcement groups and tech companies are investing in AI-powered scam detection tools — but experts emphasise that awareness and vigilance remain the first line of defence. This evolving threat underlines how innovations meant to connect people can also be misused to exploit trust and relationships in harmful ways.