
Global / February 2026: Artificial intelligence — especially generative models that create text, images, or documents — is now being misused in ways that threaten the integrity of legal systems. In recent months, courts and prosecutors have encountered forged evidence and fraudulent court materials generated or enabled by AI — from doctored bank records to fabricated legal citations and false video evidence.
AI-Driven Forgeries Spark Real Fraud Charges
In Busan, South Korea, police have charged a man in his 20s who allegedly used AI to digitally alter his bank records, producing a fake statement showing a balance of more than ₩900 million (about USD 650,000) instead of the actual ₩23. Authorities allege the forgery was intended to deceive financial institutions and possibly a court or official process, and the suspect has been detained on fraud and forgery charges.
This case highlights how AI tools can be repurposed to fabricate official documents — in this instance a financial record — that appear legitimate but are entirely false.
AI Hallucinations Hit Courtrooms Too
AI “hallucinations” — a term for when models generate false but plausible-sounding information — are proving problematic in legal practice as well. In the United States and elsewhere, lawyers have been flagged for submitting briefs that included non-existent case citations or legal authorities because they were generated by AI assistants and never verified. Judges in some proceedings have expressed frustration and alarm at the use of such bogus content in court filings.
One judge reportedly remarked that the experience reduced his trust in AI tools — illustrating the growing challenge of ensuring accuracy and authenticity when technology is used for research or drafting in legal contexts.
Experts Warn of Systemic Risks
Legal analysts and experts have been raising the alarm for some time. A report on AI in courtrooms highlighted how video, audio, and other forms of evidence — now easily generated by AI — could be submitted as part of legal proceedings, blurring the line between genuine evidence and sophisticated forgeries.
Similarly, security specialists note that even the most convincing documents or media can be fabricated using modern AI tools, making it harder for courts and investigators to distinguish real from fake without advanced verification methods.
Legal Systems Seek New Safeguards
These incidents underline a broader concern: the legal system’s traditional standards for verifying evidence and legal citations weren’t built for an era of highly realistic AI-generated content. Courts and lawmakers in multiple countries are now considering reforms to prevent fraud and protect judicial integrity. Proposed measures include:
- New verification protocols for digital evidence to confirm authenticity before use in trials.
- Stricter sanctions on professionals who submit AI-generated forgeries without proper checks.
- Updated rules of evidence that specifically address AI-produced content and its admissibility.
Why This Matters
The misuse of AI in legal settings doesn’t just affect individual cases — it challenges public confidence in justice systems. If forged documents or fabricated citations become common, courts may hesitate to accept digital evidence, slowing proceedings and increasing costs for genuine litigants.
As AI becomes more powerful and widespread, experts stress the need for both technological safeguards and legal frameworks to adapt — ensuring that innovation doesn’t outpace trust and fairness in the justice system.