Tuesday, December 16, 2025

Advanced digital fraud nearly triples in 2025, driven by widespread use of artificial intelligence
The number of sophisticated digital fraud attempts has nearly tripled in 2025 compared to the same period in 2024, according to a Sumsub report cited by Agerpres. The surge is attributed to the rapid adoption of artificial intelligence for creating synthetic identities, deepfakes and autonomous systems capable of executing multi-step attacks without human involvement.

The share of advanced fraud in the total volume of incidents rose from 10% to 28% in just one year, signaling a shift from high-volume, opportunistic attacks to “precision operations” that are significantly harder to detect.

Phishing remains the dominant method (45%), but breaches originating from third-party providers now account for 36% of incidents. Synthetic identities generated using advanced AI models are also becoming increasingly common.
“The global fight against digital fraud has become far more complex. Cybercriminals have moved from opportunistic attacks to highly sophisticated AI-driven operations,” Sumsub analysts note. Their conclusions are based on more than four million fraud attempts and surveys involving 300 fraud-risk professionals and 1,200 end users. Fraud schemes involving advanced deception techniques, social engineering, AI-generated identities and telemetry manipulation increased by 180% compared to last year.

One of the most concerning developments is the emergence of autonomous fraud agents. The report also points to a surge in AI-generated documents: passports, driver’s licenses and utility bills produced with AI accounted for just 2% of forged IDs last year, but the rapid evolution of tools such as ChatGPT, Grok and Gemini has accelerated their adoption. Text-to-video deepfakes designed specifically to bypass liveness-detection mechanisms are also highlighted.

In the United States, overall fraud rates declined by 15%, but 21% of incidents already involve synthetic or AI-generated identities. Chargeback abuse (16%) and account takeovers (19%) remain significant threats.

The report emphasizes that 2025 marks the appearance of AI agents capable of independently completing an entire fraud chain. “These are not traditional bots. They combine generative AI, automation frameworks and reinforcement learning to create identities, interact in real time with verification systems, and adapt automatically. They are still emerging, but could become mainstream within 18 months, especially in organized fraud networks,” warns Pavel Goldman-Kalaydin, Head of AI at Sumsub.
Teodora Helerman
Online editor, content writer, blogger, and social media specialist, with experience in writing and publishing news, creating original content, and adapting materials for various digital platforms.