By Deviprasad Thrivikraman
Artificial intelligence has transformed the digital world, but it has also empowered malicious actors with tools that make scams more convincing, scalable, and harder to detect. Fraudsters now use generative AI to mimic voices, create synthetic identities, and craft hyper-personalised messages that bypass traditional red flags.
From deepfake videos impersonating executives to AI-generated emails that mirror an individual’s tone and writing style, these attacks exploit trust and familiarity, making them significantly more dangerous than earlier forms of digital fraud.
As AI adoption accelerates, both individuals and enterprises are facing an unprecedented wave of deception fuelled by automation and intelligence.
How Modern AI Scams Operate
Today’s AI-enabled scams are built on precision and scale. Deepfake audio and video allow attackers to convincingly impersonate anyone, from family members in distress to CEOs instructing employees to make urgent fund transfers.
AI-powered phishing engines scrape personal data, analyse behaviour, and generate messages customised for each target, increasing click-through success rates. Fraudsters also deploy AI to spin up fake websites, clone brand identities, and simulate entire customer support environments in seconds.
At an enterprise level, attackers are using AI to automate reconnaissance, break into corporate email ecosystems, and execute Business Email Compromise schemes with startling accuracy. They also leverage large language models to bypass CAPTCHA, generate malicious code and orchestrate high-volume credential-stuffing attacks, creating industrial-scale fraud operations.
These scams no longer rely on mass, low-effort attempts; they are adaptive, data-driven, and intelligent, making the threat landscape far more complex.
Human & Enterprise Impact
The fallout from AI-driven fraud isn’t just financial. For individuals, these scams can wipe out savings, damage reputations, and exploit emotional vulnerabilities with a degree of realism that was impossible before. According to Paytm, fraudsters are creating fake or cloned UPI apps that steal user credentials and gain access to linked bank accounts.
For enterprises, the risks are even more serious. AI-enabled social engineering like deepfake executive impersonation can lead to unauthorised payments, supply chain disruption, and leaked trade secrets.
A recent example is the cyber-attack on Jaguar Land Rover (JLR), where a sophisticated intrusion forced operational shutdowns and highlighted how even large organisations can be severely impacted by modern threats. A single deceptive AI-generated request can bypass traditional control mechanisms if employees trust what they see and hear.
Meanwhile, fraud rings are increasingly leveraging synthetic identity fraud, creating entirely fabricated personas. According to RGA (Reinsurance Group of America), the life insurance industry loses an estimated $74.7 billion annually to fraud, with synthetic identities as one of the fastest-growing threats.
Spotting The Red Flags
AI scams are becoming more convincing, but they still leave patterns that careful users can detect. Phishing emails, for instance, often come from domains that look slightly off, even when the message imitates a trusted individual or organisation. Deepfake calls typically arrive unexpectedly and pressure the victim into acting quickly, before they have time to think clearly.
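The "slightly off" domain check can even be automated. The sketch below flags sender domains that sit within a small edit distance of a trusted domain without matching it exactly; the trusted list and the distance threshold are illustrative assumptions, not a real product's configuration.

```python
# Hypothetical sketch: flag lookalike sender domains.
# TRUSTED and max_dist are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["paypal.com", "hdfcbank.com"]  # example trusted domains

def looks_suspicious(sender_domain: str, max_dist: int = 2) -> bool:
    """A domain close to, but not equal to, a trusted one is a classic lookalike."""
    d = sender_domain.lower()
    return any(0 < edit_distance(d, t) <= max_dist for t in TRUSTED)
```

For example, `paypa1.com` (digit one for the letter "l") is one substitution away from `paypal.com` and would be flagged, while an exact match or an unrelated domain would not.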
Even if the voice or face resembles a known individual, attackers typically cannot answer personal questions or recall details known only to the real person.
Be cautious when scanning QR codes, especially if received over messages or from unknown sources. Scammers can embed malicious payment requests in QR codes that appear normal at first glance.
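One practical defence is to inspect what a QR code actually encodes before acting on it. UPI payment QR codes decode to a `upi://pay?...` URI whose `pa` field names the payee address, so checking that field against the merchant you expect catches codes that silently redirect money. This is a minimal sketch following the public UPI deep-link convention; the expected-payee comparison is left to the caller.

```python
# Hypothetical sketch: extract the payee from a decoded QR payload.
# Assumes the standard UPI deep-link format (upi://pay?pa=<payee>...).

from typing import Optional
from urllib.parse import urlparse, parse_qs

def qr_payee(decoded: str) -> Optional[str]:
    """Return the payee VPA from a UPI payment URI, or None if not a UPI link."""
    uri = urlparse(decoded)
    if uri.scheme != "upi":
        return None  # not a UPI payment code at all; treat with suspicion
    params = parse_qs(uri.query)
    return params.get("pa", [None])[0]
```

A scanner app could refuse to proceed when the extracted payee does not match the merchant name displayed at the point of sale, or when the payload is not a UPI URI at all.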
Romance and “pig-butchering” scams tend to build trust over weeks or months using AI chatbots that simulate emotional closeness, but these identities rarely have verifiable proof of existence.
Deepfake videos may show telltale artefacts such as mismatched lighting or unnatural facial transitions, while AI-cloned voices can sound flat, lack natural background noise, or drift into irrelevant topics.
When in doubt, it’s critical to perform a secondary verification through an alternate communication channel, such as a direct phone call or an official support line.
At an enterprise level, fighting fire with fire is the most effective path. AI-powered fraud requires AI-powered defence. Organisations can deploy advanced systems that detect AI influence in voice, images, and videos, helping teams identify deepfake-based impersonation attempts before damage occurs.
Transaction-monitoring AI models can analyse massive volumes of behavioural and financial data to spot anomalies that humans would miss, mapping patterns across geography, time, device fingerprints, and historical behaviour to block fraud in real time. These systems continuously learn, helping ensure that fraudsters never stay several steps ahead.
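In miniature, behavioural monitoring amounts to comparing each new transaction against a user's own history. The sketch below scores a transaction on two illustrative signals, an amount far outside the user's norm and a never-before-seen device; real systems use far richer features (geolocation, time-of-day, velocity) and learned thresholds, so treat the record format and cut-offs here as assumptions.

```python
# Minimal sketch of per-user behavioural anomaly flags.
# history is assumed to be a list of (amount, device_id) records.

from statistics import mean, stdev

def anomaly_flags(history, amount, device_id, z_thresh=3.0):
    """Return a list of reasons the new transaction looks unusual."""
    flags = []
    amounts = [a for a, _ in history]
    if len(amounts) >= 2 and stdev(amounts) > 0:
        z = (amount - mean(amounts)) / stdev(amounts)  # standard score
        if z > z_thresh:
            flags.append("amount far above user's norm")
    if device_id not in {d for _, d in history}:
        flags.append("first transaction from this device")
    return flags
```

A transaction matching the user's usual amounts on a known device returns no flags, while a sudden large transfer from a new device trips both signals and could be held for secondary verification.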
Fixing The Problem
Combating AI-driven scams requires strong policy, responsible AI development, and widespread public awareness. SAFE AI principles, emphasising transparency, accountability, continuous validation, and risk monitoring, form the backbone of systems that can detect manipulation and prevent misuse.
At the societal level, education and awareness are critical, as even the most secure technology cannot protect users who are unaware of how fraud works. In India, initiatives such as “Vigil Aunty” by HDFC Bank demonstrate how institutions can proactively educate the public with accessible, engaging content on emerging fraud techniques, helping citizens stay alert and informed.
For enterprises, the fix lies in combining advanced AI-powered detection systems with rigorous access controls, verification protocols, and employee training that reflects modern threats. The goal is not just to block scams, but to build an ecosystem where humans and AI work together to outsmart malicious actors, making fraud harder, costlier, and less likely to succeed.
(The author is the Managing Director at Zentis AI)
Disclaimer: The opinions, beliefs, and views expressed by the various authors and forum participants on this website are personal and do not reflect the opinions, beliefs, and views of ABP Network Pvt. Ltd.