By Rajesh Chhabra
About a decade ago, helping older citizens become familiar with modern technology was considered a great achievement. Families took pride in teaching their parents and grandparents how to use internet banking, video calls, and smartphones. The internet bridged generations and distance. But today, that same connectivity has created a new and dangerous threat, one in which AI makes it harder than ever for older users to tell legitimate communication from deception.
As AI-powered chatbots, deepfakes, and voice-cloning tools become more advanced, cybercriminals are exploiting the trust that older users place in digital communication. What started as simple phishing attempts and lottery scams has evolved into sophisticated schemes that use a loved one’s cloned voice or a deceptively real-looking video call. For many older users, the difference between a genuine message and a fake one is nearly impossible to detect.
The scale of this threat is becoming increasingly alarming. Global data shows a sharp rise in AI-driven scams, including fake investment pitches, romance scams, and emergency-call frauds in which criminals impersonate family members or friends. The toll on victims is not only emotional but also financial: a person's savings and lifelong trust can be destroyed with a single persuasive phone call or video message. Older adults may be more susceptible in this new era because, unlike younger users who are more digitally aware, they may not instinctively question what they see or hear online.
More Than Just Tech
The danger is not only technological; it’s also psychological. Imagine a retiree receiving a late-night call that sounds exactly like their grandchild’s voice, pleading for urgent help. Or seeing an AI-generated fake video of a trusted public figure giving harmful advice. The impact is deeply personal, shaking the very foundation of how older people engage with technology and society. Once trust is broken, many become afraid to use digital tools altogether, cutting themselves off from essential services and connections.
Part of the challenge lies in awareness and design. The internet was built on openness, not authentication, and AI tools now make it harder to distinguish perception from reality. Cybercriminals target older users on popular platforms such as Facebook, WhatsApp, and Gmail, precisely the services they use most frequently. Their vulnerability, often stemming from limited privacy regulations and insufficient digital literacy training, is easily exploited.
We must change our collective thinking to confront this challenge. The need to protect older people online must be treated with the same urgency as protecting children. Installing antivirus software is just one aspect of cyber defence; awareness, readiness, and compassion are equally important. Families should discuss AI fraud openly, review examples of voice cloning and fake videos together, and practise healthy scepticism before responding to unexpected calls or unfamiliar links.
Awareness At Each Level
Organisations, particularly banks, telecom providers, and government agencies, have a moral duty to strengthen digital verification processes. Streamlined reporting procedures, AI-based fraud detection, and multi-factor authentication can prevent small mistakes from becoming catastrophic losses. Tech companies must also take responsibility by watermarking AI-generated content and enabling instant reporting of suspicious activity.
Governments and communities also play a vital role. Dedicated helplines for digital theft, senior citizen awareness initiatives, and community workshops can all make a meaningful impact. Preventing cyberattacks is most effective when families, platforms, and policymakers work together to create a culture where technology empowers rather than threatens.
The internet has helped unite generations — now it must be used to protect them. Younger, more technologically familiar users have a moral obligation to use their skills to safeguard their older loved ones, especially as AI blurs the line between perception and reality. Trust, once broken, can never be fully regained — and in an era where deepfakes and scams make deception easier than ever, trust has never been more precious.
(The author is the General Manager, India & South Asia at Acronis)
Disclaimer: The opinions, beliefs, and views expressed by the various authors and forum participants on this website are personal and do not reflect the opinions, beliefs, and views of ABP Network Pvt. Ltd.