By Ankit Sharma
In 2025, artificial intelligence (AI) and cyber-defence are converging at a pivotal and ethically charged moment. Recent statistics show that 45 per cent of organisations now use AI in cyber-defence for automated incident detection and threat hunting, while 51 per cent of cyber-leaders name generative-AI-powered phishing as one of their greatest concerns.
At the same time, the question of how AI used to defend systems could overstep ethical boundaries has become acute. At its best, AI in cyber-defence promises enormous benefits. By combining machine learning and behavioural analytics, intelligent systems can detect anomalies, block malicious traffic, and respond to threats far faster than manual measures can. One source suggests that AI-enabled detection can be roughly 60 per cent faster, with false positives reduced by upwards of 85 per cent. For organisations, this means better, faster defences in an increasingly complex threat environment.
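To make the behavioural-analytics claim concrete, here is a minimal, hypothetical sketch of unsupervised anomaly detection over network-flow features using scikit-learn's IsolationForest. The feature names, numbers and threshold are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow features.
# Assumes scikit-learn is installed; the feature values below are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per flow: bytes sent, bytes received, duration (s), distinct ports
baseline_flows = np.array([
    [1_200, 3_400, 0.8, 2],
    [900,   2_800, 0.5, 1],
    [1_500, 4_100, 1.1, 2],
    [1_100, 3_000, 0.7, 1],
])

# Train on traffic assumed to be benign, so the model learns "normal" behaviour
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_flows)

new_flows = np.array([
    [1_300,  3_600, 0.9,  2],     # resembles the baseline traffic
    [95_000, 150,  42.0, 180],    # large upload, long duration, scan-like port spread
])

# predict() returns 1 for inliers and -1 for anomalies
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY - escalate to an analyst" if label == -1 else "normal"
    print(flow, status)
```

The point of the sketch is simply that flagging is statistical: what the model calls an anomaly depends entirely on what data it was trained on, which is where the privacy and bias questions below begin.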
But where do we draw the line? The ethics of AI in cyber-defence raises many questions, and perhaps the most pertinent is: how far is too far?
Data rights and the creep towards privacy invasion
AI systems used in cyber-defence often rely on vast amounts of data: user behaviour logs, device telemetry, network flows, even biometric or behavioural authentication signals. Without sufficiently transparent and effective controls, these same systems can easily slide into mass monitoring of users or employees.
Between misuse of data, bias and uncontrolled decision-making, this is a serious ethical concern: AI systems may flag behaviours unfairly and treat individuals and groups of people differently. Defence is attractive, but at what cost to privacy and fairness?
Human oversight vs full automation
In critical situations, AI can automate incident response, device isolation, or even countermeasures. But when we allow AI to make decisions independently of human supervision, there are risks of unintended consequences, whether from a misclassification that disables critical services or from strategies the AI adopts that escalate rather than de-escalate the situation.
The ethical use of this technology demands accountability: clearly defined roles and human-in-the-loop oversight.
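As a sketch of what human-in-the-loop oversight can look like in practice, the snippet below gates a hypothetical isolate_device() action behind explicit analyst approval and records who approved it and when. The function and field names are assumptions for illustration, not part of any particular platform.

```python
# Sketch of a human-in-the-loop gate for an automated response action.
# All function and field names here are hypothetical, for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ResponseAction:
    device_id: str
    reason: str            # the detection that triggered the proposal
    approved_by: str = ""  # filled in only after a human signs off
    executed_at: str = ""

def isolate_device(device_id: str) -> None:
    """Placeholder for the actual network-isolation call."""
    print(f"Isolating device {device_id} from the network")

def execute_with_oversight(action: ResponseAction, analyst: str, approved: bool) -> ResponseAction:
    """Execute a proposed action only if a named analyst has approved it."""
    if not approved:
        print(f"Action on {action.device_id} rejected by {analyst}; nothing executed")
        return action
    action.approved_by = analyst
    action.executed_at = datetime.now(timezone.utc).isoformat()
    isolate_device(action.device_id)  # the disruptive step is reachable only here
    return action

# Usage: the AI proposes, the human decides, and the decision is auditable
proposal = ResponseAction(device_id="laptop-042", reason="anomalous outbound transfer")
record = execute_with_oversight(proposal, analyst="j.doe", approved=True)
print(record)
```

The design point is simply that the disruptive step cannot be reached without an explicit, logged human decision, which is what accountability requires in practice.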
Offensive deployment and dual-use risks
AI tools designed for defence can frequently be repurposed for attack, e.g., phishing-simulation campaigns, automated red-teaming, or even offensive deployment by states or their proxies. A recent academic paper noted that 86 per cent of advanced LLM offensive-security prototypes acknowledge ethical reservations, reflecting the difficulties of dual use.
Once a defensive tool can be turned into an offensive weapon, questions of ethics and regulation follow: at what point does deployment of such AI become aggression?
Trust, bias and unintended discrimination
Algorithms learn from data, and if that data is biased or incomplete, AI may target certain users unfairly or misinterpret their behaviour. In cyber-defence, false positives may penalise innocent users, and bias further erodes trust.
Regulation, transparency and public accountability
Governance frameworks are emerging; for example, the Artificial Intelligence Act in the EU came into effect in August 2024, creating a risk-based regulatory regime for AI in areas including security. But regulation has not fully caught up; many defenders rely on internal policies with no external scrutiny. Ethical AI use in cyber-defence requires transparency, auditability and clear constraints on what the system is allowed to do.
How Far Is Too Far?
The line is crossed when an AI implementation compromises fundamental rights or evades human accountability: when monitoring becomes mass surveillance, when automation denies meaningful oversight, or when cyber-defence tools become offensive weapons without proportionate control.
Ethical practice requires that AI be used to defend, not dominate; to enable people, not replace them; to protect, not pursue. AI is undoubtedly changing cyber-defence, enabling faster detection, better analytics and smarter responses. But the ethical dimension must keep pace. Defenders and their organisations must build governance frameworks, insist on human oversight, protect privacy and fairness, and resist the temptation to push AI into areas where it is likely to do more harm than good.
(The author is the Senior Director and Head – Solutions Engineering, Cyble)
Disclaimer: The opinions, beliefs, and views expressed by the various authors and forum participants on this website are personal and do not reflect the opinions, beliefs, and views of ABP Network Pvt. Ltd.

