The pursuit of “algorithmic overmatch” in modern military doctrine represents a paradigm shift of such magnitude that it arguably exceeds the strategic impact of the internal combustion engine or even the splitting of the atom. One of the least examined but most consequential effects of artificial intelligence in military operations is how it reshapes institutional behaviour over time. Militaries do not merely adopt technologies; they reorganise around them. Training pipelines, promotion incentives, procurement priorities, and doctrinal assumptions gradually realign to accommodate whatever system appears to deliver advantage.

The Consequences of Algorithmic Systems

While proponents of algorithmic warfare argue that these systems will bring about an era of “cleaner” and more “precise” conflict, the reality is far more complex and dangerous. By delegating the decision-making processes of the battlefield to autonomous or semi-autonomous systems, we are not merely upgrading our weaponry; we are fundamentally altering the nature of human responsibility and the very definition of conflict itself.

When AI systems demonstrate an ability to process intelligence faster than human staff, headquarters reduce analyst billets. When AI targeting tools produce large volumes of actionable outputs, commanders begin to treat that tempo as normal. Over months and years, what began as an augmentation becomes a dependency. This dependency is dangerous because it quietly erodes the organisation’s capacity to function without the system. Human expertise atrophies: practical skills, cognitive habits, and critical thinking gradually decline through disuse.

Intuitive judgement, once honed through experience, is replaced by procedural compliance with algorithmic outputs. When the system is unavailable, degraded, or compromised, the institution finds itself cognitively unprepared to compensate. This is not a hypothetical risk. It has occurred repeatedly in civilian sectors where automation displaced human skill, from aviation to finance. In war, however, such degradation does not merely reduce efficiency; it creates vulnerability.

The Breakdown of International Humanitarian Law (IHL)

The most immediate and harrowing concern regarding the militarisation of AI lies within the erosion of the legal and ethical frameworks that govern the conduct of war, specifically International Humanitarian Law (IHL). The principles of distinction and proportionality are the bedrock of civilised conflict, requiring combatants to differentiate between military targets and civilians, and to ensure that any collateral damage is not excessive in relation to the military advantage gained. These are not merely binary calculations; they are deeply contextual, qualitative judgments that require an understanding of human intent, cultural nuance, and the unpredictable nature of human behaviour.

AI, regardless of its processing power, remains a pattern-recognition engine that operates on statistical probabilities. It does not “understand” what a civilian is; it merely identifies pixels or signals that match a pre-trained dataset. When an algorithm is tasked with identifying a target in a chaotic urban environment, it lacks the intuitive capacity to recognise a child playing with a toy gun as anything other than a “threat” if the training data has not explicitly accounted for such a scenario. This fundamental lack of comprehension creates a “liability gap” where the commission of a war crime by an autonomous system leaves no clear path for justice, as traditional legal structures are designed to hold humans, not code, accountable for the consequences of their actions.
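The gap between statistical confidence and genuine comprehension is easy to demonstrate. The short sketch below is a minimal illustration only, assuming Python with NumPy and scikit-learn; the two-dimensional “sensor” features and the civilian/military labels are invented purely for the example. A simple classifier is trained on two clean clusters and then asked about a point unlike anything it has ever seen. It answers with near-total certainty, because its confidence measures distance from a learned boundary, not understanding of what the object actually is.

```python
# Toy illustration: statistical confidence without comprehension.
# Hypothetical two-feature "sensor" readings; the labels 0 = "civilian
# object" and 1 = "military object" are invented purely for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two well-separated training clusters in a made-up 2-D feature space.
civilian = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
military = rng.normal(loc=[4.0, 4.0], scale=0.5, size=(200, 2))
X = np.vstack([civilian, military])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# An out-of-distribution query: nothing remotely like it appeared in training.
odd_object = np.array([[40.0, 40.0]])
p_civilian, p_military = model.predict_proba(odd_object)[0]

print(f"P(civilian) = {p_civilian:.6f}   P(military) = {p_military:.6f}")
# Typically prints P(military) close to 1.0. The model has no way to say
# "I have never seen anything like this"; it can only report which side of
# a learned boundary the point falls on.
```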

Degradation of the Human OODA Loop – Observe, Orient, Decide, and Act

Furthermore, the introduction of AI into command-and-control structures creates a systemic risk known as “algorithmic escalation,” which could trigger a global conflict at speeds that far exceed human cognitive capacity. In traditional warfare, the pace of escalation is governed by the human OODA loop—Observe, Orient, Decide, and Act—which provides a slim but vital window for diplomatic intervention, de-escalation, and the exercise of restraint. However, as nations integrate AI into their early-warning and response frameworks to counter “hypersonic” threats, the tempo of decision-making is compressed into milliseconds. This creates a terrifying “use it or lose it” dynamic where an adversary’s AI-driven posture might be interpreted by a defensive algorithm as a prelude to a strike, triggering an automated counter-response before any human leader is even aware a crisis has begun. This phenomenon mirrors the “flash crashes” observed in high-frequency financial markets, but with the catastrophic difference that the currency being traded is human lives and national survival. The lack of a “shared reality” between opposing AI systems—each operating on proprietary, secret algorithms—means that signals intended for deterrence could be fatally misinterpreted, leading to a rapid, uncontrollable spiral into full-scale conventional or even nuclear conflict that bypasses human diplomacy entirely.
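The speed of such a spiral can be illustrated with a deliberately crude simulation. The sketch below is a toy model only: the alert scale, thresholds, and noise values are invented and correspond to no real early-warning system. Two automated policies each raise their own posture whenever a noisy reading of the other side crosses a threshold; a single spurious reading is typically enough to drive both sides to maximum response within a handful of machine-speed cycles, with no pause in which a human could ask whether the original signal was real.

```python
# Toy "algorithmic escalation" loop: two automated postures reacting to
# noisy readings of each other. All numbers are invented for illustration.
import random

random.seed(7)

MAX_LEVEL = 5      # 0 = calm ... 5 = full response (made-up scale)
THRESHOLD = 1      # perceived level that triggers an automatic raise
NOISE = 2.0        # sensor noise amplitude (made-up)

def perceived(actual: int) -> float:
    """What one side's sensors report about the other's actual posture."""
    return actual + random.uniform(-NOISE, NOISE)

def react(own: int, reading: float) -> int:
    """Automated policy: raise posture whenever the other side looks elevated."""
    if reading >= THRESHOLD and own < MAX_LEVEL:
        return own + 1
    return own

a, b = 0, 0  # both sides start calm
for cycle in range(1, 51):
    new_a, new_b = react(a, perceived(b)), react(b, perceived(a))
    if (new_a, new_b) != (a, b):
        print(f"cycle {cycle:2d}: A={new_a}  B={new_b}")
    a, b = new_a, new_b
    if a == MAX_LEVEL and b == MAX_LEVEL:
        print("Both sides at full response; no human decision was ever taken.")
        break
# With these made-up parameters a stray noisy reading usually starts the
# climb, and the feedback between the two policies finishes it within a
# few dozen machine-speed cycles.
```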

The erosion of the “OODA loop” is not just a technical change but a fundamental stripping away of human judgment. When the “Decide” and “Act” phases are automated, the “Observe” and “Orient” phases are also increasingly delegated to sensors and algorithms that filter out information the system deems “irrelevant.” This selective perception creates a feedback loop where the human operator only sees what the machine wants them to see, effectively trapping the commander in a digital echo chamber. This narrow-focus intelligence can lead to “strategic blindness,” where the bigger picture of a conflict—its political nuances, its humanitarian impacts, and its long-term consequences—is lost in the pursuit of immediate tactical optimisation. We are moving toward a world where we win the battles according to the data but lose the wars according to reality. The machine optimises for the kill-chain while the human loses the capacity to ask why the chain was activated in the first place, leading to a state of perpetual, automated, and meaningless conflict.

Manipulation and Poisoning of AI

The technical fragility and inherent “brittleness” of AI systems represent a further vulnerability that an adversary can exploit with devastating efficiency through adversarial machine learning. Unlike a human soldier, who can adapt to novel situations and recognise when something “doesn’t look right,” an AI system is uniquely susceptible to subtle manipulations of input data that can cause it to fail in spectacular and unpredictable ways.

A strategically placed piece of “noise” on a satellite image or a specific pattern of infrared signals can “poison” an AI’s perception, leading it to misidentify a civilian school as a military command centre or to ignore an incoming threat entirely. This is not a hypothetical risk; it is a core characteristic of current AI architectures, which perform excellently within the narrow parameters of their training sets but fail catastrophically when confronted with “out-of-distribution” scenarios.
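The mechanics of such manipulation can be sketched even against the simplest learned model. The example below is an illustrative, FGSM-style perturbation of a linear classifier, assuming NumPy and scikit-learn; the synthetic features stand in for image or signal data, and the class names are invented. Every feature is nudged by an amount well inside its normal noise, so no single reading looks anomalous, yet the accumulated effect can flip the model’s decision. Attacks on real perception systems are more sophisticated, but they exploit the same underlying property.

```python
# Sketch of an adversarial (FGSM-style) perturbation against a linear
# classifier. For logistic regression the attack reduces to nudging each
# feature by +/- epsilon according to the sign of the model's weights.
# Synthetic stand-in data; real attacks target image and signal models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# 200 weakly informative features: individually near-useless, jointly decisive.
d = 200
benign = rng.normal(-0.15, 1.0, size=(400, d))   # invented class "benign"
target = rng.normal(+0.15, 1.0, size=(400, d))   # invented class "target"
X = np.vstack([benign, target])
y = np.array([0] * 400 + [1] * 400)

model = LogisticRegression(max_iter=1000).fit(X, y)
w = model.coef_[0]                 # gradient direction for a linear model

x = target[0]                      # a genuine "target" example
print("clean:     p(target) =", round(model.predict_proba([x])[0, 1], 3))

# Bounded nudge per feature, chosen to push the score towards "benign".
epsilon = 0.4                      # well inside each feature's noise (std 1.0)
x_adv = x - epsilon * np.sign(w)

print("max change per feature:", round(float(np.max(np.abs(x_adv - x))), 3))
print("perturbed: p(target) =", round(model.predict_proba([x_adv])[0, 1], 3))
# No single feature has moved outside its normal range, yet the aggregate
# shift typically collapses p(target) and flips the classification.
```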

In the context of national defence infrastructure or critical logistics chains, such a failure is not merely a technical glitch but a systemic collapse. If the logistics AI responsible for transporting essential military materiel or managing defence manufacturing supplies were to be compromised by data poisoning, the entire operational capacity of a region could be paralysed without a single shot being fired, proving that the digital vulnerabilities of AI are as much a threat as its kinetic capabilities.
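Data poisoning attacks the other end of the pipeline: rather than manipulating inputs at decision time, the adversary corrupts the data the system learns from. The sketch below is a deliberately simple, synthetic illustration, assuming scikit-learn; real poisoning campaigns are more targeted and far harder to detect. A share of one class’s training labels is silently flipped, and the resulting model quietly stops flagging part of what it should.

```python
# Sketch of a targeted label-flipping (data poisoning) attack on a
# synthetic classification task. The attacker's goal: make the model
# systematically overlook one class. Entirely invented data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def train_and_report(labels, name):
    model = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    rec = recall_score(y_te, model.predict(X_te))  # how often class 1 is caught
    print(f"{name}: recall on class 1 = {rec:.3f}")

train_and_report(y_tr, "clean training set   ")

# Poison: silently relabel 30% of the class-1 training examples as class 0.
rng = np.random.default_rng(42)
poisoned = y_tr.copy()
ones = np.flatnonzero(poisoned == 1)
flip = rng.choice(ones, size=int(0.3 * len(ones)), replace=False)
poisoned[flip] = 0

train_and_report(poisoned, "poisoned training set")
# Training "succeeds" either way; the damage only shows up when the model
# is scored against trusted ground truth. Recall on class 1 typically
# drops, i.e. the system quietly stops flagging part of what it should.
```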

Psychological Dimension

Beyond the technical and strategic risks, the widespread adoption of military AI fosters a profound “digital dehumanisation” that lowers the political and psychological threshold for entering into and sustaining a conflict. When layers of automation and unmanned systems obscure the human cost of war, the gravity of the decision to engage in hostilities is dangerously diminished. Political leaders may find it easier to authorise “sterile” autonomous strikes or deploy swarms of drones into contested territories, believing that the lack of their own “boots on the ground” mitigates the domestic political consequences of the action. This creates a moral hazard where war becomes an exercise in technical management rather than a last resort of statecraft.

The “automation bias”—the tendency for human supervisors to defer to a machine’s output even when it contradicts their own intuition—further compounds this issue. As we become increasingly reliant on algorithmic “truth,” we lose the ability to question the machine, effectively ceding our moral agency to a tool that lacks the capacity for empathy, guilt, or the understanding of the tragic weight of history.

The peril of AI in the military is therefore not just the risk of a “rogue machine,” but the very real possibility of a “roboticized humanity,” where we have engineered ourselves out of the most critical decisions of our existence.

Internal Integrity of Military Institutions

The internal integrity of military institutions is also at risk as the nature of service is transformed from one of tactical skill and physical courage to one of digital monitoring and data entry. Soldiers who are relegated to the role of “human-on-the-loop” supervisors of autonomous systems often suffer from a unique form of moral injury and cognitive dissonance. They are granted the nominal authority to “veto” a machine’s strike, but are often denied the context or the time necessary to make an informed decision. This creates a state of “distributed responsibility” where no one feels truly responsible for the outcome of an engagement. The psychological distancing from the act of killing—where a life is ended via a “confirm” button on a screen—sanitises the violence in a way that can lead to increased disregard for civilian life. The loss of the personal, human dimension in combat removes the innate checks of conscience and guilt that have historically limited the scope of atrocities. By roboticizing the soldier, we are dismantling the very moral guardrails that have defined the profession of arms for centuries, risking a future where the military becomes an unthinking extension of an algorithmic will.

AI Arms Race

The geopolitical implications of an AI arms race are equally destabilising, as the pursuit of “algorithmic overmatch” creates a permanent state of insecurity among global powers. Because AI development is opaque and the software can be updated or altered in an instant, it is nearly impossible to verify an adversary’s capabilities or to establish effective arms control treaties. This lack of transparency encourages a “sprint” mentality, where nations feel compelled to deploy AI systems as quickly as possible, often sacrificing safety protocols and ethical oversight in the name of speed. This rush to deployment increases the likelihood of accidental engagement and ensures that the global security environment is dictated by the lowest common denominator of caution.

Moreover, the dual-use nature of AI technology means that advancements in the private sector are rapidly weaponised, blurring the lines between commercial innovation and military expansion. This creates a feedback loop where the drive for profit and the drive for power become indistinguishable, resulting in a world where the infrastructure of our daily lives is inextricably linked to the mechanisms of autonomous warfare, making every technological breakthrough a potential source of global instability.

Technological Neo-Colonialism and Supply Chain Fragility

Economic and structural dependencies created by a reliance on military AI lead to a concentration of power that could facilitate a new era of technological neo-colonialism. The development of high-end AI requires a level of data, computing power, and specialised human capital that is only available to a handful of global powers and massive corporations. This creates a “digital divide” in sovereignty, where nations that cannot afford or build their own “sovereign AI” are forced to rely on systems provided by others, often with “backdoors” or built-in biases that serve the interests of the provider. Furthermore, the extreme reliance on specialised hardware, such as advanced GPUs and high-bandwidth memory, makes a nation’s entire defence posture vulnerable to supply chain disruptions. A conflict over a single geographic point—like a semiconductor fabrication plant—could effectively disarm an entire AI-driven military overnight. This paradox of “high-tech fragility” means that in our pursuit of total security through AI, we have created new, singular points of failure that an adversary can target to paralyse an entire nation’s defence infrastructure without firing a single kinetic shot.

Challenge to the Principle of “Meaningful Human Control”

The “black box” nature of deep learning models presents a fundamental challenge to the principle of “Meaningful Human Control” and the basic requirements of military command. Military operations rely on a clear chain of command and the ability of a leader to explain the “why” behind an order. However, even the engineers who design complex neural networks often cannot explain why an AI chose one specific target over another in a complex environment. This lack of “explainability” is entirely incompatible with a disciplined military structure. If a commander cannot understand the logic of their weapon system, they cannot predict its failure modes or correct its biases. This opacity also makes it impossible to conduct meaningful post-action reviews or to learn from mistakes, as the “error” is buried deep within millions of uninterpretable weight adjustments. Without transparency, we are essentially deploying “oracular” weapons that demand blind faith from their users.

This is not leadership; it is a surrender to a machine logic that is fundamentally alien to human reason, turning the battlefield into a theatre of the incomprehensible where human lives are traded for reasons that no human can actually explain.

Moreover, the psychological toll of “automation bias” creates a dangerous inertia in military decision-making. Research has shown that humans are predisposed to trust the outputs of automated systems, especially when under stress or time pressure. In a command centre where dozens of screens are providing AI-generated recommendations, the tendency to simply “click confirm” becomes overwhelming. This creates a facade of human control while the actual power has shifted entirely to the algorithm. This bias makes it nearly impossible for a human supervisor to effectively intervene when the AI is making an error, as the human lacks the granular data or the time to construct a counter-argument to the machine’s “certainty.” We are essentially creating a generation of commanders who are trained to be passive observers of their own wars, further detaching the exercise of lethal power from the exercise of human will. This detachment is the ultimate peril, as it removes the last remaining barrier to total, unrestricted warfare: the human sense of restraint and the fear of moral consequence.

Risk of Proliferation to “Non-State Actors”

The risk of proliferation to “non-state actors” is another terrifying dimension of the AI arms race. Unlike nuclear technology, which requires massive industrial capacity and rare materials, AI software can be stolen, leaked, or reverse-engineered and then run on relatively standard hardware. Once a sophisticated military AI is developed by a superpower, it is only a matter of time before its code finds its way into the hands of extremist groups, private militias, or rogue states. These actors do not have the same ethical or legal constraints as sovereign nations, and they may use autonomous systems to conduct targeted assassinations, mass surveillance, or asymmetric attacks with a level of precision and scale that was previously impossible. The democratisation of autonomous lethal force means that the same tools designed for national defence could become the primary instruments of global terrorism, creating a world where no one is safe from the invisible, algorithmic hand of an untraceable adversary.

Unpredictable System Interactions and Cascading Failures

The potential for “emergent behaviours” in complex AI systems further complicates the reliability of military automation. In the field of complexity science, emergent behaviours are those that arise from the interaction of simple components in ways that the designers did not anticipate.

In a military context, this could mean that a swarm of autonomous drones develops a collective behaviour—such as a specific flight pattern or a targeting priority—that was never explicitly programmed, and that creates unintended consequences on the ground. These behaviours are often only discovered in the real world, where the consequences can be fatal. The unpredictability of these systems means that military planners are essentially gambling on the stability of their own technology. If an AI system enters an “edge case” scenario during a high-stakes mission, its response may be entirely logical from a mathematical perspective but utterly catastrophic from a human or strategic one. This inherent uncertainty makes AI-driven warfare a chaotic gamble where the house always loses.
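What “emergence” means can be made tangible with a classic boids-style simulation. The sketch below is a generic illustration in NumPy, not a model of any actual drone system: each agent follows only three local rules, moving apart from very close neighbours, aligning with nearby headings, and drifting towards the local centre of mass. No rule mentions a flock, yet coherent collective motion appears, visible as a rising alignment score. The group-level behaviour is never written down anywhere in the code that defines an individual agent.

```python
# Minimal boids-style swarm: each agent follows three purely local rules.
# The group-level outcome (coherent collective motion) is never programmed
# explicitly; it emerges from the interaction of the parts.
import numpy as np

rng = np.random.default_rng(3)
N = 60
pos = rng.uniform(0, 10, size=(N, 2))   # positions in an abstract 2-D box
vel = rng.normal(0, 1, size=(N, 2))     # random initial headings

def alignment_score(v):
    """1.0 = all agents moving the same way; near 0 = disordered."""
    headings = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
    return float(np.linalg.norm(headings.mean(axis=0)))

for step in range(201):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < 3.0)          # local neighbourhood only
        if not near.any():
            continue
        cohesion = offsets[near].mean(axis=0)        # drift towards local centre
        align = vel[near].mean(axis=0) - vel[i]      # match nearby headings
        crowd = (dists > 0) & (dists < 0.5)
        separation = -offsets[crowd].sum(axis=0) if crowd.any() else 0.0
        new_vel[i] += 0.02 * cohesion + 0.1 * align + 0.05 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True) + 1e-12
    vel = np.where(speed > 2.0, new_vel * (2.0 / speed), new_vel)  # cap speed
    pos = pos + 0.1 * vel
    if step % 50 == 0:
        print(f"step {step:3d}: alignment = {alignment_score(vel):.2f}")
# The alignment score typically climbs from a near-random starting value
# towards 1.0: a group-level pattern that no individual rule ever specified.
```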

Environmental and Physical Costs of Military AI

The environmental and physical costs of military AI development are often ignored, but they represent a significant long-term peril. The energy required to train massive military models and the mineral resources needed to build high-performance computing clusters create a massive ecological footprint. This hidden cost adds another layer of instability to the global system, as nations compete for the rare earth elements and energy supplies necessary to maintain their AI superiority.

Furthermore, the physical infrastructure of AI—data centres, undersea cables, and satellite networks—represents a new class of high-value targets that can be destroyed to paralyse a nation’s military capability. This vulnerability means that in the pursuit of “cloud-based” warfare, we have created a centralised and fragile target that an adversary can exploit to achieve a decisive blow with minimal effort. The dream of a decentralised, resilient AI military is a myth; in reality, these systems are deeply dependent on a fragile and easily disrupted physical world.

Erosion of Global Norms and Taboos

Over the long term, the push for AI integration is systematically dismantling global norms and the “taboo” against certain types of warfare. Throughout history, certain weapons and tactics have been deemed too inhumane for civilised society—chemical weapons, biological agents, and landmines are notable examples. However, the narrative around AI is one of “cleanliness” and “precision,” which serves to legitimise a form of warfare that is fundamentally dehumanising. By framing autonomous systems as a humanitarian solution to the “error-prone” nature of human soldiers, we are setting the stage for a future where mechanised slaughter is not only accepted but encouraged as a “best practice.” This shift in norms makes it harder to advocate for restraint and easier for nations to justify increasingly aggressive military postures. The peril of AI is therefore not just the risk of a rogue machine, but the very real possibility of a rogue humanity that has engineered itself to be comfortable with the automated destruction of its own kind.

Effect on Defence Manufacturing

In the context of specialised defence hubs and industrial logistics, the introduction of AI adds a layer of complexity that can lead to catastrophic “cascading failures.” For example, if a defence manufacturing facility relies on AI to optimise its supply chain and that AI is compromised, the failure could ripple out to every unit that depends on those supplies, producing a cascade of unreadiness. Similarly, in large-scale transportation contracts, like those for coal or strategic resources, an AI error in route optimisation or resource allocation could lead to massive economic disruption or environmental disaster.

The “efficiency” promised by AI is a double-edged sword; while it streamlines operations under normal conditions, it removes the “slack” or “redundancy” that humans provide, making the system incredibly brittle when things go wrong. In a military or strategic context, this lack of resilience is a fatal flaw that can be exploited by any adversary who understands the underlying logic of the system.
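The cost of stripping out slack can be shown with a toy dependency model. The sketch below is illustrative only: the node names and supply relationships are invented. A single upstream node is compromised and the failure is propagated through two versions of the same chain, one “lean” with a single source at every stage and one with modest redundancy. The lean design loses everything downstream; the redundant design absorbs the same shock at its point of origin.

```python
# Toy cascading-failure model over an invented supply/dependency graph.
# Rule: a node fails once ALL of its suppliers have failed, so a redundant
# second supplier absorbs a shock that a "lean" single-supplier design cannot.

def cascade(dependencies, initially_failed):
    """Return the full set of failed nodes once the cascade settles."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for node, suppliers in dependencies.items():
            if node not in failed and suppliers and all(s in failed for s in suppliers):
                failed.add(node)
                changed = True
    return failed

# "Lean" design: every stage has exactly one source (maximum efficiency, no slack).
lean = {
    "raw_materials": [],
    "fabrication":   ["raw_materials"],
    "assembly":      ["fabrication"],
    "depot":         ["assembly"],
    "unit_alpha":    ["depot"],
    "unit_bravo":    ["depot"],
}

# Redundant design: the same chain with a second source and a reserve stock.
redundant = {
    "raw_materials":   [],
    "raw_materials_2": [],
    "fabrication":     ["raw_materials", "raw_materials_2"],
    "assembly":        ["fabrication"],
    "depot":           ["assembly"],
    "reserve_stock":   [],
    "unit_alpha":      ["depot", "reserve_stock"],
    "unit_bravo":      ["depot", "reserve_stock"],
}

shock = {"raw_materials"}   # a single compromised upstream node
print("lean design, failed:     ", sorted(cascade(lean, shock)))
print("redundant design, failed:", sorted(cascade(redundant, shock)))
# Lean design: the single failure propagates to every downstream node.
# Redundant design: the same failure is absorbed at its point of origin.
```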

The Erosion of Human Agency and Moral Injury

The “moral injury” experienced by the programmers and engineers behind military AI is another critical, yet often overlooked, peril. These individuals are frequently removed from the direct consequences of their work, yet they are the ones who define the parameters of life and death on the battlefield. When an algorithm they designed is used to commit a war crime or trigger an unintended escalation, the psychological burden on the creators can be immense. This “distributed responsibility” does not relieve the moral weight; it merely scatters it, creating a sense of guilt and alienation among the very technological talent that a nation depends on. The militarisation of AI thus creates a toxic environment for innovation, where the brightest minds are forced to choose between technical advancement and their own ethical integrity. This internal rot can weaken a nation’s technological base from within, proving that the costs of military AI are not just strategic and legal, but also social and psychological.

Lack of Cross-Domain Coordination

The lack of “cross-domain coordination” in AI systems means that a failure in one area can lead to a failure in another without warning. A targeting AI might work perfectly, but if the logistics AI fails to deliver the necessary munitions, or if the communications AI is jammed, the entire system collapses. Because these systems are often developed by different contractors and operate on different proprietary platforms, they rarely “speak” to each other in a way that allows for graceful failure. This “siloed” development creates a “system of systems” that is incredibly complex and almost impossible to test as a whole. In the heat of battle, these unforeseen interactions can lead to “blue-on-blue” incidents, failed missions, and a complete loss of situational awareness. The complexity of military AI is its own greatest enemy, as it creates more opportunities for failure than it does for success.

The Imperative for Human Control

The final and perhaps most profound peril is the erosion of the “human element” in the most consequential decision a society can make: the decision to kill. War is, and must always be, a human tragedy governed by human responsibility. When we hand that responsibility over to an algorithm, we are not making war more “efficient”; we are making it more certain and less humane. The risks of accidental escalation, legal voids, and psychological detachment are not bugs that can be “fixed” with better data; they are inherent features of the technology itself. The pursuit of military efficiency through AI must be tempered by a radical commitment to keeping humans at the centre of every kinetic decision. Without a global consensus on the regulation of autonomous weapons and a rejection of the idea that machines can replace human judgment, we risk sleepwalking into a future where our tools of defence become the instruments of our own destruction. The “peril” is not that the machines will turn against us, but that we will turn into the machines—devoid of empathy, detached from morality, and trapped in a loop of automated violence from which there is no escape.

Conclusion

Ultimately, the integration of Artificial Intelligence into military operations is not a simple technological progression but a profound existential gamble. While the promise of increased efficiency and precision is alluring, the costs—measured in the erosion of law, the loss of human agency, and the increased risk of accidental global conflict—are simply too high to ignore. We are at a crossroads where the decisions we make about “Meaningful Human Control” and international regulation will determine whether we remain the masters of our tools or become the victims of our own ingenuity. The pursuit of military advantage must not come at the expense of our humanity, for a war fought by machines, governed by algorithms, and detached from human morality is not a war that can ever truly be won; it is merely a descent into a mechanised, automated chaos from which there may be no return. The “perils” of AI in the military are not distant future threats but immediate realities that demand a radical reassessment of our relationship with technology and our commitment to the preservation of a world governed by human responsibility and the absolute sanctity of the decision to take a human life.

Title Image Courtesy: https://lieber.westpoint.edu/

Disclaimer: The views and opinions expressed by the author do not necessarily reflect the views of the Government of India and the Defence Research and Studies. This opinion is written for strategic debate. It is intended to provoke critical thinking, not louder voices.


References

  1. International Committee of the Red Cross (ICRC) — Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons (2016–2023 series).
  2. UN Office for Disarmament Affairs (UNODA) — Lethal Autonomous Weapon Systems (LAWS): Legal, Ethical, and Security Dimensions.
  3. Human Rights Watch — Losing Humanity: The Case Against Killer Robots (updated editions).
  4. Ian Goodfellow et al. — Explaining and Harnessing Adversarial Examples (2014). Foundational work on adversarial inputs and AI misclassification.
  5. DARPA — GARD (Guaranteeing AI Robustness against Deception) program publications.
  6. MIT CSAIL / Stanford AI Lab — Research on out-of-distribution failure, data poisoning, and perception spoofing (LiDAR, IR, radar).
  7. Stanislav Petrov Incident (1983) — Declassified Soviet records and multiple Cold War analyses documenting false missile alerts and human override.
  8. Stockholm International Peace Research Institute (SIPRI) — Reports on AI, nuclear command-and-control, and escalation risk.
  9. Center for a New American Security (CNAS) — Paul Scharre, Army of None; reports on autonomy, escalation dynamics, and machine-speed warfare.
  10. RAND Corporation — Dangerous Thresholds: Managing Escalation in the AI-Enabled Battlefield.

By Lt Col Nikhil Srivastava

Lt Col Nikhil Srivastava served in the Regiment of Artillery, Indian Army, from 1993 to 2015. He has had a distinguished career in the Army, having served in Jammu & Kashmir and on the Siachen Glacier. He was part of the First Strategic Missile Unit raised by the Indian Army and was also part of the Quality Assurance team of India’s Integrated Missile Development Programme. He is a qualified Instructor in Gun Systems, Ballistics and Missile Systems. He is a graduate in Mathematics, holds a Master’s in Business Administration, and is certified in Project Management (PMP) and Lean Six Sigma. He is a motivational speaker and has delivered numerous lectures at colleges and schools.