Operation Epic Fury: The Dawn of the AI War
By Meghna Pradhan
The eruption of high-intensity hostilities in the Persian Gulf in late February and early March 2026, led jointly by the US and Israel under ‘Operation Epic Fury’ and ‘Operation Roaring Lion,’ marks a pivotal moment in modern warfare. The conflict, initiated with the aim of executing ‘decapitation strikes’ against Iran’s nuclear capabilities, command-and-control centers, and senior military leadership, has rapidly expanded into a multi-dimensional theatre spanning both kinetic and cyber warfare. Most notably, it is the first conflict openly acknowledged as a full-scale ‘AI war.’
The Role of Artificial Intelligence in Warfare
This designation goes beyond mere terminology. The conflict has witnessed an unparalleled integration of AI-driven assets functioning as Decision Support Systems (DSS), which have shifted from secondary analytical tools to primary enablers of lethal engagements. Traditionally, the cycle of intelligence gathering, target identification, simulation, damage assessment, predictive analysis, weapon assignment, and mission deployment would span weeks, if not months. In this conflict, however, operations unfolded at what can only be described as ‘the speed of thought’: the US conducted nearly 900 strikes on Iranian targets within the first 12 hours alone, and over 5,500 strikes within the initial ten days.
Tools and Technologies
To achieve such an extraordinary execution scale, the United States Central Command (CENTCOM) employed advanced AI tools, notably Palantir’s MAVEN Smart System (MSS) in conjunction with Anthropic’s Claude LLM. Drawing on vast troves of unstructured, classified data from satellites and intelligence sources, these AI systems enabled real-time targeting and prioritization. A prime example is the precision strike that led to the assassination of Iran’s Supreme Leader, Ayatollah Ali Khamenei; data meticulously gathered through long-term espionage contributed to the operation’s success.
The Emergence of Low-Cost Combat Drones
Simultaneously, the US introduced the Low-cost Unmanned Combat Attack System (LUCAS), a ‘kamikaze’ drone system designed as a cost-effective, high-volume answer to Iranian aggression. With a production cost of roughly $35,000 each, these drones stand in stark contrast to traditional munitions such as the Tomahawk missile, which can exceed $2.4 million per launch. Notably, the LUCAS system was reverse-engineered from Iran’s own HESA Shahed-136 drones, which gained notoriety during the Ukraine-Russia conflict. Equipped with AI capabilities enabling autonomous operation and swarm tactics, these drones mark a significant shift in the understanding of asymmetric warfare.
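The cost asymmetry described above can be illustrated with back-of-the-envelope arithmetic. The figures are the unit costs cited in the preceding paragraph; the comparison is purely illustrative, not an operational model:

```python
# Illustrative cost comparison between LUCAS drones and Tomahawk
# cruise missiles, using the per-unit figures cited above.
LUCAS_UNIT_COST = 35_000        # USD per LUCAS drone
TOMAHAWK_UNIT_COST = 2_400_000  # USD per Tomahawk launch

# How many LUCAS drones can be fielded for the price of one Tomahawk?
drones_per_tomahawk = TOMAHAWK_UNIT_COST // LUCAS_UNIT_COST

# Cost of a hypothetical 100-drone saturation wave, and how many
# Tomahawks the same budget would buy (swarm size is an assumption
# chosen for illustration, not a figure from the article).
swarm_size = 100
swarm_cost = swarm_size * LUCAS_UNIT_COST
tomahawks_for_same_spend = swarm_cost / TOMAHAWK_UNIT_COST

print(f"Drones per Tomahawk: {drones_per_tomahawk}")                 # 68
print(f"100-drone swarm cost: ${swarm_cost:,}")                      # $3,500,000
print(f"Tomahawks for that budget: {tomahawks_for_same_spend:.1f}")  # 1.5
```

On these numbers, roughly 68 LUCAS drones can be fielded for the cost of a single Tomahawk launch, which is the economic logic behind saturation tactics.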
Iran’s Countermeasures and Cyber Warfare
In response, Iran has adeptly employed drone saturation and cyber warfare tactics against US and Israeli targets, claiming responsibility for the deaths of six US military personnel in Kuwait through drone strikes. Iranian drones have also targeted American data infrastructure, with strikes reported on facilities linked to Amazon in the UAE. The Iranian hacker group Handala has meanwhile executed a series of cyber operations against US and Israeli interests, including attacks on military personnel and critical infrastructure. With indications of AI-assisted reconnaissance, Iran has shown that it is equally capable of wielding advanced technologies in its military strategy.
The Human Cost of AI Warfare
Despite the advantages offered by AI and drone technology, such as rapid decision-making and cost-effective saturation, the human toll has escalated dramatically. Compressed decision cycles leave human operators little opportunity to conduct thorough verification, often resulting in tragic oversights. Drone strikes and missile attacks have inadvertently caused civilian casualties; the strike near the Shajareh Tayyebeh Girls’ Primary School in Minab, which killed over 170 people, predominantly children, underscores the inherent risks of automated targeting systems.
Friendly Fire and Miscalculations
Moreover, the complexities of AI-driven warfare have led to instances of friendly fire, highlighting the vulnerabilities of automated systems in high-stress combat situations. The downing of three US F-15E Strike Eagles by Kuwaiti anti-aircraft fire exemplifies how the chaos of battle can lead to catastrophic errors when AI systems misinterpret engagements.
Accountability in the Age of AI
The advancements in military AI, while promising greater efficiency, raise pressing questions about accountability and adherence to International Humanitarian Law (IHL). Casualties that once took months to accumulate can now occur within days or hours, revealing the dark side of accelerated military operations. Efforts to integrate AI into warfare, if not managed with adequate safeguards, might worsen rather than alleviate the human costs associated with conflict.
Diplomatic Efforts Amidst Warfare
Remarkably, even as hostilities unfolded, diplomatic efforts to establish norms around military AI were simultaneously underway. In February 2026, world leaders convened at the Responsible Artificial Intelligence in the Military Domain (REAIM) Summit, while the UN Convention on Certain Conventional Weapons (CCW) held discussions on autonomous weaponry. Ironically, as officials debated ethical implications and regulatory frameworks, major powers were actively employing AI in real-time combat, potentially undermining those very efforts.
The Paradox of Military AI
The current conflict presents a paradox: the unchecked proliferation of military AI diminishes the effectiveness of global diplomatic initiatives, even as the urgent need for these discussions is underscored by the ongoing war. Despite a growing acknowledgment of the risks associated with autonomous warfare, major nations have shown reluctance to commit to binding agreements limiting the use of AI in military contexts.
With operational capabilities rapidly evolving, the international community faces the daunting task of formulating robust frameworks that ensure ethical use, legal compliance, and necessary human oversight in military operations utilizing AI. Failure to implement such frameworks risks normalizing practices that could lead to irreversible humanitarian consequences.
As we witness the unprecedented unfolding of the first full-scale AI war, the implications extend far beyond the battlefield to shape future diplomatic, ethical, and legal discussions surrounding warfare in an increasingly autonomous age.
