The intersection of artificial intelligence (AI) and defense technology poses one of the most complex and consequential regulatory challenges of our time. Governments and international bodies are wrestling with fundamental questions about human oversight and the preservation of humanitarian law in warfare. This article examines the current state of AI regulation in defense technology, covering legislative developments, ethical frameworks, and the path forward for responsible governance.
Military applications of AI are advancing rapidly. While lethal autonomous weapons systems (LAWS) attract significant attention, AI’s applications span a much broader range, including logistics; intelligence, surveillance, and reconnaissance (ISR); semi-autonomous and autonomous vehicles such as drones; cyber warfare; disinformation; and more.
These technologies serve both offensive and defensive roles, aiming to augment or relieve human operators, freeing them to focus on more complex, cognitively demanding tasks. AI systems can:
- React significantly faster than systems that rely on human input.
- Handle an exponential increase in available data for analysis.
- Enable innovative operational concepts such as swarming, wherein unmanned vehicles autonomously coordinate to achieve strategic objectives, potentially overwhelming adversary defenses.
As such, ethical and legal considerations are paramount when shaping the development and deployment of military AI.
The Current Legislative Landscape
United States: Federal and Defense Approaches to AI Governance and Ethics
The US military has traditionally relied on technological superiority to ensure national security. Notably, the 2018 and 2022 US National Defense Strategies underscore AI as essential for maintaining military superiority. Prominent examples of AI-enabled weapons systems include the MQ-9 Reaper drone, which utilizes AI for target identification and tracking, and the Sea Hunter, an autonomous naval vessel designed for anti-submarine warfare.
In October 2023, President Biden issued Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, creating foundational requirements for AI safety and security across federal agencies, including defense applications. The order mandates that federal agencies establish governance structures for AI and conduct safety evaluations of AI systems that may pose national security risks. However, President Trump revoked the order in January 2025, and the White House subsequently launched its AI Action Plan, which prioritizes accelerating AI development over safety-focused regulation.
The US Department of Defense (DOD) has developed its own AI ethics principles through the Responsible AI Strategy and Implementation Pathway. This framework champions human-machine collaboration rather than fully autonomous systems. It promotes ethical and lawful AI integration to maintain military advantage while fostering trust among allies. The Pentagon’s approach emphasizes retaining appropriate levels of human judgment over the use of force, consistent with broader humanitarian law principles.
Congressional oversight has intensified as various committees closely scrutinize AI developments. The National Defense Authorization Act directs the DOD to accelerate the development and responsible integration of AI technologies across military operations. The act underscores human oversight and mandates pilot programs, research, and collaboration to ensure that AI systems are secure and interoperable. Both the House and Senate Armed Services Committees have conducted hearings on AI governance, though comprehensive federal AI legislation remains in development.
The US also proposed a non-binding Code of Conduct for Lethal Autonomous Weapon Systems to promote responsible behavior and adherence to legal standards, while opposing any preemptive ban on such systems. In February 2023, at the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) in The Hague, the United States launched its Political Declaration on Responsible Military Use of AI and Autonomy, which sets out a framework for the ethical and responsible use of AI in military contexts, emphasizing compliance with international law and the necessity of human accountability. As of November 2023, 45 countries had endorsed the declaration, although the Trump administration has since disavowed it.
United Kingdom: Principles-Based Regulation
The United Kingdom has adopted a proactive and structured approach to the ethical and regulatory challenges raised by AI in defense technology. Central to this is the UK Ministry of Defence’s (MOD) commitment to ensuring that AI systems are responsibly deployed in line with international humanitarian law and are anchored in clear ethical principles. The UK Defence AI Strategy, published in 2022, emphasizes transforming the MOD into an ‘AI-ready’ organization by developing essential skills, technical frameworks, and research initiatives to enhance AI capabilities. It insists on preserving human judgment in critical decisions and advocates for transparency and risk mitigation in military applications.
Ethical oversight within the UK defense sector is formalized in Joint Service Publication 936 (Part 1), which sets out the MOD’s AI governance model and ethical principles. This framework embeds principles such as accountability, fairness, and reliability throughout the lifecycle of AI systems. It mandates structured ethical assessments and a tiered risk management process to promote responsible AI development, reiterating the need for “meaningful and informed human involvement,” particularly in contexts where systems could cause harm.
Practically, the UK employs a multi-stakeholder, evidence-based model for setting standards and safeguards for AI in defense. This approach entails collaboration across government departments, industry, and academia, alongside participation in international forums such as the UN Convention on Certain Conventional Weapons (CCW), under which discussions on LAWS are conducted. Through these partnerships, the UK seeks to influence global norms and ensure that domestic frameworks are interoperable with those of allied nations and compliant with international law.
While the MOD’s governance systems are distinct from civilian regulatory regimes, they align with the broader UK government’s “pro-innovation” stance on AI regulation, as articulated in a 2023 White Paper. This cross-sector strategy prioritizes safety, accountability, and contestability without imposing overly burdensome legislation, allowing the defense sector to lead on tailored governance while maintaining alignment with national AI principles. These frameworks collectively reflect a cohesive ethical direction, balancing technological progress with legal and moral integrity.
European Union, Council of Europe, and UN Negotiations
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) expressly exempts military applications under Article 2(3). Although defense-specific systems are excluded, the principles in the AI Act provide an indicative benchmark for military AI governance. Where a system is dual-use and is also placed on the market or used for civilian purposes, the AI Act applies, mandating transparency and thorough assessments for high-risk systems.
A key legal concept within the AI Act is the requirement for human oversight of high-risk AI systems, which closely aligns with the ethical principle of ‘meaningful human control’ (MHC) in defense. MHC ensures that decisions related to using force remain under human authority. Analogously, the AI Act requires operators of high-risk systems to retain the ability to interpret and intervene in AI decision-making processes. Documentation requirements for high-risk AI systems support compliance with international humanitarian law, facilitating post-operation reviews and legal accountability.
This legal framework encourages structured ethical assessments in military AI, promoting practices such as tiered risk analysis and ethical impact assessments. The responsibilities assigned to various stakeholders under the AI Act are relevant in a defense context, where defining legal responsibility for autonomous decisions poses a complex challenge.
Consequently, the AI Act may serve as a reference point for the defense sector even beyond dual-use applications, encouraging the adoption of its principles in operational military AI scenarios and raising the ethical standards applied to AI integration in defense.
The European Parliament has actively addressed autonomous weapon systems through various resolutions. A January 2021 report endorsed the creation of a legal EU framework to require “meaningful human control” over military AI while calling for a prohibition on autonomous lethal systems. Additionally, the 2020 A9-0186 resolution advocated for robust oversight of military AI and its alignment with international humanitarian law, urging legally binding norms for autonomous targeting and deployment functions.
The Council of Europe’s 2024 Framework Convention on AI emphasizes human rights, democracy, and the rule of law throughout the AI lifecycle. While it exempts military and national security applications from direct obligations, it affirms that states remain bound by broader international human rights and rule-of-law commitments under the European Convention on Human Rights and other treaties.
Global Initiatives
At the global level, states parties to the CCW, which prohibits or restricts conventional weapons deemed to cause unnecessary suffering, have debated autonomous weapons since 2014. The Group of Governmental Experts (GGE) formed under the CCW has focused on emerging technologies in the area of LAWS, emphasizing human responsibility and accountability and insisting that decisions on the use of force remain under human control throughout a weapon’s lifecycle. The GGE has also called for a normative and operational framework to address the challenges posed by LAWS.
Some nations and intergovernmental organizations advocate a preemptive ban on LAWS, but consensus has proven elusive: many countries prefer regulatory approaches or codes of conduct over outright bans because of the strategic advantages LAWS may offer. Meanwhile, UN Secretary-General António Guterres has repeatedly called for states to conclude, by 2026, a legally binding instrument prohibiting LAWS that operate without human control or oversight. Without a definitive international framework, however, the risk of an arms race in autonomous weapons looms, likely deepening global insecurity.
Ethical Frameworks and Normative Standards
Law and policy frameworks across various regions increasingly emphasize foundational ethical principles for defense AI:
- Meaningful Human Control: The authority to decide on the use of force should remain with a human, limiting autonomy to non-lethal support functions. Proposed definitions of “meaningful control” range from real-time engagement to pre-mission authorization or post-hoc reviewability.
- Distinction and Proportionality: AI systems must differentiate between combatants and civilians and align military actions with the proportionality standards stipulated by international humanitarian law.
- Transparency and Traceability: Operators must document AI-driven decisions, ensuring that oversight bodies can audit system design and operations. This principle faces challenges when military AI systems are classified, limiting transparency.
- Accountability and Responsibility: Ensuring accountability for autonomous AI decisions is complex. If an AI system inadvertently causes harm, attributing liability becomes challenging. Clear legal frameworks must define how fault is attributed, whether to the commander, manufacturer, programmer, or deploying state.
Despite widespread acceptance of these principles, implementing them presents significant challenges. Ambiguity surrounding human oversight criteria engenders uncertainty about what constitutes “meaningful control.” The closed-source nature of military AI hinders ethical oversight and raises questions about the efficacy of existing safeguards. Furthermore, the absence of robust enforcement mechanisms leaves compliance with guidelines dependent on voluntary adherence, prompting concerns about the adequacy of self-regulation. There are overarching fears regarding AI’s potential to escalate conflicts; reducing risks to one’s military personnel may lower the political barriers to initiating force, potentially leading to more frequent armed conflicts.
Looking Forward: Critical Junctures Ahead
As the legislative backdrop evolves, diverse approaches are emerging, from nascent regulatory frameworks in the United States and United Kingdom to influential EU resolutions and UN-level treaty negotiations. Across jurisdictions, consistent ethical principles underscore the need for human oversight, transparency, accountability, and compliance with international humanitarian law. The next few years may prove pivotal for AI regulation in defense technology, with a potential shift from normative rhetoric to enforceable rules.
