Regulation of Military AI: The Role of Rules of Engagement
Introduction to Military AI Regulations
As artificial intelligence (AI) technology continues to evolve, its applications in military contexts raise pressing ethical, legal, and operational questions. Military AI systems, particularly those with autonomous decision-making capabilities, require regulatory frameworks that comply with existing international law. The current legal landscape, however, offers little guidance specific to military AI, highlighting the need for robust mechanisms to govern these technologies.
The Need for Military AI Regulations
The unique characteristics of AI distinguish it from traditional military hardware and software. AI systems can autonomously make decisions, which can lead to significant risks if not properly regulated. For example, the U.S. Long Range Anti-Ship Missile (LRASM) is reported to autonomously select and engage targets without human intervention. Additionally, during the Libyan conflict, the Kargu-2 drone reportedly engaged targets without direct human oversight. These instances highlight the moral and operational dilemmas posed by military applications of AI, underscoring the urgency for proper regulatory frameworks.
International and Domestic Initiatives
In response to these challenges, various international bodies are working to establish comprehensive guidelines for military AI. Initiatives include the Group of Governmental Experts on Lethal Autonomous Weapons Systems, the Responsible AI in the Military Domain (REAIM) summits, and the U.S.-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Domestically, countries are adopting policies guiding AI use within their military structures. The U.S. Department of Defense has adopted Ethical Principles for Artificial Intelligence, while NATO has introduced its Principles of Responsible Use of Artificial Intelligence in Defence. Yet despite these advances, a coherent framework is still needed to operationalize these instruments' core objectives in the unique context of AI.
Understanding Rules of Engagement (ROE)
Rules of Engagement (ROE) provide a structured method for regulating military actions and decisions. Modern militaries employ ROE to clarify when and how forces can be used, turning political and strategic intentions into actionable military protocols. As stated in the San Remo Handbook on Rules of Engagement, ROE must integrate military and political policies while adhering to existing legal parameters.
Typically, ROE outline:
- Authorization and limitations regarding the use of force.
- Positional directives concerning troop movements.
- Guidance around employing various military capabilities.
ROE may prescribe specific instructions, including the circumstances under which force can be used and restrictions on targeting vulnerable populations or non-combatants.
ROE as Regulatory Frameworks for Military AI
Given the complexity of AI technologies, ROE can serve as an effective regulatory framework for military applications of AI, ensuring that policy covers the full spectrum of ethical, legal, and operational considerations.
Holistic Considerations
Modern military conflicts are often governed by ethical frameworks that transcend traditional military logic. Discussions surrounding military AI emphasize the necessity for regulations to incorporate ethical constraints, including the extent of autonomy allowed for AI systems and the level of human oversight required. ROE can integrate these diverse layers of considerations into a coherent set of guidelines tailored to specific technologies.
Specificity in Regulations
Given the varied applications of AI—from weapon systems to logistical management—it is essential for regulations to be specific. Different systems will require tailored ROE; AI employed in an offensive capacity, for instance, will necessitate distinct rules compared to AI utilized in medical support or logistics. ROE can therefore outline operational limitations while staying aligned with overarching norms established by international laws and military policies.
Concrete and Flexible Execution
ROE must not only provide concrete directives but also possess the flexibility to adapt to dynamic operational environments. Each military mission presents unique challenges, requiring ROE that can evolve in response to changing circumstances. Typically, ROE include a hierarchical structure that allows for adjustments based on commanders’ assessments during operations, thus ensuring effective governance of AI systems capable of learning and adapting to their environments.
ROE on Human-Machine Teaming and Control
A critical aspect of regulating military AI lies in defining the parameters for human-machine teaming and human control over AI systems. ROE can stipulate how a commander or operator monitors and controls AI during operations. This may include delineating operational zones or task parameters where AI systems may be deployed.
Moreover, ROE can specify the required forms of human oversight, such as direct or supervisory control, which is critical for ensuring that human judgment remains paramount, especially when AI systems are involved in targeting decisions. Notably, states may prohibit AI from making autonomous targeting choices, necessitating strict guidelines on human participation.
Accountability Checks and Legal Oversight
ROE can also define mechanisms for ensuring that operators remain vigilant and accountable. Provisions might include mandatory notifications when AI systems encounter unexpected scenarios or when decisions deviate from established protocols. This aspect of ROE can play an essential role in minimizing unlawful or unethical use of AI in military engagements.
Where human-machine collaboration intersects with complex legal frameworks, such as targeting law, ROE can require commanders to obtain legal advice before engaging in operations involving AI, ensuring that legal and moral standards are consistently maintained as the technology evolves.
Conclusion
ROE possess the potential to be a fundamental tool for regulating military AI, embodying a holistic yet specific approach adaptable to rapidly changing operational realities. These rules bridge the gap between military directives and ethical considerations, ensuring that human oversight and legal obligations remain at the forefront of military AI applications. By doing so, ROE can not only guide military operations but also support the ongoing development of fair and responsible regulations in an increasingly complex technological landscape.
Dr. Tobias Vestner is the Director of the Research and Policy Advice Department and the Head of the Security and Law Programme at the Geneva Centre for Security Policy (GCSP).
