The Ongoing Feud Between the Pentagon and Anthropic: A New Era in AI and Military Ethics
The fierce public feud between the U.S. Department of Defense (DoD), sometimes referred to as the Department of War, and its AI supplier Anthropic is a striking clash of state power against corporate interests. In an arena that typically fosters close partnership, such a rift is rare, and it raises critical questions about the ethical use of artificial intelligence in military contexts.
Roots of the Disagreement
This conflict has its origins in a series of criticisms aimed at Anthropic, particularly from figures in the Trump administration. Donald Trump’s appointee and self-styled AI and crypto “czar,” David Sacks, has voiced concerns about the company’s perceived “woke” policies, questioning the appropriateness of its ethical stances. The criticism grew steadily more public, setting the stage for the clashes that followed.
Tensions reached a boiling point when media reports surfaced alleging that Anthropic’s technology, particularly the large language model Claude, was used in a highly controversial military operation: the violent extraction of former Venezuelan president Nicolás Maduro by U.S. forces in early 2026. According to anonymous sources within Anthropic, the incident sparked internal discontent and raised questions about the company’s ethical guidelines and how they are enforced.
Denial and Ultimatums
Anthropic has vehemently denied any violation of its internal policies in connection with the Maduro operation. Insiders maintain that the company found no evidence that the use of its technology in the raid breached its ethical guidelines. Yet the denial has done little to cool the discontent.
In a significant escalation, U.S. Secretary of Defense Pete Hegseth has issued a stern ultimatum: Anthropic must relax its ethical limits by February 27 at 5:01 PM Washington time or risk invocation of the Defense Production Act of 1950. That law would enable the DoD to commandeer Anthropic’s technology without its consent, and the department could simultaneously classify the company as a supply chain risk, jeopardizing its government contracts.
The Ethical Quandary: AI in Military Service
Central to this dispute is a profound question: how should Anthropic’s Claude be used in military contexts? Designed to handle a broad range of automated tasks, including writing, coding, reasoning, and analysis, Claude is deployed across many sectors of industry. Its role within military operations, however, raises serious ethical concerns.
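For readers unfamiliar with how Claude is typically consumed outside classified settings, the sketch below shows a minimal call to Anthropic’s public Messages API via its Python SDK. The model identifier and the prompt are illustrative placeholders only; nothing here reflects the defense-specific deployments discussed in this article.

```python
# Minimal sketch of a Claude API call via Anthropic's Python SDK
# (pip install anthropic). Model name and prompt are illustrative
# placeholders, not details from any defense deployment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model identifier
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Summarize the main findings of this report: ..."}
    ],
)
print(message.content[0].text)  # the model's text reply
```

The same request-response pattern underlies analysis and summarization workloads wherever Claude is integrated, which is precisely why contractual limits on acceptable prompts and applications, rather than technical ones, are at the center of this dispute.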
In July 2024, Anthropic partnered with Palantir, a company deeply embedded in U.S. government contracting, to integrate Claude into defense operations. The partnership was followed by a $200 million contract with the DoD that set stringent terms for acceptable use of Claude, including prohibitions on mass surveillance of U.S. citizens and on the development of fully autonomous weapons systems. These restrictions underscore Anthropic’s commitment to what it defines as “responsible AI.”
Pushback from the Pentagon
Despite these guidelines, the DoD argues that such restrictions are overly burdensome, particularly in a global climate marked by conflict and unpredictability. Hegseth has insisted on what he calls “any lawful use” of AI by the military, dismissing Anthropic’s limits as excessive, ideologically driven constraints.
In a memorandum issued in January 2026, Hegseth insisted that social ideologies such as Diversity, Equity, and Inclusion should not influence military AI applications, and called for future contracts to be rewritten so that AI can be employed without ideological interference.
Competitive Pressures and Market Dynamics
While Anthropic currently enjoys a competitive edge thanks to its established contracts and security clearances, it faces mounting pressure from rivals. Palantir has been expanding its offerings in partnership with the Pentagon, while tech giants such as Google and OpenAI have begun softening their ethical guidelines: Google has explicitly dropped its ban on using AI for weapons, and OpenAI has redefined its mission, signaling a willingness to align more closely with Pentagon requirements.
The Testing Point for Anthropic
Anthropic finds itself at a pivotal crossroads. In February, the company announced an update to its responsible scaling policy that dropped its promise not to release AI models whose risks it could not guarantee to mitigate. Chief Science Officer Jared Kaplan explained that such unilateral commitments no longer seemed feasible given competitors’ rapid advances in AI.
This is where the ethical landscape becomes murky. Many Silicon Valley companies use ethical language to differentiate themselves from “bad actors” abroad, yet face real difficulties when those ethics come at a potential cost to their competitiveness.
Broader Implications for AI in Warfare
The timing of this conflict aligns with international dialogues aimed at regulating AI in military contexts. In early February 2026, representatives from various nations, excluding the U.S., gathered to discuss frameworks for responsible military use of AI. The United Nations is also slated to hold discussions in March on limiting lethal autonomous weapons systems.
This public feud between Anthropic and the Pentagon encapsulates a broader, urgent conversation regarding the ethical dimensions of AI in warfare, a dialogue that is not only overdue but essential as technology continues to evolve. While Anthropic’s resistance to the Pentagon’s demands is admirable, the realities of military engagement in an AI-driven world may challenge any ethical guidelines set forth today.
