Can the Pentagon Force Anthropic to Remove AI Safety Guardrails? Here’s What the Law Allows.

GovFacts


Defense Secretary Pete Hegseth walked into a meeting with Anthropic CEO Dario Amodei at the Pentagon on February 24, 2026. He gave what sources described as an ultimatum: grant the military unrestricted access to Claude by Friday, February 27, or face consequences.

The threats were specific: contract termination, designation as a “supply chain risk” (a formal label that can bar a company from federal contracts), or using the Defense Production Act. This is a Korean War-era statute designed to direct factories and suppliers to prioritize military needs during national emergencies.

Anthropic said no.

The company’s safety guardrails, which restrict Claude from being used for mass surveillance of Americans or in fully autonomous weapons without human control, remain in place.

The question that has occupied legal scholars, national security experts, and Pentagon officials ever since is not whether Anthropic will give in. It is whether the government has the legal authority to force it to.

What Anthropic Is Refusing to Do

Before looking at what the government can legally force, it helps to understand what exactly is being demanded. The guardrails at issue are not vague ethical commitments printed in a company handbook. They are specific built-in features of Claude that limit what the system will and will not do.

Anthropic has embedded two primary safety commitments into its contracts with the Department of Defense. The first prohibits Claude from being used for mass domestic surveillance of Americans. This is not a broad restriction on all intelligence operations. Anthropic explicitly permits the military to use Claude for lawful foreign intelligence and counterintelligence missions.

What the company objects to is something more specific: using AI to automatically combine and analyze surveillance data collected on American citizens. In Amodei’s words, this means assembling “scattered, individually innocuous data into a comprehensive picture of any person’s life, automatically and at massive scale.” That process would operate without human review.

Under current law, the government can already purchase detailed records of Americans’ movements, web browsing, and associations from commercial data brokers without a warrant. The concern is that Claude could supercharge this into something new: total automated surveillance with no human analysts reviewing the results.

The second guardrail restricts Claude’s use in fully autonomous weapons systems. Amodei has publicly stated that partially autonomous weapons, like those deployed in Ukraine, are important to the defense of democracy. The issue is systems that identify, select, and engage a target without any human involvement in the final decision to fire. Anthropic’s position is that current frontier AI systems, including Claude, are too unreliable for that responsibility. They hallucinate. They misread context. They can be fooled by adversarial inputs deliberately crafted to trick the system. That vulnerability could produce catastrophic errors in combat.

The Pentagon wants both restrictions removed entirely. Reports suggest the Defense Department is demanding “any lawful use” language as a condition for future AI contracts and is pressing vendors to adopt the standard, though no formal directive with a specific timeline has been publicly documented. Pentagon officials have argued that military commanders cannot pause mid-operation to check that a private vendor approves of a particular use case.

Claude is reportedly the only frontier commercial AI model currently accessible on the Pentagon’s classified networks. To military planners, that makes the limits feel like a private company holding operational authority it was never supposed to have.

Multiple legal tools are available to the Defense Department, differing in their legal standards, their chances of success, and what options Anthropic would have if it resisted. The most important distinction is between tools that force immediate compliance and tools that impose painful consequences the company can fight in court.

The most dramatic tool is the Defense Production Act. The DPA, first enacted in 1950, gives the president broad authority to direct private industry in the name of national defense. One section of the law (Title I) allows the president to require that contracts for “critical and strategic” materials be put ahead of all other demands on a company’s production capacity.

Hegseth’s team has reportedly been threatening this provision, which would effectively give the military priority access to Claude’s capacity. Using it to dictate how Claude is configured, rather than how much of it is supplied, would be a legally disputed stretch of that authority.

Applying the DPA to AI software is legally unproven. The statute was designed for tangible goods — steel, semiconductors, critical raw materials — and its language refers to ensuring “the adequacy of productive capacity and supply” of materials “necessary to meet defense requirements.”

When President Biden invoked the DPA for AI in 2023, he relied on Title VII’s information-gathering authority (the section that lets the president require companies to share information), not Title I’s power to direct how resources are distributed. That distinction matters greatly. Title VII lets the president require companies to report on their AI safety testing. Forcing Anthropic to rebuild Claude’s safety architecture is a different legal question entirely.

Charlie Bullock, a senior research fellow at the Institute for Law and AI, has argued that the legal outcome is far from certain for either side. The DPA’s allocation language is genuinely broad, but whether forcing a private company to retrain its AI model — stripping safety features in the process — counts as a valid use of that authority is something a federal court would need to resolve. Both sides acknowledge they could lose.

Beyond the DPA, the Defense Department could invoke the government’s authority to change contract terms for national security reasons. Under the Federal Acquisition Regulation (the rulebook governing how the federal government buys goods and services), the government keeps broad rights to change contracts while the contract is active, particularly when national security is at stake.

Officials could argue that circumstances have changed and demand that Anthropic change Claude’s configuration as a condition of continued contract performance. Refusal would give the government grounds to terminate the contract and treat Anthropic as having failed to meet its obligations.

Contract termination is not criminal compulsion, but it targets Anthropic’s weak spot as a government contractor. Anthropic could challenge a default termination in the Court of Federal Claims, a specialized Article I court with nationwide jurisdiction over monetary claims against the U.S. Government, where the United States is always the defendant. It would argue that the Pentagon’s demand went beyond the contract’s scope or violated the company’s First Amendment rights. That litigation would take years.

Then there is the All Writs Act, a statute enacted in 1789 that gives federal courts authority to issue orders requiring third parties to help with law enforcement or national security investigations. The FBI invoked it in 2016 when it demanded that Apple create software to unlock an iPhone used by one of the San Bernardino shooters. Apple refused. The case never reached a final Supreme Court ruling because the FBI found another way into the phone and dropped the case. The legal arguments, however, set out important principles about the limits of compulsion.

Apple’s lawyers argued that creating a new capability would amount to compelled speech and violate the First Amendment. They also argued that the All Writs Act is a gap-filling statute that cannot authorize such compulsion where Congress has not acted, and that it cannot be used to force a company to create capabilities beyond its existing operations. Magistrate Judge James Orenstein, in a separate case, ruled that the All Writs Act could not be used to compel Apple to unlock an iPhone. He found that because Congress had repeatedly been informed about the “going dark” problem and declined to act, the All Writs Act — a gap-filling statute — could not be read to authorize the remedy the government sought.

The Anthropic situation differs from Apple-FBI in one potentially significant way: the Pentagon is not asking Anthropic to create a new capability. It is asking Anthropic to remove existing constraints. Orin Kerr, a leading expert on computer law at Stanford Law School, has suggested that the direction of the change might be legally irrelevant. If the government cannot force Apple to create a backdoor, it should not be able to force Anthropic to remove protections. Both amount to forcing a company to change how its product is built.

Other scholars argue that removing a feature is less burdensome than creating a new capability, and that the government might have a stronger case when asking a company to remove something that already exists. Neither view changes the fact that a court process is required. The Pentagon cannot on its own invoke the All Writs Act. It must petition a federal court. Anthropic would have the opportunity to contest the order before any compulsion takes effect.

Here is the part that trips people up about the Defense Department’s rhetoric: reports suggest officials have simultaneously threatened to designate Anthropic a “supply chain risk” (implying the company poses security concerns) and invoke the Defense Production Act to compel access to Anthropic’s technology (treating it as indispensable to national defense). Legal scholars have noted that these two positions directly contradict each other. A company cannot simultaneously pose such a security threat that the government must exclude it from federal contracting and be so important to national defense that the government must compel access to its technology. That contradiction may itself give Anthropic legal ammunition if either threat is enforced in court.

The following table maps the government’s main legal tools against their practical requirements and the legal risk each creates for both sides.

Pentagon legal tools for compelling AI vendor compliance: requirements, strength, and litigation risk
| Legal Tool | Statutory Basis | Requires Court Order? | Anthropic’s Main Defense | Precedent Status | Source |
|---|---|---|---|---|---|
| Defense Production Act (Title I) | 50 U.S.C. § 4511 | No (executive action) | DPA not designed for software architecture | Title I never applied to AI; Biden’s 2023 EO used Title VII only | Lawfare |
| Contract default termination | FAR Part 49 | No (administrative) | Demand exceeded contract scope; First Amendment | Litigable in Court of Federal Claims | Holland & Knight |
| All Writs Act | 28 U.S.C. § 1651 | Yes | Compelled modification of protected code | Apple–FBI unresolved | Wikipedia |
| Supply chain risk designation | FAR 52.204-30 | No (administrative) | APA challenge; contradicts DPA rationale | Rarely litigated | Lawfare |
| Executive order / national security directive | IEEPA or DPA | No (executive action) | Order must trace to statutory authority | Courts require statutory basis | Holland & Knight |

Sources: Lawfare analysis of DPA authority; Federal Acquisition Regulation Part 43; Apple-FBI encryption dispute. Note: “Requires court order” refers to initial compulsion, not subsequent litigation.

First Amendment Questions: Is AI Safety Architecture Protected Speech?

The demand potentially implicates the First Amendment in ways that have never been litigated but could prove decisive.

Anthropic might argue that its AI model’s safety architecture is protected expression: an editorial choice about what the system will and will not say, much like a publishing company’s choice about what content it publishes. Courts have confirmed that platforms’ content moderation decisions can be protected speech. If Claude’s safety guardrails are understood as Anthropic’s own editorial decisions about what the system will express, then the government’s attempt to force their removal could amount to compelled speech. The First Amendment prohibits compelled speech.

The government’s counterargument is that this framing conflates regulation of code with regulation of speech, and that it misreads both. On this view, computer code is purely functional: it causes things to happen rather than expressing ideas. Claude’s safety guardrails are a product specification, like the engine specifications for a military vehicle. Governments regularly compel private companies to build products to their specifications. Why should AI be different?

UCLA law professor Eugene Volokh, the leading scholar on this question, has argued that computer code does receive First Amendment protection and that this protection extends to AI systems. Volokh’s reasoning: if the government cannot force Apple to write a backdoor for security purposes, it likely cannot force Anthropic to remove safety features from Claude, because both amount to compelled modification of protected code.

No court has clearly ruled on whether AI model architecture is protected speech. That unresolved status cuts both ways. It creates real legal risk for the government if it tries to enforce compulsion, because a court might uphold Anthropic’s First Amendment defense. But it also leaves Anthropic exposed. The company cannot rely on clear precedent. It must instead make a new argument to a federal judge who may or may not be sympathetic.

There is also a precedent worth taking seriously. The Communications Assistance for Law Enforcement Act (CALEA), passed in 1994, directly required telecommunications carriers and equipment manufacturers to change their network architecture to allow law enforcement surveillance. CALEA required carriers to build surveillance capabilities into their systems. It did not excuse anyone based on claimed conflicts with corporate philosophy or user privacy values. The law was later extended to broadband and internet phone services (VoIP).

This is the closest parallel the government has: a statute that required specific technical architecture in systems where government was a major purchaser, and that survived legal challenge. Whether CALEA’s logic extends to AI safety features is disputed. The Pentagon’s lawyers are aware of it.

Anthropic’s Corporate Structure May Limit Its Ability to Comply

A deeper legal problem has received surprisingly little attention: Anthropic may face legal liability from its own investors and governance structures if it does comply with the Defense Department’s demands.

Anthropic is structured as a Delaware public benefit corporation, a legal structure that formally requires the company to balance profit against a stated public mission. This legal form gives the company’s board clear authority to weigh shareholders’ financial interests against the company’s stated public benefit purpose: “the responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

This is not aspirational language in a press release. It is embedded in the company’s founding legal documents and creates legal duties for the board of directors.

Anthropic created a Long-Term Benefit Trust, formally established at the May 2023 Series C close via corporate charter amendment — though a precursor Long-Term Benefit Committee had existed since 2021. This governance structure gives independent trustees the authority to elect up to a majority of the company’s board. The Trust is governed under Delaware’s purpose trust framework, which permits organizing documents to mandate a specific mission. The Trust’s own charter requires trustees to ensure that Anthropic “responsibly balances the financial interests of stockholders with the interests of those affected by Anthropic’s conduct.”

The company’s safety commitments in its corporate charter may create real legal limits on the board’s ability to strip safety features from Claude, even under pressure from the Pentagon. If the board complied without carefully reviewing whether doing so conflicted with the company’s public benefit purpose, shareholders or Long-Term Benefit Trust trustees could potentially sue for breach of fiduciary duty: the legal obligation directors owe to act in the best interests of the company and those it serves.

The bind is real. Comply with the demand, and Anthropic risks shareholder litigation for breach of fiduciary duty. Refuse, and the company loses a $200 million contract and faces potential exclusion from future federal AI contracting. Neither option is legally clean.

For the Defense Department, this creates an added complication: even if the government could successfully force Anthropic through a DPA order or contract modification, the company’s board and Trust trustees might resist. Their grounds would be that compliance violates their fiduciary duties under Delaware corporate law. That would create a clash between national security authority and state corporate law, an unusual conflict that has never been resolved. It would almost certainly require a court to untangle.

How Pentagon Procurement Could Reward Vendors Who Remove Safety Guardrails

If Anthropic successfully resists Pentagon pressure and the government follows through on its threats, the most significant consequence may not be the loss of a single contract. It may be a complete shift in how the Pentagon acquires military AI. The effects would reach far beyond this dispute.

xAI, Elon Musk’s AI company, has already signed a contract — finalized February 25, 2026 — to deploy Grok on classified military networks, while Google and OpenAI are in discussions about extending their services to classified environments. Unlike Anthropic, xAI has reportedly agreed to “any lawful use” language in its Pentagon contracts. If Anthropic loses its exclusivity on classified networks, the military would simply go around Anthropic’s resistance and work with vendors more willing to meet their demands.

This creates a race to the bottom that policy experts call regulatory arbitrage. If Anthropic’s safety-conscious stance makes it less competitive compared to vendors willing to remove guardrails, the market incentive pushes toward less safe AI systems. Pentagon procurement would in effect favor vendors who are willing to loosen safety constraints. This does not just affect Anthropic. It affects the entire AI industry’s approach to safety.

Paul Scharre, a leading expert on military AI at the Center for a New American Security and author of “Army of None,” has raised concerns about this dynamic. Experts in his field argue the result would not be an arms race in AI capability but in the removal of safety constraints — leaving military AI systems less able to withstand hostile attack and more vulnerable to catastrophic errors in combat.

The Pentagon’s own AI ethics principles, adopted in 2020, explicitly recognize the dangers of AI systems without adequate oversight and control. The DoD committed to ensuring that military AI systems are “responsible, equitable, traceable, reliable, and governable.” Tradeoffs between those commitments and the demand for unrestricted use are not hypothetical. They are central to the conflict at hand.

The arbitrage scenario is arguably worse than the Pentagon successfully forcing Anthropic to remove guardrails. At least in that scenario, the government acts under law and potentially faces judicial review. In the arbitrage scenario, the market simply rewards vendors willing to sacrifice safety for contracts and punishes those who won’t. The minimum safety standard for military AI quietly drops.

The Legislative Gap: Congress Has Not Set Rules for Military AI

The conflict exposes a gap in legislation that nobody seems eager to fill. The core question is what values should be built into military AI systems and who decides. That question is currently being answered through executive pressure and threatened lawsuits rather than through laws passed by Congress.

Provisions in the FY2026 NDAA touch on military AI, but Congress has not enacted binding safety standards for it. The core question of what values should be built into military AI systems, and who decides, remains unanswered by statute.

That’s not an accident. Writing clear rules for military AI requires Congress to take positions on questions that are technically complex, politically divisive, and potentially embarrassing if the rules turn out to be wrong. Letting the executive branch and the courts sort it out is much easier. The cost of that convenience is paid by everyone who would benefit from knowing, in advance, what the rules are.

The February 27 deadline, set for 5:01 PM ET that Friday, passed without a publicly announced court order or DPA invocation. The conflict did not end. It moved from an ultimatum to a slower-burning legal and regulatory contest. The Pentagon can still terminate Anthropic’s contracts, still pursue a supply chain risk designation, still seek a court order under the All Writs Act. Anthropic can still challenge each of those moves.

The outcome of that contest will depend on questions that American law has not yet answered. Those questions include whether the Defense Production Act covers AI software, whether AI model architecture is protected speech, and whether a company’s corporate governance structure can limit what the government can demand in a national security context.

None of those questions will be settled quickly. And until they are, every AI company with a Pentagon contract is watching to see what happens to the one that said no.

