Anthropic doesn't trust the Pentagon, and neither should you

by The Verge

48 min · March 12, 2026

Overview

This episode of Decoder (host Nilay Patel) features Mike Masnick (founder of Techdirt) and explores the Anthropic–Pentagon dispute as a lens on the larger history and mechanics of U.S. surveillance law. It explains why private companies distrust government promises about "lawful" surveillance uses of AI, how legal interpretations and secret practices have expanded government surveillance powers over decades, and what's at stake as AI dramatically amplifies those capabilities.

Key takeaways

  • The Pentagon designated Anthropic a "supply chain risk"; Anthropic sued, arguing the designation violates its First and Fifth Amendment rights and threatens its business value.
  • Anthropic’s red-line concerns center on mass surveillance (and separately on autonomous weapons). The mass-surveillance worry is rooted in decades of legal and covert practices that allowed the government to collect and search vast troves of data on U.S. persons.
  • Government lawyers and intelligence agencies have repeatedly interpreted words like “target” and rules like Executive Order 12333 in expansive ways that effectively permit bulk collection and retrospective searches of communications.
  • Two legal doctrines are central: the FISA/secret-court regime (one-sided, ex parte review) and the Third-Party Doctrine (which reduces Fourth Amendment protections for data held by service providers).
  • Companies can and sometimes do resist government requests (e.g., Apple vs. FBI), but the Anthropic case is unusual because the administration used an aggressive supply-chain designation as leverage against a U.S. company for its policy commitments.
  • There’s a plausible First Amendment (compelled-speech) argument: forcing a company to build tools it objects to (e.g., for mass surveillance) could be unconstitutional because code is a form of speech.
  • The dispute is a public, blunt version of fights that were usually hidden or argued in secret courts, which makes the tradeoffs visible and politically urgent.

Background and legal context

What changed over time

  • Post‑9/11 laws (Patriot Act), secret FISA court processes, and practices under Executive Order 12333 created a legal and operational environment where surveillance authority expanded incrementally across multiple administrations.
  • Revelations (e.g., Snowden) exposed that agencies like the NSA had interpreted rules broadly and collected foreign-centric data that included communications of U.S. persons, then performed “backdoor” searches on that data.

Key legal concepts explained

  • FISA Court: secret, one-sided review of intelligence collection requests (historically high grant rates).
  • Executive Order 12333: an older executive order used to justify foreign‑intelligence collection that can sweep up communications routed outside the U.S.
  • “Target” redefinition: the intelligence practice of stretching the meaning of terms like “target” so that communications merely mentioning a foreign target — including communications of U.S. persons swept in incidentally — become fair game.
  • Third-Party Doctrine: courts have held that information held by third parties (phone companies, cloud providers) has reduced Fourth Amendment protections, enabling easier government access.
  • Compelled speech/code is speech: argument that forcing a company to write code or build systems it objects to may violate the First Amendment.

Anthropic vs. the Pentagon — the immediate dispute

  • Anthropic refused to agree to language allowing government use of its AI for broad surveillance (and for certain military/autonomy roles), seeking explicit guardrails against mass surveillance.
  • The Pentagon responded by labeling Anthropic a supply‑chain risk, a designation typically aimed at malicious foreign vendors; Anthropic challenged this in court.
  • The administration’s blunt approach (public threats, supply‑chain designation) marks a departure from previous, more discreet legal bargaining and has escalated the conflict beyond typical sealed legal disputes.

The role of private companies and differing corporate approaches

  • Anthropic framed itself as a safety‑focused company and refused to be a tool for mass surveillance; that stance put it in direct conflict with an administration seeking wide access.
  • OpenAI (Sam Altman) publicly stated it would allow “lawful uses,” which appeared to rely on the statutes’ plain wording; critics argue OpenAI either didn’t understand how the government interprets the law or chose to rely on opaque government interpretations.
  • Historically, tech companies have sometimes resisted government access (e.g., encryption backdoors), but enforcement mechanisms and national-security rationales can pressure companies or lead to litigation.

Implications and risks

  • AI makes previously impractical surveillance massively scalable (constant, automated analysis of huge datasets), increasing the stakes for Fourth Amendment and privacy protections.
  • Public, political fights over AI and surveillance are likely to continue and intensify—this may be the first time many of these balancing questions are discussed in public, not just in secret courts.
  • The Anthropic case could set precedent for how much leverage the government can exert over U.S. companies on security and surveillance uses of AI.
  • If courts accept compelled-speech arguments, future government orders forcing specific code/behavior could face constitutional limits—but litigation will be complex and protracted.

Notable quotes and insights

  • “The NSA has its own dictionary” — meaning intelligence agencies have long reinterpreted common legal terms to expand their surveillance authority.
  • “The third-party doctrine has sort of swallowed the entire Fourth Amendment” — Mike Masnick on how modern data centralization undermines traditional privacy protections.
  • FIRE’s point: forcing Anthropic to build tools it objects to may amount to compelled speech because code is a form of expression.

What to watch next (actionable items)

  • Court filings and rulings in Anthropic’s lawsuit challenging the supply‑chain designation and alleging constitutional violations.
  • Any policy changes or legislative efforts to clarify limits on AI use for government surveillance, and efforts to narrow the Third-Party Doctrine.
  • How other AI companies publicly frame their red lines; whether more firms adopt Anthropic‑style guardrails or acquiesce to government “lawful use” demands.
  • Emerging precedent on “code as speech” and whether courts accept First Amendment claims in this context.

Recommended further reading / listening

  • The full Decoder episode for deeper context and examples (host Nilay Patel with guest Mike Masnick).
  • Techdirt coverage (Mike Masnick) for ongoing analysis of surveillance law and policy.
  • Reporting on Snowden disclosures and Executive Order 12333 for historical background on intelligence collection practices.

This episode frames the Anthropic–Pentagon clash not as an isolated corporate dispute, but as a public moment that exposes long-standing legal evasions and the urgent need to revisit how surveillance law meets modern AI capability.