Overview: At the Pentagon, OpenAI Is In and Anthropic Is Out
This Hard Fork episode (The New York Times) walks through a chaotic 48-hour standoff between Anthropic, the Pentagon, and OpenAI. Anthropic refused to accept contract language that it says would permit use of its models for mass domestic surveillance and fully autonomous weapons; the Pentagon moved to severely restrict Anthropic’s ability to work with the U.S. military; and OpenAI announced a separate deal with the Pentagon that it says includes the same red lines. The hosts unpack what happened, what’s still uncertain, and why the dispute matters for control of powerful AI systems.
Timeline of key events
- Feb 26: Anthropic CEO Dario Amodei posts that Anthropic will not compromise on two “red lines”: (1) mass domestic surveillance and (2) fully autonomous weapons.
- Late Feb 26–27: Back-channel negotiations reportedly continue between Anthropic and the Pentagon.
- Friday afternoon: President Trump posts on Truth Social directing federal agencies to phase out use of Anthropic’s Claude over six months; the post did not mention a supply-chain designation or the Defense Production Act.
- Shortly after: Secretary of Defense Pete Hegseth posts on X directing the Department to designate Anthropic a “supply chain risk,” barring contractors and suppliers doing business with the military from commercial activity with Anthropic.
- Around the same time: A message of solidarity with Anthropic that Sam Altman sent to OpenAI employees leaks. Within roughly 24 hours, OpenAI announces a deal to deploy its models on the Pentagon’s classified networks, stating the agreement includes protections against the two red-line uses.
Main points and takeaways
- Surface paradox: Two companies claim identical red lines, but the government has moved to ban Anthropic while signing with OpenAI.
- Uncertainty: No public contracts; much depends on precise legal language (the “all lawful use” phrasing vs. enforceable restrictions).
- Legal gray zone: U.S. law lacks comprehensive AI- or privacy-specific regulation, so “all lawful uses” can include practices that are functionally surveillance but legally permitted (e.g., data broker purchases).
- High stakes: This affects who controls advanced AI capabilities — private labs or the state — and raises civil‑liberties and national-security concerns.
- Politics and personality: Part of the conflict may be ideological or personal; some see the Pentagon’s move as punitive/ideological, others as insisting on control over military applications.
Stakeholders and positions
- Anthropic (Dario Amodei): Refused to accept language that could allow mass domestic surveillance or fully autonomous weapons; framed refusal as a matter of conscience.
- Pentagon (led by Secretary of Defense Pete Hegseth): Pushed for language permitting “all lawful” uses; publicly moved to designate Anthropic a supply‑chain risk (via social media posts).
- OpenAI (Sam Altman): Announced a classified‑network deployment and contends the agreement includes the same red lines; touts a “safety stack” and stronger protections.
- White House / Trump: Trump’s Truth Social post directed agencies to phase out use of Claude over six months; it did not explicitly mention a supply‑chain designation or the Defense Production Act.
- Employees across labs: Internal employee activism and open letters expressing solidarity with Anthropic and opposition to surveillance/weaponization are important pressure points.
- Public/consumers: Early signs of consumer shifts (e.g., some users switching providers) could emerge as companies’ government ties are scrutinized.
Legal and technical nuances
- “All lawful use” vs. explicit contract red lines: The Pentagon’s preferred “all lawful use” language sounds neutral on paper; Anthropic counters that many surveillance‑like activities are currently lawful, so the phrase imposes no real constraint against future abuse.
- Supply chain risk designation: A harsh, punitive label (historically applied to non‑U.S. or adversarial firms) that would bar government contractors from working with Anthropic. At the time of the episode, the designation appeared to have been announced only via social posts, with no formal public proceedings or documentation.
- Defense Production Act (DPA) risk: The worst case, government compulsion under the DPA to produce a modified model, was not invoked publicly but was a feared possibility.
- Data brokers: Buying and aggregating personal data is legal and can produce functionally surveillance‑capable datasets; guarding model outputs alone (a “safety stack”) may not prevent downstream misuse if inputs are problematic.
- “Safety stack” skepticism: Hosts and sources question whether OpenAI’s promised built‑in protections amount to more than weak or superficial security theater.
Two ways to interpret what happened
- Political/ideological purge: The administration targeted a company it dislikes (Anthropic) and favored a politically closer partner (OpenAI), a punitive and unprecedented action against a major U.S. tech firm.
- Substantive difference in contracts: OpenAI genuinely accepted binding language and technical measures that protect against the red-line uses, while Anthropic judged the Pentagon’s offered language (and legalese) to be ineffective.
Hosts stress that both interpretations remain plausible and that key evidence (actual contracts and formal government filings) is missing.
Implications and what to watch next
- Will the Pentagon formally initiate a supply‑chain risk process and what legal effect will that have for Anthropic?
- Will contract text for OpenAI’s deal be disclosed or leaked, showing what “protections” were actually agreed?
- How will employee activism (whistleblowing, public letters) influence corporate behavior and contract implementation?
- Might federal policy or litigation follow (e.g., Anthropic suing to block the designation)?
- Consumer behavior and reputational effects for both labs: early signs of user switching could grow into broader market responses.
Notable quotes and observations
- Dario Amodei: Framed Anthropic’s refusal as a matter of conscience — “we cannot in good conscience accede to their request.”
- Hosts’ framing: This is potentially “the most punitive action the U.S. government has taken against a major American company” in recent memory.
- On data/legal gray zones: “It is legal for data broker companies to buy up data on millions of Americans… functionally equivalent to domestic surveillance.”
Recommended follow-ups (what the hosts will pursue)
- Obtain and analyze the contract language that Anthropic and OpenAI negotiated with the Pentagon.
- Track any formal declaration or legal proceeding about Anthropic’s supply‑chain risk status.
- Monitor whistleblower or employee disclosures about how deployed systems are actually used.
- Watch for policy or legal responses that define or constrain “lawful uses” of advanced AI.
Bottom line
The episode frames a rapidly unfolding constitutional, ethical, and commercial battle over who controls powerful AI tools. The headlines capture a dramatic near‑overnight reversal — Anthropic pushed out (or threatened with exclusion), OpenAI brought in — but the substantive reality hinges on contract wording, enforcement, and how “lawful” practice will be interpreted and regulated. The hosts warn that this moment is both unprecedented and consequential for civil liberties, national security, and the future of AI governance.
