Overview of The Political Scene — "The Pentagon Wants an Obedient A.I. Soldier. Will It Get One?"
This episode is a conversation between Tyler Foggatt and New Yorker staff writer Gideon Lewis-Kraus about recent clashes between Anthropic (maker of the Claude language models) and the U.S. Department of Defense. It traces the government’s use of Claude in sensitive operations (allegedly via Palantir), Anthropic’s contract with the DoD, the company’s safety restrictions, the Pentagon’s pushback and supply‑chain action, and the broader policy, legal, and ethical stakes of military uses of powerful AI.
Key takeaways
- Governments inevitably want to use powerful AI for military and intelligence work; Anthropic anticipated that and tried to place contractual guardrails on DoD use.
- Anthropic required two key limits: no fully autonomous weapons (keep a human in the loop) and no domestic mass surveillance. Those limits became focal points of conflict.
- Initial government adoption of Claude reportedly happened through Palantir’s platform (Claude was certified for classified servers and popular with Palantir users).
- Reports tie Claude to planning in a Venezuelan raid and to target‑selection workflows in operations against Iran, but concrete details of Claude’s role remain unclear.
- The DoD escalated from contract negotiations to labeling Anthropic a supply‑chain risk, threatening broad secondary restrictions that could have cut Anthropic off from essential partners—Anthropic sued.
- OpenAI subsequently made its own deal with the DoD, claiming to bake safety limits into its product rather than into contracts.
- The episode frames the central debate as whether AI is “normal tech” (treatable like other tools) or a fundamentally different, potentially uncontrollable technology (the alignment problem). That dispute underlies policy choices.
What happened — timeline and mechanics
- Early usage: Palantir’s platform reportedly allowed DoD users to choose Claude for real‑time information synthesis on classified servers; Claude was favored for assembling diverse data streams into actionable insights.
- High‑profile incidents: Reporting linked Claude to the January raid in Venezuela (the alleged abduction of Nicolás Maduro) and to operations in Iran, where rapid target generation was followed by civilian casualties (e.g., a strike on an elementary school) that raised alarms. Direct causation or responsibility has not been established.
- Contracting: In July 2025 Anthropic signed a roughly $200M contract with the DoD that included the two guardrails (no fully autonomous weaponization; no domestic mass surveillance).
- Escalation: Emil Michael (the DoD’s research and engineering chief) pushed for standardized contracts across vendors, with limits set by law rather than by company policy. Defense Secretary Pete Hegseth later publicly labeled Anthropic a supply‑chain risk and threatened sweeping secondary restrictions that could have cut Anthropic off from cloud providers, chips, and other partners.
- Legal response: Anthropic filed suit quickly after the supply‑chain designation. The government’s broadest threatened measures likely exceeded statutory authority; the final designation was narrower but remains legally contested.
- Industry moves: OpenAI negotiated a DoD deal, asserting safety constraints would be enforced technically (a “safety stack”) rather than contractually.
Core issues and sticking points
- Two contract red lines from Anthropic:
  - No fully autonomous weapons (retain human oversight in lethal decisions).
  - No domestic mass surveillance (avoid cheap, AI‑enabled, population‑scale profiling).
- Government stance: Some officials treat AI as “normal tech” that should be subject to existing rules and used for all lawful purposes; others emphasize operational needs and uniformity across vendors.
- Alignment and control: AI safety communities argue that we don’t yet know how to guarantee reliable, predictable behavior from powerful models (the “alignment problem”). The DoD and some contractors believe engineering fixes and iterative controls can suffice.
- Accountability: It matters whether an AI caused a bad outcome or a human would have made the same call, but AI also introduces new, diffuse accountability problems and can become a scapegoat for policy or intelligence failures.
- Legal/policy power play: Branding a company a supply‑chain risk (and threatening secondary boycotts) is an extreme lever that could reshape relations between government and tech. It also raised broad concerns among tech firms and national‑security lawyers.
Notable quotes and framing
- “We want an AI that is going to be a perfectly obedient soldier.” — summarizes the Pentagon’s implied desire for unquestioning, controllable AI behavior.
- “A panopticon could make tailored watch lists all day long” — a depiction of how mass surveillance might function if AI compiles data at scale.
- The Anthropic vs. DoD clash is presented as both a technical alignment debate and a political battle over loyalty, contract terms, and who sets the limits on powerful tech.
Industry responses and second‑order effects
- OpenAI took a different approach: it accepted the DoD’s standard contract language but claims to enforce red lines inside the product rather than in the contract, signaling a possible path for other firms.
- Silicon Valley firms have partly rallied behind Anthropic: multiple companies filed amicus briefs supporting it, indicating industry resistance to sweeping government sanctions.
- Potential chilling effects: Some tech companies may avoid government contracts if they fear arbitrary punishment; alternatively, smaller or more politically aligned firms might fill gaps, changing the vendor landscape.
- Long‑term risks: If government pressure narrows the field to a small set of compliant vendors (or drives companies to specialize in “loyal” implementations), it could distort innovation and competition.
Legal and policy uncertainties to watch
- Outcome of Anthropic’s lawsuit and judicial limits on the DoD’s supply‑chain authority.
- How the DoD defines “all lawful uses” and whether future contracts will standardize or diverge on red lines.
- Transparency and auditability requirements for AI use in national security: who logs decisions, how audits happen, and what independent oversight exists.
- Whether Congress, regulators, or courts step in to set clearer statutory boundaries on military AI, surveillance, and autonomous weapons.
Practical recommendations (what to expect / what to monitor)
- For policy watchers: track the Anthropic lawsuit, any formal regulatory guidance, and congressional hearings on AI and national security.
- For tech companies: assess contractual exposure, build strong legal defenses and public transparency practices, and consider collective industry stances to resist overbroad administrative sanctions.
- For journalists and civil society: demand documentation/audits for any DoD claims about AI’s role in targeting decisions; press for clearer delineations of human vs. machine responsibility.
- For the public: be aware that “no surveillance” clauses in contracts may be insufficient without enforceable technical and legal safeguards; advocacy for independent oversight is vital.
Bottom line
The episode presents one flashpoint, the Anthropic/DoD standoff, as emblematic of a larger clash: a government demanding predictable, obedient AI for national‑security use versus companies and safety advocates warning that powerful models are not yet reliably controllable and that weaponization and mass surveillance carry grave risks. The legal, technical, and political outcomes of this dispute will shape who builds military AI, under what constraints, and how accountability is assigned.