OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop

by The New York Times

1h 5m · March 6, 2026

Overview

This Hard Fork episode (The New York Times) covers three core storylines: the fallout from OpenAI’s new Pentagon deal and the broader implications for Anthropic and the AI industry; the emergence and controversy of prediction markets tied to the U.S.–Iran conflict; and a “Hard Fork Review of Slop” segment documenting a surge of AI‑generated, low‑quality children’s videos on YouTube and YouTube Kids.

Episode structure / segments

  • OpenAI’s Pentagon deal, public backlash, internal dissent, and implications for industry governance and possible nationalization.
  • Anthropic’s simultaneous legal and commercial highs/lows: rapid growth versus federal pushback and supply‑chain designation risk.
  • Prediction markets (Kalshi, Polymarket) taking wagers tied to strikes, leaders’ fates and other Iran‑war outcomes — ethical, legal, and security questions.
  • Hard Fork Review of Slop: examples, research and concerns about AI‑generated children’s content on YouTube/YouTube Kids.

Key points and takeaways

OpenAI — Pentagon deal and fallout

  • Sam Altman announced an agreement between OpenAI and the Pentagon that included prohibitions on domestic mass surveillance and autonomous weapons, red lines similar to those Anthropic had sought.
  • OpenAI released only “relevant portions” of the contract; experts warned the full contract is needed to evaluate scope and loopholes.
  • Sam Altman later described the initial disclosure as rushed ("slopportunistic") and said OpenAI would amend the contract language to explicitly bar deliberate tracking or monitoring of U.S. persons using commercially acquired personally identifiable information.
  • Many users canceled ChatGPT subscriptions and migrated to Claude; employee dissent at OpenAI surfaced publicly. Notable exit: Max Schwarzer (post‑training lead) left OpenAI and moved to Anthropic.
  • Observers worry about semantics and loopholes: the real-world effect (whether citizens are surveilled or models are used militarily) may hinge on definitions and legal interpretation rather than plain language alone.
  • Broader risk: data‑center and political backlash could escalate into stricter regulation or even “soft” nationalization of frontier AI capabilities.

Anthropic — growth amid government pressure

  • Anthropic has seen explosive enterprise adoption (Bloomberg reporting cited very rapid revenue growth), driven largely by Claude and its enterprise uptake.
  • Simultaneously, the company faces a formal Pentagon supply‑chain risk designation and ongoing pressure that could escalate (including possible invocation of the Defense Production Act).
  • Some federal agencies (e.g., State Department per reporting) have switched away from Anthropic models to older GPT versions due to presidential directives, illustrating political interference and potential capability regression in government use.

Nationalization and the political dynamics of frontier AI

  • The hosts debate the possibility of government oversight or seizure as AI systems grow strategically decisive: soft nationalization (heavy regulation, contractual control) is plausible, and outright takeover, while unlikely, is not ruled out.
  • Companies face tradeoffs: collaborating with government can reduce the risk of forced control but also invites political backlash and employee dissent.

Prediction markets and the Iran conflict

  • Prediction markets (Kalshi, Polymarket) became focal points when users wagered on Iran‑related outcomes (e.g., leader survival, timing of strikes).
  • Kalshi barred explicit war/assassination markets but allowed some proxy questions; Polymarket was more permissive (except for explicit nuclear‑detonation bets).
  • Concerns:
    • Moral/ethical: betting on death or strikes is widely seen as distasteful and corrosive.
    • Insider trading/security: prosecutions and arrests have already occurred (in Israel). Evidence showed hundreds of large bets that correctly predicted a strike, raising suspicions of insider information.
    • Regulatory complexity: the CFTC has authority over some firms (Kalshi), but enforcement capacity and applicable rules are limited; political alignment of regulators influences enforcement priorities.
  • Policy takeaway: regulating prediction markets tied to conflict should be addressed promptly (before platforms entrench large lobbying power), but passage and enforcement are politically fraught.

Hard Fork Review of Slop — AI‑generated kids’ content on YouTube

  • Reporters analyzed YouTube/YouTube Kids recommendations and found a substantial volume of AI‑generated short videos targeting toddlers: odd alphabet songs, animals morphing from goo/paint, injections that turn animals colorful, fruit beds, and violent or bizarre character scenarios (echoes of “Elsagate” from 2017).
  • In one 15‑minute session, roughly 40% of recommended Shorts were AI‑generated. Detection relied on visual/frame inconsistencies, distorted text, morphing objects, and posting patterns.
  • YouTube policy requires labeling of realistic‑looking synthetic media, but enforcement is inconsistent; many videos are not clearly labeled and the burden falls on creators/parents.
  • Child-development concerns:
    • Short, highly stimulating, disjointed clips may overload young children's attention systems and lack narrative structure that aids learning.
    • Content can be surreal, confusing, and potentially disturbing; parents and experts worry about cognitive and emotional impacts.
  • Practical parental steps: stricter supervision, curated playlists, avoid unsupervised YouTube for young kids; for platforms — better detection, labeling, and stricter moderation of AI‑generated children’s content.

Notable quotes / soundbites

  • "You're just going to have to trust us, and the public is saying, well, we don't." — on OpenAI/Pentagon trust deficit.
  • "Slopportunistic" — Sam Altman's own word for OpenAI's rushed messaging, called out by the hosts.
  • Anthropic: "Printing money" — describing explosive enterprise adoption and revenue acceleration.
  • Prediction markets: “People around Trump are profiting off war and death” — Sen. Chris Murphy’s condemnation and proposed legislation.

Concrete recommendations / action items

  • For parents:
    • Supervise children’s YouTube usage; prefer curated playlists and verified channels; consider limiting Shorts exposure for under‑5s.
    • Use YouTube Kids settings cautiously and be aware labels may be inconsistent.
  • For AI companies:
    • Be transparent about contracts and red lines when partnering with governments; anticipate employee and public backlash; prioritize clear policy language that’s hard to semantic‑game.
    • Evaluate long‑term political and reputational costs of defense contracts and data‑center expansion.
  • For regulators and lawmakers:
    • Assess legal gaps around prediction markets, insider trading tied to national security operations, and platform responsibilities for AI‑generated content targeting children. Act sooner rather than later while the sector is smaller.
  • For consumers:
    • If uneasy with company‑government ties, consider alternatives (e.g., Claude), but be aware that consumer cancellations may signal distrust without being commercially decisive.

Risks and open questions highlighted

  • Does the amended contract language meaningfully prevent domestic surveillance, or do semantics and loopholes leave risks?
  • Can regulators detect and stop insider trading on fast, global, crypto‑denominated prediction platforms?
  • Will the U.S. government move toward tighter control over frontier AI (soft nationalization), and what will that mean for innovation, safety, and civil liberties?
  • Can platforms like YouTube scale effective moderation and labeling for rapidly proliferating AI‑generated media aimed at children?

Bottom line

This episode frames an anxious moment where critical AI commercial growth, national security interests, new speculative markets, and rapidly proliferating synthetic media collide. The practical implications are immediate (cancellations, employee exits, legal fights, disturbing kids’ videos) and systemic (questions about oversight, norms, and the political power of tech companies and financialized platforms). The hosts urge vigilance: clearer contracts, faster policy responses, better platform moderation, and parental awareness are needed now.