Overview of Hard Fork (The New York Times)
This episode of Hard Fork (hosts Kevin Roose and Casey Newton) covers three main stories: how AI is being used in the war in Iran; new research on “AI brain fry,” a form of cognitive fatigue tied to oversight-heavy AI workflows; and a consumer-ethics story about Grammarly’s now-disabled “Expert Review” feature, which attached the names of real writers and experts to editing suggestions without their consent.
Key takeaways
- AI is actively integrated into military intelligence and mission planning in the Iran conflict; Anthropic’s Claude is reportedly the only model currently deployed in classified systems there.
- Military AI use is mostly augmentative (data processing, target identification, planning), but there are real concerns about human judgment being effectively outsourced as systems become more persuasive.
- Attacks in the region are targeting AI-relevant infrastructure (data centers, fiber routes), revealing how geopolitical conflict can disrupt global AI supply chains.
- “AI brain fry” is an empirically observed phenomenon: mental fatigue from heavy oversight and multi-tool AI workflows. It is distinct from burnout.
- Grammarly’s Expert Review used named experts as purported “inspiration” without consulting or compensating them; the feature was disabled after public exposure and backlash.
Segment summaries
1) A.I. Goes to War — how AI is being used in the Iran conflict
- What’s happening: U.S. and Israeli militaries are using AI to process massive streams of data (drone footage, traffic cameras, intercepts) into real-time dashboards and prioritized targets. This reduces the “haystack” intelligence problem.
- Key players: Palantir’s Maven Smart System, integrated with Anthropic’s Claude, has reportedly suggested hundreds of targets and compressed planning cycles from weeks to real-time operations.
- The Claude angle: Claude is reportedly the only model currently embedded in classified systems in this conflict; Anthropic has been designated by the Pentagon as a supply-chain risk and has sued over that designation.
- Limits and risks:
- Officially, humans remain “in the loop,” but experts warn that advisory systems can progressively shift decision weight from humans to models.
- Civilian casualty incidents (e.g., a strike on a school in Iran) raise questions about whether AI contributed to targeting errors; investigations are ongoing.
- Offensive/defensive dynamics: Iran has struck commercial data centers in the UAE and Bahrain, targeting AI infrastructure where possible, an asymmetric tactic with large civilian impact (banking, taxis, web services).
- Broader fallout: disruptions to undersea cables, shipping (Strait of Hormuz), and supply chains (semiconductor materials) could stall global AI rollouts and raise costs.
- Bigger picture: earlier corporate principles against military use of AI have been weakened; major AI companies have removed or relaxed prohibitions on military applications amid market and policy pressures.
2) Is “A.I. Brain Fry” real? — BCG research with Julie Bedard
- Study details: the BCG Henderson Institute surveyed 1,488 workers (Jan 2026) across roles and industries about AI use, cognitive state, and work outcomes.
- Definition: “AI brain fry” — mental fatigue from excessive use/oversight of AI tools beyond one’s cognitive capacity.
- Findings:
- 14% of AI users reported experiencing AI brain fry.
- Brain fry is associated with high oversight demands and work intensification (more multitasking, information overload).
- Brain fry is distinct from burnout (no direct correlation); in some cases AI reduced burnout when used to remove repetitive tasks.
- “Three-tool cliff”: stress begins to outweigh productivity gains once users juggle several separate AI tools; using four or more correlates with noticeably more cognitive strain.
- Marketing and highly disrupted roles report more brain fry; managers and compliance roles report less (possible reasons: oversight experience, type of tasks).
- Mitigations and recommendations:
- Individuals: acknowledge the risk, clarify outcomes (not just output), limit tool sprawl, set thresholds for “done.”
- Teams/managers: engage in open dialogue, integrate AI into shared workflows, protect cognitive load, measure outcomes rather than raw output.
- Organizations: redesign workflows and define “AI fluency” to include cognitive health and oversight practices; involve frontline workers in redesign (historical parallels to automation-era labor responses).
- Caveats: the research is early-stage; the consultants’ incentives were disclosed, and the lead researcher emphasized the study’s data-driven intent and her field experience.
3) How Grammarly “stole” identities — Casey Newton’s experience
- The issue: Grammarly’s “Expert Review” feature (later rebranded under the Superhuman name) presented named experts and authors as “voices” or inspiration for its editing suggestions. Many of the people named (journalists, authors, critics) were neither consulted nor compensated.
- Casey Newton’s findings:
- He and many high-profile writers appeared as selectable “experts” in the feature; suggestions were generic, low-quality, and often misrepresented what those experts would actually advise.
- Example output was bland and sometimes nonsensical, raising product-quality and honesty issues.
- Legal/ethical fallout:
- At least one named journalist (Julia Angwin) filed a class-action lawsuit seeking to stop Grammarly from attributing words or voices to people who were never involved.
- After public reporting, Grammarly disabled Expert Review and announced a reimagining of the feature to give experts control; public pressure drove the change.
- Broader point: many smaller SaaS/purpose-built products risk obsolescence or ethical pitfalls when they rely on scraped content or claim human inspiration without consent or revenue sharing. Free frontier models (ChatGPT, Claude, Gemini) can replicate or outperform such features, further pressuring legacy startups.
Notable insights and quotes
- “It’s basically a coin flip” — on readers preferring AI-written passages over human-written ones roughly half the time (listeners reacted angrily to that finding).
- “Shrinking the haystacks” — AI’s practical military value right now: filtering huge amounts of mostly-useless data to find actionable items.
- “Frog being boiled” — concern that AI’s militarization is normalizing progressive delegation of consequential decisions to models.
- BCG: “AI brain fry” is cognitive strain tied to oversight and intensified multi-tool workflows — distinct from classic burnout.
Practical recommendations (who should act and how)
- For policymakers and military leaders:
- Increase transparency about AI roles in targeting and mission-planning; codify human control standards.
- Protect critical AI infrastructure (data centers, subsea cables); consider diversifying and hardening supply chains.
- Revisit procurement and risk assessments for third-party models in classified systems.
- For company leaders and managers:
- Define outcomes (not just output) for AI use; limit the proliferation of overlapping AI tools.
- Create team-level AI workflows and open communication channels to reduce single-person cognitive bottlenecks.
- Build AI fluency programs that include cognitive load management and governance, not only technical skills.
- For workers:
- Acknowledge and name brain fry; talk to managers about boundaries and expectations.
- Consolidate tools where possible; focus prompts and set “done” criteria to prevent endless iteration.
- Use AI for repetitive tasks to reclaim time for higher-skill or social work that reduces burnout risk.
- For consumers and creators:
- Be skeptical of services that display or monetize other people’s identities without consent.
- Prefer transparent AI features that disclose provenance and compensate creators when their work is used.
Context and implications
- The Iran conflict is a real-world stress test for how AI integrates into high-stakes systems. Short-term effects include better signal extraction and faster operations; medium/long-term risks include erosion of human oversight, escalation dynamics, and vulnerable global infrastructure.
- Workplace AI adoption is not uniformly beneficial: organizational design, managerial practice, and ecosystem tooling profoundly affect whether AI reduces drudgery or creates cognitive overload.
- Product ethics matter: the Grammarly episode shows rapid consumer backlash can force change, but many problematic AI practices (unauthorized use of creators’ work, hallucinated attributions) remain widespread.
If you want a one-line summary: AI is already reshaping warfighting and workplaces in practical ways — with real benefits and real harms — while the commercial AI ecosystem is still figuring out the ethics, governance, and business models that should accompany these capabilities.
