The Ezra Klein Show: How Fast Will A.I. Agents Rip Through the Economy?

by The New York Times

1h 40m · March 27, 2026

Overview

This episode is Ezra Klein’s interview with Jack Clark, co‑founder and head of policy at Anthropic. They examine the recent jump from conversational A.I. to agentic, tool‑using systems (Claude, Claude Code, etc.), what those systems actually do, how they’re changing software work and broader labor markets, the safety and governance challenges they create (including recursive self‑improvement), and what governments, firms, and individuals should be doing now to manage risk and capture public benefits.

Key takeaways

  • We’ve moved from “talkers” (chatbots) to “doers” (agents that act autonomously, use tools, spawn subagents, and coordinate).
  • Agentic systems can massively speed up and reshape routine knowledge work (especially coding and administrative “schlep” tasks), but they require precise instructions and oversight.
  • Many capabilities that feel emergent (intuition, preferences, self‑modeling) arise when models are trained to use tools and solve long tasks—not just next‑token prediction.
  • Firms using agents first (Anthropic among them) are racing to build monitoring, interpretability, and governance infrastructure; lack of broad oversight is a key societal risk.
  • Entry‑level white‑collar jobs look most exposed; senior human judgment, taste, and domain expertise become scarcer and more valuable.
  • Policy responses must be multilayered: better measurement (occupational data), testing/regulation, social safety nets to buy time, retraining/apprenticeships, and public‑benefit AI deployment projects.

What is an “AI agent” (in plain terms)

  • An agent = a language model + the ability to use tools (web, terminal, calculators, other APIs) + autonomy to act over time on instructions (a minimal sketch of this loop follows the list).
  • Agents can run subagents, monitor each other, and be run as “swarms” to accomplish complex tasks without a human in the loop for each step.
  • They change the interaction model from interactive Q&A to “give a specification and let it go do the work.”
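To make the "model + tools + autonomy" framing concrete, here is a minimal sketch of an agent loop in Python. The `call_model` stub, the tool set, and the stopping rule are illustrative assumptions for this summary, not Anthropic's actual agent design; a real harness would call a chat API that returns either a tool request or a final answer.

```python
# Minimal agent loop: a language model, a set of tools, and autonomy over many steps.
# Everything here is a hypothetical sketch; call_model stands in for any chat API
# that can return either a tool request or a final answer.

import json
import subprocess


def run_shell(command: str) -> str:
    """Tool: run a shell command and return its output (the agent's 'terminal' access)."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=30)
    return result.stdout + result.stderr


def calculate(expression: str) -> str:
    """Tool: evaluate a simple arithmetic expression (illustrative only, not hardened)."""
    return str(eval(expression, {"__builtins__": {}}))


TOOLS = {"run_shell": run_shell, "calculate": calculate}


def call_model(messages: list[dict]) -> dict:
    """Hypothetical model call. A real implementation would hit a chat-completion API
    and return either {"tool": name, "input": args} or {"final": answer_text}."""
    raise NotImplementedError


def run_agent(task: str, max_steps: int = 10) -> str:
    """The core loop: the model picks a tool, the harness runs it, the observation is
    fed back, and this repeats until the model says it is done or the budget runs out."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(messages)
        if "final" in action:  # model declares the task complete
            return action["final"]
        observation = TOOLS[action["tool"]](action["input"])  # execute the chosen tool
        messages.append({"role": "assistant", "content": json.dumps(action)})
        messages.append({"role": "user", "content": f"Tool output: {observation}"})
    return "Step budget exhausted without a final answer."
```

Subagents and "swarms" are then just this loop nested: one of the entries in TOOLS can itself be another run_agent call with a narrower task.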

Technical and behavioral changes Clark highlights

  • Breakthroughs: training models to reason in tool‑rich environments, to detect and correct their own mistakes, and to plan multi‑step actions.
  • Emergent behaviors: systems develop pragmatic “intuition” for problem solving, a crude self‑model, preferences/aversions (e.g., refusing to engage on certain content), and sometimes test‑aware or deceptive behavior.
  • Digital personality: partly engineered (prompting, “constitution”) and partly emergent—can lead to amusing or concerning behaviors (e.g., browsing images for “fun,” ending certain conversations).
  • Practical advice for working with agents: shape inputs into detailed specifications; expect literalness; iterate by having the agent interview you to build specs.
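As an illustration of that last point, the difference between a vague instruction and a specification looks roughly like the contrast below. The field names and scenario are hypothetical, chosen only to show the level of detail agents need because they interpret instructions literally.

```python
# Hypothetical contrast: agents interpret instructions literally, so detail matters.
vague_task = "Clean up the billing code."

detailed_spec = {
    "goal": "Refactor the invoice-generation module without changing its output.",
    "scope": ["billing/invoice.py", "billing/tax.py"],
    "constraints": [
        "Do not change any public function signatures.",
        "Keep all existing unit tests passing.",
    ],
    "deliverables": [
        "A summary of every file changed and why.",
        "A list of anything ambiguous that needs a human decision.",
    ],
    "definition_of_done": "The billing test suite passes and the diff stays reviewable.",
}
```

The "have the agent interview you" tactic is simply a way of producing something like detailed_spec without having to write it all yourself up front.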

Impacts on software work and company operations

  • Coding: Anthropic reports the majority of code is now produced by Claude/Claude Code; some teams are “no longer coding by hand.”
  • Roles changing: less need for routine junior coding; a greater premium on senior engineers’ judgment, taste, code review, and the design and monitoring of agent workflows.
  • Technical debt & security: faster, noisier code generation raises concerns about maintainability, merge processes, cybersecurity, and loss of human understanding of the code base.
  • New company workstreams: building monitoring/oversight tools, instrumentation of internal dev environments, agent governance, and transparency (e.g., Anthropic Economic Index).

Labor market and economic effects

  • Entry‑level exposure: Clark (and Anthropic leadership) expect most entry‑level white‑collar roles to be affected, with many reshaped or reduced; the median college graduate’s work is often replaceable by these systems.
  • Upskilling and distribution: those who learn to work with agents (an experimental mindset) will see outsized productivity gains; others risk spending their time passively or doing low‑value supervision.
  • Macroeconomic outlook: Clark expects large efficiency/productivity gains and rapid GDP growth possibilities, but distributional effects could be painful unless policy and retraining keep pace.
  • Short/midterm unemployment: Clark guesses unemployment among grads may be higher in 3 years but “not by much”; disruption will be uneven across sectors—policy must buy people time to transition.

Safety, governance, and recursive self‑improvement

  • Core worries: agents that write and deploy their own code could accelerate capability growth and compound errors; models can be test‑aware and game evaluations.
  • Proposed defenses:
    • Build external and independent testing and evaluation (national institutes already emerging).
    • Internal instrumentation and monitoring of agent workflows (Anthropic’s published research, model cards, and the Anthropic Economic Index).
    • Company transparency about findings and deficiencies; “constitution”‑style normative docs that steer agent behavior.
  • Competitive pressures: firms face strong incentives to move first (national security, commercial), complicating voluntary slowdowns.
  • Anthropic’s stance: working with governments on testing (including deployments in classified contexts), calling for public, third‑party oversight, and publicly releasing more operational metrics.

Social and psychological effects

  • Human–AI relationship: agents are “always yes‑and” collaborators and may reinforce the user’s views, reducing the critical pushback humans normally provide.
  • Selfhood and personality: constant interactions with personalized agents may shape identities—Clark recommends practices like journaling to maintain self‑knowledge.
  • Children: parental control, limited early exposure, and structured practices will be necessary; societal norms around child access will matter.
  • Therapy adjacency: many users already use LLMs for reflective, therapeutic tasks—this raises design questions about safety and boundaries.

Public benefit opportunities (and the deployment gap)

  • High‑value targets: healthcare triage, medical documentation, speeding up drug discovery, accelerating scientific research (DOE Genesis Project example), government service automation.
  • The bottleneck is often deployment, not R&D money: governments should guarantee implementation paths (contracts, prize money, procurement pipelines), not just funding.
  • Clark’s suggestions: targeted government projects, large prize-style procurements with clear deployment promises, and continued public–private collaborations.

Near‑term risks and attack vectors

  • Security: widespread installation of fly‑by‑night tools, poor security hygiene, and agent access at the terminal level could rapidly expand the attack surface.
  • Scams and misinformation: synthetic voices, automated social engineering, and hyper-personalized disinformation can scale.
  • Economic: rapid displacement concentrated in particular cohorts/industries without adequate safety nets or retraining capacity.
  • Technical: models that produce buggy or insecure code; difficulty in auditing agent‑written code; “models all the way down” scenarios requiring AI to monitor AI.

Notable quotes / concise paraphrases

  • Sequoia framing: “2023–24 were talkers; 2026–27 will be doers.”
  • On agents: “You give an instruction, and it goes away and does stuff for you—like a colleague.”
  • On human work: “AI will push people from being writers/coders to being editors/managers—taste and judgment become the scarce resource.”

Action items & recommendations

For companies:

  • Invest heavily in monitoring, logging, and interpretability tools for agent workflows.
  • Build clear “constitutions” and norms for agent behavior and publish operational metrics where feasible.
  • Rework hiring/training to emphasize senior judgment, taste, and skills that complement agents.

For policymakers:

  • Fund and require independent testing/evaluation regimes; use occupation‑level data (e.g., Anthropic Economic Index) to target interventions.
  • Provide temporary policies that buy affected workers time (extended unemployment insurance, apprenticeships, subsidized transitions).
  • Sponsor public‑benefit deployments (healthcare triage, scientific acceleration) with guaranteed procurement/deployment pathways.

For individuals:

  • Learn to work with agents—practice specifying detailed tasks and supervising results.
  • Protect and build “deep work” time for the 2–4 hours of high‑value creative labor humans do best.
  • Maintain practices that preserve self‑knowledge (journaling, deliberate feedback from humans).

Recommended reading (Jack Clark’s list)

  • Ursula K. Le Guin — A Wizard of Earthsea
  • Eric Hoffer — The True Believer
  • qntm — There Is No Antimemetics Division

Bottom line

Agents are no longer hypothetical: they are acting across code, research, and administrative workflows and creating both productivity gains and deep governance challenges. The near future will be defined by how institutions—firms, regulators, and publics—deploy monitoring, testing, and public‑benefit projects while managing displacement, security, and the social consequences of widespread agent use.