Talking to ChatGPT drains energy. These other things are worse.

October 6, 2025

The Washington Post (Post Reports) — Host: Colby Itkowitz; Guest: Michael J. Coren (Climate Coach)

Overview

This episode examines how much energy AI chatbots (like ChatGPT) actually use, how that compares with other digital and everyday activities, and what to watch for as AI spreads. The guest, Michael J. Coren, explains the measurement challenges, recent per-query energy estimates, efficiency trends, and the risks of scale, and offers practical advice for consumers.


Key points & main takeaways

  • Early alarmist estimates overstated AI’s per-query energy/water usage; more recent measurements are much lower.
  • Typical inference (a single AI query) is now estimated at roughly 0.24–0.3 watt‑hours, about enough to power an LED bulb for a couple of minutes (see the back-of-envelope check after this list).
  • Training large models is energy‑intensive (hundreds of megawatt‑hours), but inference (answering your question) is relatively modest.
  • Not all models are equal: large, general models require more compute; smaller/specialized models can be far more efficient and are increasingly viable for many tasks.
  • Efficiency is improving rapidly, but that can increase usage (the Jevons Paradox): cheaper/more efficient AI may lead to greatly expanded adoption and overall higher consumption.
  • AI’s share of U.S. electricity could grow from ~3% today to roughly 8% by 2030 — significant in aggregate but still small compared with some everyday activities.
  • The bigger contributors to an individual's carbon footprint remain commuting/transport, home heating/cooling, and diet (especially beef). Digital activities like streaming video and keeping screens on also generally use more energy than occasional AI queries.
  • Data centers can stress local grids and water supplies and potentially raise utility costs in affected regions if not planned responsibly.
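
For scale, here is a minimal back-of-envelope check of the per-query figures above. The 0.24–0.3 Wh estimates come from the episode; the 9 W LED bulb is our assumption of a typical bulb, not a figure from the episode.

```python
# Sanity check: how long does one AI query's energy run an LED bulb?
# Assumption (not from the episode): a typical LED bulb draws about 9 W.

QUERY_WH_LOW, QUERY_WH_HIGH = 0.24, 0.30  # Wh per query (Google / Epoch AI figures)
LED_BULB_W = 9.0                          # watts, assumed typical LED bulb

for wh in (QUERY_WH_LOW, QUERY_WH_HIGH):
    minutes = wh / LED_BULB_W * 60        # minutes of bulb time per query
    print(f"{wh} Wh per query = {minutes:.1f} min of a {LED_BULB_W:.0f} W LED bulb")

# Output: 0.24 Wh = 1.6 min, 0.30 Wh = 2.0 min, i.e. "a couple of minutes," as stated.
```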

Notable quotes & insights

  • Opening framing: "The company behind ChatGPT has told its users that being polite to their AI chatbot is expensive." (refers to the claim that extra words increase compute cost)
  • Measurement figures:
    • “A typical AI query now consumes about 0.3 watt hours.” — Epoch AI estimate
    • Google's Gemini median text response ≈ 0.24 watt‑hours (figure confirmed by Google)
  • On scale risk: “As coal-burning steam engines became more efficient, we didn't use less coal. We used more coal.” — analogy for the Jevons Paradox applied to AI.
  • On relative impact: “You would have to [run] search queries for about several thousand years to match the emissions it takes for the average American to get to and from work every year.” — emphasizes the commute's outsized footprint (a rough sanity check follows this list).
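
A rough sanity check of that commute comparison is below. Only the 0.3 Wh per-query figure comes from the episode; the grid carbon intensity, annual commute emissions, and daily query count are illustrative assumptions.

```python
# Rough sanity check of the "several thousand years" commute comparison.
# All inputs except QUERY_WH are illustrative assumptions, not episode figures.

QUERY_WH = 0.3                  # Wh per AI query (episode's cited estimate)
GRID_KG_CO2_PER_KWH = 0.4       # assumed average U.S. grid carbon intensity
COMMUTE_KG_CO2_PER_YEAR = 3000  # assumed annual emissions of an average car commute
QUERIES_PER_DAY = 15            # assumed daily AI usage

kg_per_query = QUERY_WH / 1000 * GRID_KG_CO2_PER_KWH     # ~0.00012 kg CO2 per query
queries_needed = COMMUTE_KG_CO2_PER_YEAR / kg_per_query  # ~25 million queries
years = queries_needed / (QUERIES_PER_DAY * 365)

print(f"{kg_per_query * 1000:.2f} g CO2 per query")
print(f"~{queries_needed / 1e6:.0f} million queries = one year of commuting")
print(f"~{years:,.0f} years at {QUERIES_PER_DAY} queries/day")
# ~4,600 years: consistent with the episode's "several thousand years" framing.
```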

Topics discussed

  • How the internet and AI consume electricity and water (GPUs, data centers, cooling)
  • Methods for estimating per-query and model training energy use
  • Differences between training vs inference energy intensity
  • Large language models vs small/specialized models (efficiency tradeoffs)
  • Corporate efforts to improve model and operational efficiency
  • The Jevons Paradox (efficiency leading to increased total consumption)
  • Projected growth of AI electricity demand (3% → ~8% of U.S. electricity by 2030)
  • Comparisons with other digital and real-world emissions (streaming, storage, commuting, diet)
  • Societal and infrastructure implications: data center siting, water use, grid stability, utility costs

Action items & recommendations

For consumers:

  • Don’t panic about occasional AI queries — single chat questions are low-impact compared with commuting, home energy, and diet.
  • Use appropriate tools: prefer simple search/smaller models for trivial queries; reserve large models for tasks that need them.
  • When available, prefer providers or models that publish efficiency data or emphasize low-carbon operations.

For companies and policymakers:

  • Increase transparency: publish clearer data on energy use and emissions from training and inference.
  • Invest in model and operational efficiency (smaller models for specialized tasks, timing workloads to match grid conditions, etc.; see the scheduling sketch after this list).
  • Plan data center siting and power provision to avoid local grid stress and water-resource impacts.
  • Monitor and manage rebound effects (encourage efficiencies that do not simply enable unlimited consumption).
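
To illustrate the "timing workloads to match grid conditions" recommendation above, here is a minimal carbon-aware scheduling sketch. The fetch_carbon_intensity function and the 200 gCO2/kWh threshold are hypothetical placeholders, not a real provider API; an actual deployment would query a regional grid operator or a commercial grid-data source.

```python
import time

# Hypothetical sketch: defer flexible work (e.g., a training run) until the
# grid is relatively clean. fetch_carbon_intensity() is a placeholder, not a
# real API; wire it to a regional grid operator or commercial data source.

LOW_CARBON_THRESHOLD = 200.0  # gCO2/kWh, assumed cutoff for "clean enough"

def fetch_carbon_intensity() -> float:
    """Placeholder: return the grid's current carbon intensity in gCO2/kWh."""
    raise NotImplementedError("connect a real grid-intensity data source")

def run_when_grid_is_clean(job, poll_seconds: int = 900) -> None:
    """Poll grid intensity and run the deferred job once it drops below the cutoff."""
    while fetch_carbon_intensity() > LOW_CARBON_THRESHOLD:
        time.sleep(poll_seconds)  # re-check every 15 minutes
    job()
```

The same pattern extends to siting choices: flexible batch jobs can also be routed to regions where low-carbon power is plentiful.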

Personal priorities (for biggest emission reductions):

  • Focus on commuting/transport choices, home heating/cooling efficiency, and diet (e.g., reducing beef consumption); these matter far more than the occasional ChatGPT query.