987 - May I Meet You? feat. Ed Zitron (11/17/25)

Summary of 987 - May I Meet You? feat. Ed Zitron (11/17/25)

by Chapo Trap House

1h 17m · November 18, 2025

Overview of 987 - May I Meet You? (feat. Ed Zitron — 11/17/25)

This episode of Chapo Trap House features journalist Ed Zitron discussing a deep, skeptical read of the economics, infrastructure, and cultural hype around OpenAI and the broader AI/data‑center boom. Zitron lays out why OpenAI’s operating costs — especially inference — are ballooning, why the corporate deals and data‑center pledges look fragile or misreported, and why the whole AI funding frenzy resembles a cargo‑cult repeat of past tech bubbles. The conversation mixes technical explanation, financial detail, industry gossip, and cultural critique — and finishes with a comic aside about Bill Ackman’s dating advice (“May I meet you?”).

Key takeaways

  • OpenAI’s operating costs (OPEX) are being consumed largely by inference (the compute used to generate model outputs). Zitron reports extremely high inference spending and warns revenue is being swallowed by those costs.
  • Microsoft takes ~20% of OpenAI’s revenue under their deal; Zitron cites ~$4.33B in revenue share paid to Microsoft through Q3 and questions whether OpenAI can reach its publicly stated revenue projections on its current path.
  • “Reasoning” models (chain‑of‑thought, multi‑step processing) drastically increase per‑query compute — meaning more powerful models give diminishing returns while driving exponential operational cost growth.
  • Major hardware and data‑center deals (NVIDIA, AMD, Broadcom, Oracle, CoreWeave, etc.) are complex, contingent, and in many cases economically risky or misreported; some deals hinge on building huge gigawatt‑scale capacity that may be infeasible.
  • The AI/GPU build‑out boom is largely benefiting GPU vendors (NVIDIA) and construction/data‑center firms; many startups and LLM companies lack clear, sustainable monetization.
  • Zitron frames the current phase as a “cargo‑cult” repeat of past tech booms: lots of money and hype without a proven business model that scales profitably.

Technical explainer: inference, reasoning, and why costs are rising

  • Inference: the compute used when a model generates an output (what users see when they ask ChatGPT a question). Zitron emphasizes inference — not training — is now the major ongoing cost.
  • Reasoning / test‑time compute: newer model designs prioritize multi‑step “reasoning” where the model breaks tasks into subtasks. This increases tokens and math operations per query and multiplies compute requirements.
  • Practically: more compute per query → higher electricity, GPU‑time, and latency costs. Free and paid users both drive this cost; “power users” can cost the provider far more than their subscription revenue justifies (a rough cost sketch follows this list).
  • Hardware expectations: custom chips (Broadcom, AMD, custom silicon) were promised as a path to cheaper inference, but gains appear modest; more efficient chips don’t necessarily reduce absolute power draw enough to solve the economics.
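
To make the per‑query economics concrete, here is a minimal Python sketch (not from the episode) of how hidden “reasoning” tokens multiply inference cost. The prices and token counts are illustrative assumptions, not OpenAI’s actual figures.

    # Illustrative only: per-query inference cost with and without hidden
    # "reasoning" (chain-of-thought) tokens. Prices and token counts are
    # assumptions for the sketch, not real OpenAI figures.
    PRICE_PER_INPUT_TOKEN = 2.50 / 1_000_000    # assumed $/token for input
    PRICE_PER_OUTPUT_TOKEN = 10.00 / 1_000_000  # assumed $/token for output

    def query_cost(input_tokens, visible_output_tokens, hidden_reasoning_tokens=0):
        # Hidden chain-of-thought tokens are generated (and paid for) like
        # output tokens even though the user never sees them.
        generated = visible_output_tokens + hidden_reasoning_tokens
        return input_tokens * PRICE_PER_INPUT_TOKEN + generated * PRICE_PER_OUTPUT_TOKEN

    plain = query_cost(1_000, 500)
    reasoning = query_cost(1_000, 500, hidden_reasoning_tokens=8_000)
    print(f"plain query:     ${plain:.4f}")
    print(f"reasoning query: ${reasoning:.4f} (~{reasoning / plain:.0f}x)")

Multiplied across hundreds of millions of free and paid queries, that per‑query gap is the cost dynamic described above.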

Financials, deals, and infrastructure (what Zitron reported)

  • Inference spend: Zitron cites very large inference spending by OpenAI since the start of 2024, later putting the figure at roughly $12.4B.
  • Microsoft: reportedly receives ~20% of OpenAI revenue; Zitron cites ~$4.329B paid to Microsoft through Q3 and questions whether OpenAI can hit its quoted annual revenue targets without huge Q4 growth (a back‑of‑envelope version of this arithmetic follows this list).
  • Big announced/rumored commitments:
    • NVIDIA: various reporting cites figures around “10 gigawatts” of IT load; Zitron stresses that gigawatt‑scale data centers are enormous (comparable in power draw to large cities) and practically difficult to deliver quickly.
    • AMD/Broadcom/Oracle: multi‑tranche deals that tie financing or equity options to building successive gigawatts and stock performance; Zitron calls the terms unusual and risky, and notes IP/sharing implications (e.g., Microsoft reportedly has access to Broadcom chip details through OpenAI’s contracts).
    • Oracle: large debt and off‑balance‑sheet commitments to build data centers for OpenAI; Zitron argues Oracle has mortgaged a lot of its future capacity on a company that may not be able to pay.
  • CoreWeave / cloud providers: many LLM companies run on other companies’ infrastructure and pay huge AWS/CoreWeave bills (Anthropic paid $2.66B to AWS through Q3 2025, per Zitron’s reporting).
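
As a sanity check on the Microsoft figures above, the revenue share can be inverted to estimate OpenAI’s implied revenue. This is simple arithmetic on the numbers Zitron cites, not an audited figure.

    # Back-of-envelope on the cited revenue-share figures (illustrative).
    microsoft_share_rate = 0.20   # ~20% of OpenAI revenue, as reported
    paid_to_microsoft = 4.329e9   # ~$4.329B through Q3, per Zitron

    implied_revenue_through_q3 = paid_to_microsoft / microsoft_share_rate
    print(f"Implied OpenAI revenue through Q3: ~${implied_revenue_through_q3 / 1e9:.1f}B")
    # -> roughly $21.6B, one way to gauge how much Q4 growth any larger
    #    publicly floated annual target would require.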

Industry implications and macro risks

  • Business model problem: unlike past tech, which got cheaper to serve as usage grew, LLMs often become more expensive with increased usage because of per‑query compute intensity (a toy margin comparison follows this list).
  • Concentration risk: NVIDIA’s CUDA ecosystem and GPU dominance make it a near‑monopoly supplier for AI acceleration; that creates market concentration and single‑vendor dependency.
  • Bubble risks: Zitron argues the AI boom looks like a speculative construction/asset bubble (massive GPU orders, data‑center projects) that could create a painful impairment event when hardware value and demand reprice downward.
  • Too‑big‑to‑fail? Zitron thinks OpenAI isn’t “too big to fail” economically: its failure would be symbolically damaging to the AI myth rather than systemically threatening, and a government bailout is politically and practically unlikely.
  • Knock‑on effects: a large correction would hurt VCs, limited partners, startups, and valuations across tech; it could permanently change how investors value growth in big tech.
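
The “gets more expensive with usage” point can be seen in a toy margin model; the subscription price and per‑query costs below are made‑up assumptions, chosen only to show the shape of the problem.

    # Toy contrast (made-up numbers): near-zero-marginal-cost software vs. an
    # LLM subscription where every query carries real inference cost.
    SUBSCRIPTION = 20.00          # assumed $/user/month
    CLASSIC_COST_PER_USE = 0.001  # assumed near-zero marginal cost per use
    LLM_COST_PER_QUERY = 0.05     # assumed blended inference cost per query

    def monthly_margin(uses, marginal_cost):
        return SUBSCRIPTION - uses * marginal_cost

    for uses in (50, 200, 1_000):
        print(f"{uses:>5} uses/mo | classic: ${monthly_margin(uses, CLASSIC_COST_PER_USE):7.2f}"
              f" | LLM: ${monthly_margin(uses, LLM_COST_PER_QUERY):7.2f}")
    # Heavy "power users" push the LLM column negative long before the classic
    # one moves, which is the unit-economics problem described above.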

Infrastructure, environmental, and local economic notes

  • Power and cooling: “IT load” refers to the power drawn by the computing hardware itself, so a 1 GW IT load implies even more total grid capacity once cooling and overhead are included (a rough scale check follows this list); Zitron highlights that the scale of planned data centers (tens of GW) is unrealistic in the short term due to transformer, fuel, and skilled‑labor shortages.
  • Jobs: data centers create relatively few local, long‑term jobs; construction booms can be transient and rely on specialized contractors flown in.
  • Water and other resource use: concerns exist but Zitron emphasizes electricity is the real limiting factor; many planned builds may never be operable because the grid capacity doesn’t exist.
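
For scale, the IT‑load‑versus‑grid point can be sketched with typical industry ballparks; the PUE and household figures below are generic assumptions, not numbers from the episode.

    # Rough scale check (assumptions labeled): what 1 GW of IT load implies.
    IT_LOAD_GW = 1.0
    PUE = 1.3                   # assumed overhead for cooling, power conversion, etc.
    AVG_US_HOUSEHOLD_KW = 1.2   # ~10,500 kWh/year average draw, as an assumption

    facility_draw_gw = IT_LOAD_GW * PUE
    homes_equivalent = facility_draw_gw * 1e6 / AVG_US_HOUSEHOLD_KW  # GW -> kW

    print(f"Grid draw for 1 GW IT load: ~{facility_draw_gw:.1f} GW")
    print(f"Comparable to the average draw of ~{homes_equivalent / 1e6:.1f}M US homes")
    # Scale this by the tens of gigawatts being pledged, and the transformer,
    # generation, and skilled-labor constraints Zitron describes become concrete.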

Cultural critique and media/political angle

  • “AI” as marketing: the panel agrees the term “AI” functions as a marketing label that gets attached to many products regardless of actual utility.
  • Tech hero worship: Zitron criticizes the cult around figures like Sam Altman and the reflex to accept grand visions without scrutiny; media and analysts have been complicit by not demanding business‑model proof early on.
  • User experience / social response: many people find LLM outputs unimpressive, unreliable, or offensive; yet boosters interpret user difficulty as user failure (“you’re not using it right”), which breeds resentment.
  • Anecdote: a lighter segment at the end riffing on Bill Ackman’s “May I meet you?” advice for meeting people, used as a comedic closer.

Notable quotes (paraphrased)

  • “Inference is eating all of their revenue.” — Zitron on per‑query costs.
  • “The more the models ‘think,’ the more compute each query requires.” — technical summary of reasoning models.
  • “OpenAI is not too big to fail — it’s too small to pull apart into enough pieces for people to eat.” — on bailout dynamics and symbolic risk.
  • “AI is a marketing term.” — characterization of hype vs. product reality.
  • “NVIDIA is basically the single player in this market.” — on CUDA/GPU concentration.

What listeners should watch for / recommended attention points

  • Track inference and OPEX trends (public filings, vendor revenue shares) rather than just headline user or valuation numbers.
  • Monitor GPU orders and secondary market pricing — a drop in GPU value could force impairment charges at many companies.
  • Scrutinize announced multi‑billion data‑center deals for contingencies (tranche triggers, power commitments, financing).
  • Don't confuse media hype or product demos with durable monetization — ask: who pays what, and does it cover marginal (inference) costs?
  • Consider concentration exposure: companies with heavy NVIDIA + hyperscaler dependency are in a fragile position if demand or prices shift.

Where to follow Ed Zitron

  • Website: betteroffline.com
  • Social: Ed Zitron on Bluesky and Twitter (username: Ed Zitron)
  • Newsletter: wheresyoured.at (subscribe for paid/premium content)