Why people really hate AI

Summary of Why people really hate AI

by The Verge

1h 45m · March 20, 2026

Overview of Why people really hate AI (The Vergecast)

This episode of The Vergecast (hosts David Pierce and Nilay Patel) examines a growing cultural and commercial problem: mainstream distrust and dislike of AI. The conversation is sparked by an internal OpenAI memo (Fidji Simo) urging a pivot to enterprise use cases, and ranges across polls showing negative public sentiment, VC and founder messaging strategies, concrete product controversies (NVIDIA’s DLSS 5, Samsung’s Z Trifold), debunked viral AI claims, and policy/First Amendment concerns tied to FCC rhetoric. The hosts argue the core issue is simple: AI companies are asking for huge resources and permissions without delivering consumer products people genuinely love.

Main arguments & framing

  • AI firms have not yet produced a clear, widely loved consumer product comparable to the internet, smartphones, YouTube, or Instagram.
  • Public sentiment is more negative than positive; this hurts adoption and makes it harder to justify the industry’s demands (data, compute, copyright access, data centers).
  • The industry’s messaging mistakes — from apocalyptic “jobless future” pitches to blaming media and consumers — have backfired.
  • Companies are reacting by pivoting to enterprise/B2B (where monetization is clearer), but that won’t solve the broader reputational problem.
  • Policymakers and platform gatekeepers (e.g., FCC rhetoric) are creating chilling effects on journalism and speech when they mix regulatory threat with political aims.

Evidence & data cited

  • NBC News poll: 57% believe AI’s risks outweigh benefits vs. 34% who say the opposite.
  • Pew study: 53% think AI will worsen creative thinking (16% say it will improve); 50% think AI will worsen meaningful relationships (5% say it will improve).
  • Anecdotal reports from tech executives: Gen Z is especially skeptical or hostile toward AI.

Industry responses and narratives discussed

  • OpenAI (Fidji Simo memo): push toward enterprise and coding use cases; “acting as if it’s a code red.”
  • VC/Founder messaging:
    • Two tactics: (1) Claim media/consumers are misled and adoption will come once products reach scale; (2) Use alarmist “doomer” scenarios (AGI/jobless future) to raise huge sums.
    • Both tactics are criticized: environmental/energy arguments don’t change consumer behavior; doomerism can raise money but damages trust and public support.
  • Microsoft/Satya Nadella: calls for “social permission” — AI must demonstrably improve health, education, public sector efficiency, or business competitiveness.
  • Result: companies risk losing social license if they can’t show tangible public benefits.

Problems with current AI products (user experience & ethics)

  • Consumer products are brittle and inconsistent (examples: incorrect AI answers, factual errors).
  • Monetization struggles: ChatGPT has broad adoption, but at scale its costs far exceed revenue (subscriptions, ads, and commerce haven’t solved this).
  • Resource/externality demands: huge energy and hardware requirements (data centers, GPUs, RAM) create public friction.
  • Copyright and data-collection practices: industry often relies on scraped data, provoking legal and ethical backlash.
  • Aesthetic and agency concerns: examples like NVIDIA DLSS 5 show the risk of platform-level aesthetic imposition (the graphics card altering an artist’s intended look).

Notable episodes, controversies & examples covered

  • OpenAI memo (Fidji Simo): company pivot to enterprise; criticism that OpenAI has been scattered with “side quests.”
  • NVIDIA DLSS 5: an upscaling/AI filter that sparked backlash because it can override developers’ artistic choices (memes, “yassification” of characters); Jensen Huang’s defensive response at GTC was seen as tone-deaf.
  • Samsung Galaxy Z Trifold: Allison Johnson’s odd hands-on — a suspicious eBay purchase, a phone likely intended for the Chinese market, a Trump Mobile SIM, and Samsung’s cancellation of the product; raises questions about foldable viability and why Samsung ghosted reviewers.
  • Foldables market: high prices, heavy devices, and no clear killer consumer use case yet; an Apple foldable (if it ever ships) could be decisive.
  • Meta / Metaverse: mixed strategy (shutdowns, reversals, layoffs), Supernatural VR fitness acquisition and community fallout; Meta’s brand/trust deficits persist.
  • Threats to media/First Amendment: FCC chair Brendan Carr’s public comments tying broadcast license scrutiny to coverage of the Iran war; concerns about chilling effects and politicized enforcement.
  • Viral AI hype debunked:
    • “Fly uploaded to a computer” — overstated; researchers and reporters debunked the claim, noting the simulation is not equivalent to an uploaded animal.
    • “ChatGPT cured a dog’s cancer” — debunked: human researchers and concurrent therapies were the real factors; ChatGPT did not design a cure.

Notable quotes & lines

  • Fidji Simo (OpenAI): “We are very much acting as if it’s a code red.” (all-hands)
  • Satya Nadella (Davos): “We will quickly lose even the social permission to actually take something like energy… if these tokens are not improving health outcomes, education outcomes…”
  • Hosts’ synthesis: “The industry is asking for so much and they haven’t delivered a product people love.”

Key takeaways

  • Social permission matters: without clear consumer value or demonstrable public benefits, AI will face sustained opposition.
  • Messaging and tone are strategic: alarmist or blame-shifting narratives (doom, “you were lied to,” media blame) erode trust.
  • Monetization remains unresolved for many consumer AI services; enterprise adoption is the current safer path for profitability.
  • Platform- or hardware-level AI changes (e.g., DLSS 5) that override creators’ choices provoke disproportionate backlash.
  • Journalistic rigor is crucial to cut through hype: many viral AI claims are unverified or false; careful reporting matters.

Actionable recommendations (for different audiences)

  • For AI companies:
    • Prioritize consumer value: build one or two killer experiences that people genuinely want and will pay for.
    • Be transparent about resource use and data practices; show concrete public benefits (health, education, government efficiency).
    • Avoid sensationalist or fatalistic messaging aimed solely at fundraising.
  • For policymakers:
    • Seek clear, narrowly tailored rules that preserve press freedom and avoid chilling effects.
    • Reward demonstrable public benefits for high-energy/high-data projects (social license criteria).
  • For journalists & newsrooms:
    • Maintain rigorous verification standards; debunk viral hype quickly and clearly.
    • Report both technical strengths and the real-world limitations of AI products.
  • For consumers:
    • Evaluate AI tools on real utility and privacy trade-offs, not just hype headlines.
    • Demand transparency: how was a model trained, what data was used, and what are the costs?

Episode structure / other segments mentioned

  • Hiring plug: Vergecast producer & Decoder supervising producer roles.
  • Interview tease: Allison Johnson on Galaxy Z Trifold.
  • Lightning round: DLSS 5, debunks of “uploaded fly” and “ChatGPT cured dog’s cancer,” Meta/Metaverse updates, FCC/Brendan Carr free-speech critique.
  • Plugged shows: Decoder episodes and Version History podcast.

Final synthesis

The episode argues the core reason “people really hate AI” right now is not a single scandal or a few bad headlines — it’s a systemic mismatch between what AI companies are asking for (compute, data, legal leeway) and what they have given in return (reliable, delightful, money-making consumer products). Until companies produce clear, everyday value or convincingly demonstrate public-good outcomes, public skepticism will persist, and political/regulatory backlash will grow. Journalists and careful reporting have an outsized role in separating hype from reality and highlighting where AI actually improves lives.