AI Backlash Intensifies, Nvidia GTC Preview, Meta’s Embarrassing Delay

by Alex Kantrowitz

1h 1m · March 13, 2026

Overview of Big Technology Podcast (Friday edition)

Alex Kantrowitz and guest Ranjan Roy (Margins) unpack a worsening public backlash to AI, preview NVIDIA CEO Jensen Huang’s messaging ahead of GTC, and examine setbacks inside major companies using or building generative AI — notably Amazon, McKinsey, and Meta. The conversation connects polling data, PR/personality dynamics, operational failures, and the strategic implications for the AI industry.

Key topics discussed

  • Why public sentiment toward AI is souring (Sam Altman soundbite and fallout)
  • Polling and usage data showing non-users are far more negative than users
  • Jensen Huang’s “five-layer cake” blog and GTC as an industry messaging moment
  • Recent operational problems tied to generative AI (Amazon outages; McKinsey red-team breach)
  • Meta’s delayed foundational model (codename “Avocado”) and the possibility the company might license Google’s Gemini
  • How reputational, political, and infrastructure issues (data centers, energy) could shape AI’s technical and business trajectory

Polling & public sentiment

Key data points cited

  • NBC News poll: 50% of voters say AI’s risks outweigh the benefits.
  • AI usage: 74% of white‑collar workers and 50% of blue‑collar workers have used AI tools.
  • YouGov: Three times as many Americans expect AI effects to be mostly/entirely negative vs mostly/entirely positive; 62% of people who’ve seen but not used AI expect negative effects.
  • Pew: Public sees data centers more negatively than positively for environment, local energy costs, and local quality of life.

Patterns

  • People who use AI regularly are notably less negative than those who only observe it.
  • Much of the public discussion centers on chat‑style LLMs (ChatGPT/Gemini/Claude), not the “hidden” AI baked into many products.
  • Negative perception is amplified by high‑profile spokespeople and worrying headlines (job loss predictions, data center expansion, corporate secrecy).

Causes of the backlash (as discussed)

  • Messaging and spokespeople: Statements taken out of context (e.g., Altman’s “intelligence as a utility”) and a roster of polarizing tech leaders feed distrust.
  • Economic concerns: Fears about job disruption, monetization of public data/sources, and unequal distribution of profits.
  • Safety and reliability: Hallucinations, security vulnerabilities, and real outages undermine confidence.
  • Infrastructure impacts: Data centers’ environmental and local community externalities create tangible opposition.

NVIDIA and GTC preview

  • Jensen Huang published a blog framing AI as a “five‑layer cake” (energy → chips → infrastructure → models → applications), emphasizing jobs and broader economic benefit.
  • Huang’s message aims to reframe AI as a job‑creating, productivity‑enhancing industry that will require vast infrastructure and skilled labor (electricians, technicians, operators).
  • Hosts suggest this is a PR opportunity: show practical, relatable benefits (e.g., AI freeing time for leisure/family) and put “friendly” faces forward to counteract distrust.

Corporate incidents & safety concerns

  • Amazon: The company held internal meetings after outages tied to aggressive use of GenAI coding assistants. Rapid, mandated adoption without adequate guardrails created high‑blast‑radius failures.
  • McKinsey: Red team (Codewell) reportedly got full read/write access to an internal AI platform in two hours, exposing millions of chats, files, users, and prompts. Illustrates unresolved prompt‑injection and agent security risks.
  • Takeaway: Enterprises need structured rollout, training, and hardened security for agents and LLM-based workflows.

Meta: Avocado delay and strategic questions

  • Report: Meta’s foundational model (codename “Avocado”) underperformed rivals (OpenAI, Google Gemini, Anthropic) on internal tests; rollout delayed.
  • Worse: Meta leaders reportedly discussed temporarily licensing Google’s Gemini to power products.
  • Possible drivers: Shift in modeling techniques (more RLHF/reinforcement training), culture/integration issues after big hires, and difficulty catching up on task‑oriented capabilities.
  • Hosts: Don’t count Meta out — its user base and distribution could turn a single breakthrough into a rapid comeback, but Zuckerberg is unlikely to become the “friendly face” for AI.

Implications & likely outcomes

  • Short term: Increased political scrutiny, localized pushback against data center projects, and pressure on firms to demonstrate safety/benefit.
  • Medium term: Could incentivize:
    • More compute‑efficient approaches, smaller models, and open‑source innovation if large data center expansions face resistance.
    • Rapid growth in new job categories (security, prompt/agent ops, infrastructure technicians).
    • Consolidation of model leaders (OpenAI/Google/Anthropic) while other players pivot to advantages in distribution or vertical applications.
  • Long arc: If AI becomes integrated into everyday products behind the scenes, public perceptions may soften — but only if companies address tangibles (safety, jobs, fairness, compensation for training sources).

Actionable recommendations (from the discussion)

  • Communications: Put relatable, positive human stories front-and-center (how AI improves quality of life), and diversify messengers away from polarizing executives.
  • Enterprise rollouts: Avoid blanket mandates; highlight internal champions, showcase best-practice examples, incentivize safe/efficient deployments.
  • Security: Prioritize red‑teaming, guardrails against prompt injection, strict data access controls for agentic systems.
  • Policy & transparency: Be clearer about training data sources, compensation/credit mechanisms, environmental impacts, and local community engagement for infrastructure buildouts.
  • Innovation path: Embrace compute efficiency and open approaches if public opposition stalls large data center buildouts — that may drive healthier technical diversity.

Notable quotes & framing

  • Sam Altman’s line about “intelligence as a utility” sparked outsized backlash; hosts argue the idea of consumption‑based pricing (like electricity) is not inherently sinister but was poorly messaged.
  • “Culture eats inference strategy for breakfast” — a paraphrase capturing the view that organizational culture and integration matter as much as raw model talent or compute.
  • Repeated refrain: People who use AI tend to be more positive; exposure reduces fear (if the experience is beneficial and reliable).

Episode logistics / calls-to-action mentioned

  • Guest: Ranjan Roy (Margins). Host: Alex Kantrowitz.
  • Upcoming guest teased: Andrew Ross Sorkin to discuss AI labor, private credit, SpaceX IPO.
  • Sponsors/ad reads: Nespresso, Utah Valley University, Notion (custom agents), Serval, Shopify, Red Circle, American Psychiatric Association Foundation.

If you want the quick takeaway: AI is at a reputational and operational inflection point. Technical progress continues, but trust, security, messaging, and the environmental/infrastructural realities will shape which companies and models thrive — and whether the public accepts AI as a net benefit.