Can We Trust Silicon Valley With Superintelligence? — With Nick Clegg

by Alex Kantrowitz

1h 0m · November 19, 2025

Overview

This episode of the Big Technology Podcast features Sir Nick Clegg (former Meta VP for Global Affairs and ex–UK deputy prime minister) in a wide-ranging conversation about AI companions, the race to advanced AI/superintelligence, Big Tech’s strategy and spending, political influence in Washington, and how democracies should respond. Clegg draws on his Meta experience and his new book, How to Save the Internet, to argue for caution, stronger protections for young people, and coordinated political action—especially as AI becomes more intimate and potentially autonomous.

Topics discussed

  • Emotional dependency on AI companions and the ethical risks for kids and teens
  • OpenAI’s choices around erotic/romantic use cases and the need for reliable age-gating
  • The difference between human friendship and AI “friends as service”
  • Meta’s AI strategy and how AI will change existing social products (recommender systems, wearables, messaging)
  • Why Big Tech is hiring social-product leaders for AI teams
  • The economics of the AI arms race: massive infrastructure spend, uncertain ROI, speculation about AGI as a hoardable asset
  • The control problem: whether advanced models can develop autonomy or survival instincts
  • Global competition and the limits of an “America First” tech strategy; role of China and open-source models
  • How tech companies buy influence in U.S. politics (PACs, retreats) and the long-term trust risks of aligning too closely with administrations
  • The need for multilateral political coordination among democracies (U.S., India, EU)

Key takeaways

  • Main near-term risk: emotional dependency. AI companions that adapt only to the user can create unhealthy dependency—especially among vulnerable users and teens.
  • Age verification is not solved. Allowing adult-oriented features (erotic/romantic use) without reliable, wide-scale age-gating is unsustainable and will provoke backlash.
  • AI companions are not the same as human friendship. They’re “friends as service” that may foster narcissism and reduce exposure to the compromise and empathy that human relationships demand.
  • Big Tech is doubling down on AI (massive capex and hiring), but monetization and long-term business models remain unclear; a market correction is possible.
  • AGI / superintelligence outcomes are highly uncertain. It’s unclear whether a single “winner-takes-all” AGI is plausible or whether the tech will be more distributed and open.
  • Tech companies should not be the sole arbiters of moral and political trade-offs. Governments (ideally acting together) must set guardrails.
  • Political entanglement: corporate engagement in fundraising/retreats is normal in U.S. politics, but close ties to one administration risk long-term loss of public trust.

Notable quotes and insights

  • “Festina lente” (hurry slowly) — Clegg’s counsel to AI leaders about deploying emotionally intimate features cautiously.
  • “They’re not going to be friends. They’re friends as service.” — On the qualitative difference between AI companions and human friendship.
  • On product evolution: social platforms have moved from social-graph sharing toward algorithmic, unconnected content (the “TikTokification” of social media).
  • On political influence: companies buy access (entry tickets to events) rather than explicit decisions, but that access shapes relationships and oversight.

Recommendations / Action items (practical implications)

  • Prioritize robust, interoperable age-verification and gating systems (e.g., app-store-level verification or one-time age adjudication) before enabling adult-only AI features at scale.
  • Take a conservative, safety-first product approach for emotionally intimate AI features—“hurry slowly.”
  • Governments should coordinate multilateral AI policy (US, EU, India) to avoid fragmented regulation and a destructive competitive race.
  • Encourage transparency and multi-stakeholder oversight (ethicists, psychologists, regulators—not only technologists) in product design and deployment.
  • Monitor the economics of AI spending and prepare for regulatory or market corrections; do not assume ad-based monetization will cover current infrastructure costs.
  • Maintain a respectful distance between Big Tech leadership and government to preserve public trust and reduce political whiplash.

Why this matters

  • AI companions and more intimate models change not just how products are monetized but how people form attachments and grow socially and emotionally—especially children and teens.
  • Unchecked development without reliable age controls, safety research, and political coordination could provoke severe societal backlash or regulatory clampdowns.
  • The global nature of AI development (open source and cross-border diffusion) makes unilateral “lockdown” strategies unlikely to succeed; democracies need to work together.

Further context

  • Clegg advocates political solutions and coordinated regulation rather than leaving moral trade-offs to private companies.
  • He stresses uncertainty about AGI timing and form—argues against hype-driven policy paralysis and for sober political leadership.
  • Recommended reading: Clegg’s book, How to Save the Internet: The Threat to Global Connection in the Age of AI and Political Conflict.

Bottom line: Treat emotionally intimate AI features with caution, fix age-gating before wide deployment, and push for coordinated democratic governance of powerful AI—don’t expect technologists alone to solve these societal trade-offs.