Sir Tim Berners-Lee doesn’t think AI will destroy the web

by The Verge

November 10, 2025

Overview (Decoder — The Verge)

This episode of Decoder (hosted by Nilay Patel) is a wide-ranging conversation with Sir Tim Berners‑Lee — inventor of the World Wide Web and founder of the Solid project — about the state and future of the web. They cover centralization vs. openness, the rise of agentic/generative AI and “AI browsers,” the partial success of the semantic web, data ownership and “data wallets,” regulatory tension (EU vs. US), and practical technical and market paths toward greater user sovereignty on the web.

Key topics discussed

  • The shift from open web publishing to closed platforms and apps (YouTube, TikTok, app‑centric publishing).
  • Market forces and network effects that produce dominant platforms and browser engines (Chrome/Chromium, WebKit).
  • The semantic web: partial successes (linked open data, schema.org) and AI as a new route to machine‑readability.
  • Generative AI as both an enabler (machines can extract structure from unstructured pages) and a threat (data extraction without consent, disintermediation).
  • Agentic/AI browsers and the “DoorDash problem” (AIs disintermediating merchants and draining ad/commerce revenue).
  • Data sovereignty and architectures for personal control: Solid protocols, local‑first data storage, and “data wallets.”
  • Power shifts in infrastructure (e.g., Cloudflare blocking AI crawlers / proposing paid “content signals”) and whether such centralized control is compatible with the web’s ideals.
  • The potential role of a coordinated international effort (a “CERN for AI”) and regulatory divergence between Europe and the U.S.

Main takeaways

  • Tim Berners‑Lee remains optimistic: the web’s core ideals — openness and peer‑to‑peer agency — are worth defending and can be rebuilt in modern contexts.
  • AI will accelerate a kind of semantic web because models can extract structured data from unstructured sources, but that often happens as extraction without consent or compensation.
  • Personal AIs that “work for you” (with secure access to your personal data wallet) are crucial; they are most useful when they can combine public web signals with private, user‑controlled data.
  • Local‑first architectures and data wallets are the preferred endpoint: the user keeps control of their data and grants selective access to agents (a sketch of this access pattern follows this list).
  • Market incentives alone have not sufficed to deliver broad user control over data; a mixture of standards, interoperable implementations, commercial incentives and regulation will be needed.
  • Browser and engine competition matters: Chromium’s dominance is worrying; more engine diversity (and letting multiple engines run on mobile platforms) would spur innovation and protect the web.
  • Centralized gatekeepers (platforms, Cloudflare, large AI firms) have accumulated disproportionate leverage; that is undesirable for the web’s health and requires both technical and policy remedies.
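
To make “selective access” concrete, the sketch below shows the pattern in TypeScript. The DataWallet and AccessGrant interfaces and the planDinner agent are hypothetical illustrations, not Solid, Inrupt, or any real SDK; the point is that the agent receives a scoped, revocable grant the user approved, rather than raw access to everything.

```typescript
// Hypothetical data-wallet interfaces illustrating scoped, user-approved access.
// None of these names come from Solid or a real SDK; they sketch the pattern only.

interface AccessGrant {
  scope: string[];                       // data the user agreed to share
  expiresAt: Date;                       // grants are time-limited and revocable
  read(path: string): Promise<unknown>;  // reads are limited to the granted scope
}

interface DataWallet {
  // The wallet prompts the user and returns a grant only for the approved scope.
  requestAccess(agentId: string, scope: string[], reason: string): Promise<AccessGrant>;
}

// A personal agent combines private, user-controlled data with public web signals.
async function planDinner(wallet: DataWallet): Promise<string> {
  const grant = await wallet.requestAccess(
    "dinner-agent",
    ["dietary-preferences", "calendar"],
    "Suggest a restaurant that fits your diet and schedule"
  );
  const diet = await grant.read("dietary-preferences"); // private
  const freeSlots = await grant.read("calendar");       // private
  // ...combine with public restaurant listings from the open web...
  return `Suggestion based on ${JSON.stringify(diet)} and ${JSON.stringify(freeSlots)}`;
}
```

The design point is that consent lives in the grant object itself: expiry or revocation cuts the agent off without requiring any cooperation from the agent.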

Notable insights & quotes

  • “That feeling of sovereignty as an individual being enabled and being a peer with all the other people on the web — that is what we are still fighting for and what we need to rebuild.”
  • On AI: “We need an AI that works for you… that has access to your personal data in your data wallet.”
  • On semantic web vs. AI: “The semantic web has succeeded in linked open data and schema.org, but AI will now do the conversion of non‑semantic data into semantic data.”
  • On centralized infrastructure: “A centralized monopoly is not good for the web.”

Tim’s current work (practical projects mentioned)

  • Solid / Inrupt: protocols and implementations that give users personal data stores (data wallets) and let applications interoperate without central extraction of user data (a minimal read sketch follows this list).
  • Charlie: an example of an AI assistant that can use a user’s personal data wallet to act on the user’s behalf (agentic assistance that respects user interests).
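
For flavor, here is a minimal read sketch using Inrupt’s open-source @inrupt/solid-client library. The pod URL is a placeholder; reading private data would additionally require an authenticated fetch (e.g., from @inrupt/solid-client-authn-browser).

```typescript
import { getSolidDataset, getThing, getStringNoLocale } from "@inrupt/solid-client";
import { FOAF } from "@inrupt/vocab-common-rdf";

// Read the display name from a public Solid profile document.
async function readProfileName(profileDoc: string): Promise<string | null> {
  const dataset = await getSolidDataset(profileDoc);      // fetch the RDF document
  const profile = getThing(dataset, `${profileDoc}#me`);  // the WebID subject
  return profile ? getStringNoLocale(profile, FOAF.name) : null;
}

// Placeholder pod URL, for illustration only.
readProfileName("https://example.solidcommunity.net/profile/card")
  .then((name) => console.log(name ?? "no name published"));
```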

Specific problems highlighted

  • Extraction vs. persuasion: early semantic‑web efforts relied on persuading data owners to publish structured data. Now AI can extract data regardless of intent or consent.
  • Economic disintermediation: Agentic AIs could remove user interactions that drive ad revenue and commerce flows (search clicks, site visits), undermining the web’s business ecosystems.
  • Infrastructure leverage: providers like Cloudflare can block or permit AI crawlers and introduce paid signals, creating new chokepoints (a crawler-side consent check is sketched after this list).
  • Browser engine monopoly: Chromium dominance reduces diversity and centralizes control over web capabilities and standards on both desktop and mobile.
  • User convenience vs. privacy/control: People repeatedly trade privacy for convenience, making market shifts toward user‑controlled local storage difficult without better UX/incentives.
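
On the infrastructure-leverage item above, this is roughly the consent check an AI crawler could run before fetching a page: a minimal TypeScript sketch that honors only the simple User-agent/Disallow form of robots.txt. The function and parameter names are illustrative, and real consent mechanisms (including proposed paid content signals) go well beyond this.

```typescript
// Minimal crawler-side robots.txt check (simple User-agent/Disallow form only).
async function mayCrawl(siteOrigin: string, botName: string, path: string): Promise<boolean> {
  const res = await fetch(new URL("/robots.txt", siteOrigin));
  if (!res.ok) return true; // no robots.txt is conventionally treated as "allowed"

  let applies = false; // whether the current rule group covers this bot
  for (const raw of (await res.text()).split("\n")) {
    const [field, ...rest] = raw.trim().split(":");
    const value = rest.join(":").trim();
    if (/^user-agent$/i.test(field)) {
      applies = value === "*" || value.toLowerCase() === botName.toLowerCase();
    } else if (applies && /^disallow$/i.test(field)) {
      if (value !== "" && path.startsWith(value)) return false;
    }
  }
  return true;
}

// Example: check before fetching an article path.
mayCrawl("https://example.com", "ExampleBot", "/articles/1").then(console.log);
```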

What Tim thinks should happen (recommendations / direction)

  • Build and adopt interoperable standards (Solid, schema, linked data) that let users control and move their personal data.
  • Push towards local‑first storage and agents running under user control (local inference where feasible).
  • Encourage browser engine competition and allow alternative engines to run on major platforms (e.g., iPhone) to revive web innovation on mobile.
  • Create economic models and incentives for websites and services to opt into interoperable, AI‑friendly but consent‑respecting protocols (payments, metadata, data wallets).
  • Combine market incentives with regulation (especially where market forces alone fail to protect sovereignty and fairness).
  • Consider international cooperative structures for AI safety and interoperability (the “CERN for AI” idea), even if politically difficult.

Actionable items for developers, publishers and policymakers

  • Developers: experiment with Solid-compatible personal data stores and local‑first app patterns; think about interoperable APIs and explicit consent flows for agent access.
  • Publishers & platforms: explore metadata and schema.org to participate constructively in machine consumption (see the JSON-LD sketch after this list); consider business models that reward publishers for accessible, structured content.
  • Product teams building AIs/agents: design agents to “serve the user’s interests” by default, including safe access to the user’s personal data wallet rather than wholesale hoovering of web data.
  • Policymakers: assess rules that protect data owners’ rights to deny extraction and require transparency/interoperability; balance rapid innovation with protections for competition and user control.
  • Advocates/standards bodies: promote open protocols, enforceable robot/consent mechanisms, and work to decentralize choke points (e.g., prevent single entities from unilaterally dictating access).
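
As a concrete example of the schema.org recommendation above, the sketch below builds the kind of JSON-LD block a publisher can embed in a page (inside a <script type="application/ld+json"> tag) so machines consume article metadata on the publisher’s terms. All field values are placeholders.

```typescript
// JSON-LD structured data (schema.org vocabulary) describing an article.
// Placeholder values throughout; any schema.org type can be described this way.
const articleMetadata = {
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  headline: "Sir Tim Berners-Lee doesn’t think AI will destroy the web",
  datePublished: "2025-11-10",
  author: { "@type": "Organization", name: "The Verge" },
  isAccessibleForFree: true, // a real schema.org property publishers can set
};

// Serialized form, ready to drop into <script type="application/ld+json">…</script>.
console.log(JSON.stringify(articleMetadata, null, 2));
```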

Bottom line

Berners‑Lee sees both danger and opportunity in the AI era. AI can finally deliver parts of the semantic web vision, but without attention to consent, local control, interoperability, and competition, it risks amplifying extraction and centralization. The remedy he favors combines technology (protocols and local data), governance (standards and industry coordination), and politics (regulation where markets fail), all aimed at restoring greater digital sovereignty to individual users.