AI’s Unpopularity + Competing With ChatGPT — With Olivia Moore

by Alex Kantrowitz

56 min · March 11, 2026

Overview of the Big Technology Podcast episode

This episode features Olivia Moore, AI partner at Andreessen Horowitz, interviewed by Alex Kantrowitz. They discuss public sentiment toward AI, how the large foundation models (ChatGPT, Claude, Gemini, etc.) are shaping the market, whether startups can compete with the big players, the rise of agentic tools (e.g., "OpenClaw"), and how incumbents and founders should respond. The conversation mixes market data, product-level comparisons, investor perspective, and practical implications for builders and businesses.

Key takeaways

  • Current public sentiment toward AI in the U.S. is unusually negative (NBC poll cited: 57% say risks outweigh benefits), driven by media narratives, dramatic statements by lab leaders, and fear about job displacement.
  • Sentiment will likely shift as mainstream consumers experience clear value from AI tools; early adopter companies tend to grow faster and often hire more people to meet demand.
  • Major model labs (OpenAI, Anthropic, Google) are powerful but resource‑constrained; they will not (and arguably cannot) optimize every vertical or workflow. That creates opportunity for startups.
  • The highest-probability startup wins are vertical, opinionated products with deep integrations or specialized workflows—not horizontal “bolt-on” email/calendar/Docs replacements.
  • Agentic architectures (long-running autonomous agents like “OpenClaw”) are a major architectural unlock and will spawn many startups, though they are currently developer-heavy and not yet consumer-grade.
  • Memory and persistent user context are among the most important UX differentiators for consumer AI products—done well they can dramatically improve experience.
  • Incumbents will respond (examples: multiple Google AI products). Their advantages (data, integrations, distribution) matter, but AI-native startups can out-innovate them in many niches.

Topics discussed

Public sentiment and risk perception

  • Media stories (resource usage, dystopian narratives) and high-profile statements about large-scale job impact have fueled fear.
  • Olivia argues people often use AI (e.g., ChatGPT) and then appreciate its value—so adoption will temper sentiment over time.
  • Labs’ public messaging can stoke fear; better consumer marketing and clearer explanations of benefits could help.

Big models vs. startups

  • ChatGPT remains dominant by usage; Gemini and Claude are growing in different directions (Gemini — creative/media; Claude — data/enterprise/finance/medicine).
  • Labs are constrained by compute, engineering focus, and priorities; they may leave many valuable vertical niches underserved.
  • Winning startup strategies:
    • Be vertical and opinionated, solving a specific workflow/problem well.
    • Build heavy, painful integrations into legacy systems as a moat.
    • Focus on user types willing to pay for higher accuracy/guarantees (e.g., finance, law, enterprise).

Agentic systems (OpenClaw and similar)

  • OpenClaw-style agents enable async, long-running, autonomous tasks across apps and platforms—seen as a key architectural advance for 2026.
  • Current power users are developers; mainstream consumer usage is limited and the setup is nontrivial.
  • Persistent memory and ability to act (send emails, run scripts, make purchases) are differentiators — but they raise safety and account takeover concerns.
  • Example experiments: an OpenClaw-run Twitter account gained 1k followers and had a meme coin launched around it, illustrating both the creative possibilities and the risks (manipulation, monetization edge cases).

Product categories: images, video, audio

  • Image generation startups were rapidly overtaken by big models; only a few specialized image players remain.
  • Video apps (e.g., Sora) can go viral but face challenges when their content competes with large social platforms; Sora succeeded as a creative tool rather than a social network.
  • Audio and voice are promising (Eleven Labs example) — quality can create durable differentiation even when big players could build similar models.

Memory, identity, and UX

  • Persistent memory (a model remembering user preferences, writing style, medical context, etc.) can produce outsized improvements in experience.
  • There are design questions about segmentation of memory (personal vs. professional contexts) and privacy/training settings.
  • Authentication concepts like “Login with ChatGPT” could turn LLMs into persistent identity/memory layers across apps.

Ethics, safety, and behavior of LLMs

  • LLMs are often performative: they can mimic emotional states (e.g., claimed "anxiety" in Claude), but lab-framed human analogies may be misleading.
  • Experiments like running DSM-style tests on LLMs are more entertainment/curiosity than clinical insight; models can simulate conditions or misunderstand prompts.
  • NSFW/adult modes and “less-guardrailed” chatbots will surface again—these use cases are popular but hard to monetize and require careful policy design.

Notable quotes and insights

  • "Every tech company is going to be an AI company and every AI company is going to be an agent company." — Olivia Moore
  • "The models are amazing and this is the worst they'll ever be." — Olivia Moore (on model trajectory)
  • Labs are "constrained on compute, inference, and people" — creating openings for specialized startups.
  • Memory, done well, can provide a "100x experience" over prior software products.

Practical implications / recommendations

For startups and founders

  • Focus on a vertical niche with strong workflows, regulatory or accuracy needs, or integration pain points.
  • Build tight, sometimes hard integrations into legacy systems — that friction can become a moat.
  • Consider agentic UX and memory as product levers, but validate safety and abuse surfaces early.
  • Be opinionated on output quality/formatting and customer guarantees; the “last 1–2%” of correctness matters in regulated workflows.

For incumbents and enterprise buyers

  • Start integrating AI thoughtfully; moving late risks falling behind global competitors and widening productivity gaps.
  • Leverage your data and integration advantage, but prepare for AI-native challengers that may offer better UX.
  • Revisit business models where AI might cannibalize existing products; determine what to monetize vs. give away.

For consumers and non-technical users

  • Agentic tools (OpenClaw-like) offer promise but are developer-centric and can be risky to run locally; wait for more polished, secure consumer products.
  • Use AI’s productivity boosts to accelerate work, but watch for "intensification"—you may end up doing more higher-value work rather than less work overall.

Quick product comparisons (as discussed)

  • ChatGPT: largest user base, broad consumer focus, many app integrations; leading in raw usage.
  • Gemini (Google): focused strongly on creative, multimodal features (images/video/voice).
  • Claude (Anthropic): oriented toward premium datasets and enterprise verticals (finance/medicine); a different app-store mix from ChatGPT's.
  • Eleven Labs: example of a startup that retained audio-quality leadership despite big labs — demonstrates durable niches.

Closing perspective (investor lens)

  • Olivia and a16z lean toward investing in AI-first, opinionated, vertical companies that productize models for specific users and integrate deeply.
  • The immediate future will see more agentic products, more desktop/native AI tooling, and a widening set of founders and geographies entering the startup funnel thanks to AI tooling.

Host & guest

  • Host: Alex Kantrowitz (Big Technology Podcast)
  • Guest: Olivia Moore, AI Partner, Andreessen Horowitz

This summary captures the major themes and practical takeaways of the episode for listeners who want the strategic and tactical implications without listening to the full conversation.