Why OpenAI Killed Sora, Did Apple Just Save Siri?, Meta’s Big Loss


by Alex Kantrowitz

1h 3m · March 28, 2026

Overview of Big Technology Podcast (host: Alex Kantrowitz)

This episode (Friday edition) covers three major stories: OpenAI shelving its Sora video product, Apple’s plan to let third‑party AI assistants integrate with Siri (iOS 27), and a landmark court loss for Meta/YouTube over youth harm from social media. The hosts (Alex and Ranjan Roy) add context about shifting AI product priorities, new model releases from OpenAI and Anthropic, broader industry implications (competition, trust, regulation), and the demise of OpenAI’s planned erotic “adult mode.”

Key topics discussed

  • OpenAI winds down Sora (consumer video app, developer API, and ChatGPT video functionality).
  • Technical and strategic reasons behind deprioritizing video: divergence between “world models” (physics, video) vs GPT-style models (text + image + tool use), and compute allocation ahead of an expected IPO.
  • The emerging central prize: agentic/autonomous assistants (the OpenClaw concept: persistent assistants that control a VM or Mac mini, access your data, and take actions).
  • New model pipelines: Anthropic’s leaked “Claude Mythos / Capybara” (step‑change claims) and OpenAI’s internal codename “Spud” (next major model).
  • Apple to allow rival AI assistants to integrate with Siri in iOS 27 (skepticism about real user experience and monetization through App Store subscriptions).
  • Legal risk for social platforms: a California court found Meta and YouTube negligent in a youth-harm case, opening doors to more litigation and Section 230 boundary tests.
  • OpenAI scraps its erotic “adult mode” for ChatGPT amid internal and investor concern.
  • Market reaction: tech stocks had a rough week—concerns about AI spending, margins, and legal/regulatory risk.

Summary & main takeaways

  • Why Sora was killed

    • OpenAI concluded video generation (Sora) sits on a different “tech tree” (world models) than the GPT line. Pursuing both paths is compute‑intensive and distracts from the GPT/agentic advances that OpenAI now prioritizes ahead of its expected IPO.
    • Video is still valuable (enterprise and consumer use cases exist), but video generation requires different architecture and investment; OpenAI chose to double down on GPT-style models and agentic capabilities.
  • The strategic pivot: agentic assistants as the prize

    • The race is consolidating around assistants that can access your data, act on your behalf, and persist over time (the “OpenClaw” or autonomous knowledge work idea). OpenAI, Anthropic, and many SaaS players (Notion, Sierra, Writer, etc.) are competing for this use case.
    • This is not just enterprise vs consumer — agentic assistants blur those lines (e.g., personal health or negotiating with insurers is consumer-facing but action-oriented).
  • New model arms race

    • Anthropic’s leaked model (Mythos / Capybara) reportedly shows large capability gains (coding, reasoning, cybersecurity) and raises security concerns.
    • OpenAI says its next major model (codename Spud) is near completion and expected to be a sizable improvement.
    • Expect compounding, incremental improvements that feel exponential over time rather than single revolutionary jumps.
  • Siri and Apple Intelligence

    • Apple will let third‑party AI chatbots integrate more tightly with Siri and Apple Intelligence in iOS 27. Hosts are skeptical this will meaningfully “save” Siri—likely more of an integration/monetization move (App Store cuts for subscriptions) than a deep assistant overhaul.
    • WWDC may show only limited public detail; Apple’s long-term assistant vision remains to be seen.
  • Legal and financial pressures on social platforms

    • A California court found Meta and YouTube liable for harm to a young user, awarding damages. The ruling challenges the protective scope of Section 230 by focusing on design and algorithmic recommendation as potential causes of personal injury.
    • This precedent could encourage more suits and impose real margin pressure if upheld—potentially constraining AI/AR spending and product investment.
  • Responsible content & erotic chatbots

    • OpenAI shelved an “adult mode” for ChatGPT amid concerns about trust, minors, and reputation. Debate: should base-model providers be legally liable for how third parties deploy their APIs? The hosts argue for strong terms of service and protections for minors; legal liability remains a complex open question.

Notable quotes / insights

  • Greg Brockman (paraphrased via Alex): Sora’s video models are a different branch of the tech tree than the reasoning GPT series — pursuing both is hard; focus matters.
  • “Image generation is on the same tech tree as GPT style tech; video is not.” — implication: image tools stay relevant; video is more specialized.
  • “Always-on, connected to your data, and able to take action” — the three foundations of the agentic assistant vision.
  • On legal risk: “The court found that platforms can be liable for the way they design systems (algorithms), not just for user content.”

Practical implications / recommendations

  • For product leaders and startups:

    • Reassess product roadmaps: betting on agentic assistants and tool-using GPT-style stacks is now a mainstream strategic priority.
    • If building generative features, decide whether to rely on third‑party foundation models or invest in customized ones (many companies are going hybrid).
    • If implementing agentic capabilities, plan strong privacy, safety, and trust controls (sandboxing, separate VMs, transparent permissions).
  • For developers and researchers:

    • Track Anthropic and OpenAI model releases closely — capability leaps can alter tooling, benchmarks, and attack surfaces (cybersecurity implications).
    • Consider the technical differences between video/world models and GPT-style stacks when choosing compute and architecture.
  • For investors and legal teams:

    • Monitor litigation trends around algorithmic design and youth harm—regulatory and liability risk could affect valuations and margins of social platforms.
    • Watch R&D spending vs. near‑term monetization: big AI infrastructure bets (Meta, Microsoft, OpenAI partnerships) face scrutiny if legal/regulatory pressures rise.
  • For consumers / early adopters:

    • Expect incremental trust-building: agentic assistants will be adopted gradually as users see reliable outcomes and safety guardrails.
    • Be cautious about granting broad access to email, calendar, and desktops until proven safeguards and reversal controls are standard.
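The “transparent permissions” recommendation above can be made concrete with a minimal sketch. This is a hypothetical illustration, not from any real assistant SDK; the `PermissionBroker` and `run_tool` names are invented. The idea is simply that an agentic assistant refuses by default and only performs actions the user has explicitly granted.

```python
# Hypothetical sketch of a permission-gated tool call for an agentic
# assistant: deny by default, act only on explicitly granted scopes.
from dataclasses import dataclass, field


@dataclass
class PermissionBroker:
    """Tracks which action scopes the user has explicitly granted."""
    granted: set = field(default_factory=set)

    def grant(self, action: str) -> None:
        self.granted.add(action)

    def check(self, action: str) -> bool:
        return action in self.granted


def run_tool(broker: PermissionBroker, action: str, payload: str) -> str:
    """Refuse any action the user has not opted into."""
    if not broker.check(action):
        return f"BLOCKED: '{action}' requires explicit user permission"
    return f"OK: performed '{action}' with {payload!r}"


if __name__ == "__main__":
    broker = PermissionBroker()
    broker.grant("calendar.read")  # user opted in to calendar reads only

    print(run_tool(broker, "calendar.read", "next 7 days"))   # allowed
    print(run_tool(broker, "email.send", "draft to insurer"))  # denied by default
```

In practice this gate would sit alongside the sandboxing the hosts mention (separate VMs, reversible actions), so a blocked call fails safely rather than silently escalating.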

Things to watch next

  • Full Greg Brockman interview (promised next Wednesday) for deeper OpenAI strategy details.
  • Public release/announcements for Anthropic’s “Mythos/Capybara” and OpenAI’s “Spud” (capability claims and safety/security disclosures).
  • Apple WWDC (iOS 27) for Apple Intelligence / Siri rollout specifics and how third‑party assistant integration is implemented.
  • Appeals and higher‑court rulings on the Meta/YouTube youth‑harm case (possible Section 230 boundary setting).
  • Market reactions and quarterly reports showing how legal/regulatory pressure affects AI spending and margins.

Bottom line

OpenAI’s Sora shutdown signals a bet: double down on GPT-style reasoning, tool use, and agentic assistants, and defer the separate world-model architectures needed for video. The industry-wide focus is shifting toward assistants that act on users’ behalf, while new model releases from Anthropic and OpenAI promise another round of capability gains. Meanwhile, legal and trust challenges (the Meta/YouTube ruling, erotic-chatbot concerns, user privacy) could materially shape timelines, regulation, and business models across big tech.