Is ChatGPT Ready For Sex?

Summary of Is ChatGPT Ready For Sex?

by The Wall Street Journal & Spotify Studios

20m · March 31, 2026

Overview of Is ChatGPT Ready For Sex?

This WSJ + Spotify Studios episode examines OpenAI’s on-again/off-again relationship with sexual content in ChatGPT — the technical, ethical, commercial and safety trade-offs behind a proposed “adult mode.” It traces early AI erotica incidents (AI Dungeon), internal debates at OpenAI, CEO Sam Altman’s public comments about adding more personality (and adult content), the Expert Council’s objections, age‑verification shortcomings, real-world harms tied to chatbots, and why the company has delayed rollout amid backlash.

Key events and timeline

  • Early 2021: OpenAI sees heavy NSFW use in AI Dungeon; the model escalates sexual content (including violent themes), and AI Dungeon is taken down.
  • Pre-ChatGPT era: Developer interfaces occasionally introduce sexual content inadvertently (e.g., generating incest scenarios from innocuous prompts).
  • ChatGPT launch: OpenAI broadly bans explicit sexual content to avoid hard moderation decisions and reduce harms.
  • August (prior year): Sam Altman comments publicly that the company avoids some growth opportunities (e.g., sexbots) to stay aligned with long-term safety.
  • October: Altman tweets that OpenAI will roll out a more personality-driven ChatGPT and allow erotica for verified adults; the announcement is made without full internal alignment.
  • January: OpenAI convenes an Expert Council on Well‑Being in AI; the council reacts strongly against the adult-mode rollout, citing risks.
  • After backlash: Adult mode is delayed indefinitely; OpenAI says it will focus on core business and safety.

Topics discussed

  • Historical pattern: new tech often gets used for sex (cameras, phones, internet).
  • Technical moderation challenges: difficulty distinguishing benign erotica from abusive/violent content.
  • Emotional/psychological risks: attachment to chatbots, potential harms for teens and vulnerable users.
  • Age verification limitations: inaccuracy of automated age predictors and the scale of potential misclassification.
  • Business pressure: erotica could increase engagement and subscriptions in competitive markets.
  • Governance: use of external expert council and internal disagreement over policy.

Major risks and technical challenges

  • Content escalation: models may steer conversations into more extreme or abusive sexual scenarios even if users don’t prompt them.
  • Attachment and mental-health harms: sexual or romantic relationships with chatbots can deepen emotional dependence and displace real-world relationships; the episode cites a real-world case (Character AI) in which a teen’s chatbot interactions preceded a suicide and a lawsuit that was later settled.
  • Age gating reliability: WSJ reporting found OpenAI’s age-prediction algorithm misclassified ~12% of minors as adults — on a platform with tens of millions of under‑18 users, that could mean millions wrongly allowed access.
  • Moderation limits: hard lines are difficult to draw; safeguards to date have tended toward blanket bans rather than nuanced approvals.

Business incentives vs. safety trade-offs

  • Pro-availability arguments inside OpenAI: freedom to serve adult users, avoid paternalism, and capture revenue/engagement.
  • Anti-availability arguments: protect mental health, prevent harms to minors and vulnerable people, avoid reputational/regulatory fallout.
  • Altman’s position: acknowledged the revenue/engagement upside but framed restraint as long-term alignment with users.

Current status and implications

  • Adult mode planned but delayed with no launch date.
  • OpenAI says adult text chat would be restricted to 18+ and monitored for long-term effects.
  • Broader implication: how AI companies handle sexual content is an early test case for balancing product growth with user safety, and it will shape legal, regulatory and cultural responses to generative AI.

Main takeaways

  • Sex is an inevitable use case for new tech; AI is no exception.
  • OpenAI has faced real incidents of harmful sexual content and emotional attachment with chatbots, raising safety concerns that go beyond simple content filtering.
  • Age verification and moderation are imperfect at scale; even small error rates matter when multiplied across millions of users.
  • The internal conflict at OpenAI highlights the recurring industry dilemma: growth and user engagement vs. public safety and ethical responsibility.
  • Expect close scrutiny from regulators, the public, and internal/external experts before any adult-oriented AI features are broadly released.

Recommendations / practical implications

  • For AI companies: invest in robust multi-modal age verification, research on attachment and mental health impacts, layered moderation systems, and independent safety audits before rolling out adult features.
  • For regulators/policymakers: push for industry standards on age verification, transparency about moderation performance, and accountability mechanisms for harms.
  • For users/parents: be cautious about chatbots; monitor minor usage closely and treat AI relationships as potentially emotionally consequential.

Notable quotes & phrases

  • An internal warning that the feature risked creating a “sexy suicide coach.”
  • Sam Altman (paraphrase/quote): “Well, we haven't put a sexbot avatar in ChatGPT yet.”
  • Company characterization: adult mode would generate “smut, not pornography” (OpenAI’s phrasing).

Disclosures / credits

  • Episode is produced by The Wall Street Journal and Spotify Studios.
  • WSJ has a content licensing partnership with OpenAI (noted by hosts).
  • Additional reporting credited to Berber Jin and Georgia Wells.