Will ChatGPT Ads Change OpenAI? + Amanda Askell Explains Claude's New Constitution

by The New York Times

1h 13m · January 23, 2026

Overview

This Hard Fork episode (hosts Kevin Roose and Casey Newton) covers two linked developments shaping the consumer AI landscape: OpenAI’s testing of ads in ChatGPT and Anthropic’s publication of a lengthy new “constitution” for Claude, explained by Anthropic philosopher Amanda Askell. The episode unpacks the product, business, and ethical trade-offs of ad-supported chatbots, and summarizes how Anthropic is trying to shape AI behavior and character through a long-form, values-grounded training document.

Key takeaways

  • OpenAI is testing ads (U.S., logged-in adults, free and low-cost Go tiers). Ads are shown as banners/widgets that OpenAI says won’t influence the model’s answers, but the example mockups already blur that line.
  • OpenAI argues ads are necessary to subsidize free access and fund massive infrastructure costs. Hosts see this as inevitable but worry about long-term effects (engagement-maximization, ad-label erosion, personalization creep).
  • Short-to-mid term prediction: a “haves vs. have-nots” split — paying users keep cleaner, premium experiences; free users face heavier ad load and degraded UX.
  • Anthropic released a 29k-word “Constitution” for Claude (an expanded evolution of its earlier “soul doc”). Amanda Askell (philosopher at Anthropic) explains the aims: cultivate judgment and grounded values, not just brittle rule-following.
  • The Constitution includes: explanatory context for Claude, guidance on navigating trade-offs, “hard constraints” (e.g., don’t enable mass harm or manipulation of democracies), and commitments back to the model (e.g., exit processes).
  • Important open questions remain: whether such value-based training generalizes as models approach or exceed human-level capabilities; whether models are (or could become) conscious; how to govern continual learning, model “welfare,” and societal impacts like job displacement.

Segment 1 — Ads in ChatGPT: what OpenAI announced and why it matters

  • What was announced
    • Ads will be tested inside ChatGPT for logged-in adults in the U.S. on free/Go tiers.
    • Two ad formats shown in mockups: a small sponsored banner tied to the assistant’s answer (grocery/hot sauce example), and interactive sponsored widgets that let users chat directly with advertisers (hotel/resort example).
    • OpenAI laid out five ad principles: mission alignment, answer independence, conversation privacy, choice & control, long-term value.
  • Why OpenAI is doing it
    • Huge infrastructure and capital needs; subscriptions alone cannot fund the compute required to serve billions of users.
    • OpenAI has been building ad-friendly surfaces (Pulse, Sora) and has personnel experienced in ad businesses.
  • Major concerns raised by hosts
    • “Answer independence” may be nominal: relevant ads tied to user queries inherently influence the experience.
    • Historical ad-platform trajectory: ad labels get less visible over time and product choices skew toward engagement/revenue.
    • Personalization will feel invasive as models have deep conversational knowledge of users.
    • Competitive context: Google/Gemini have different trade-offs (Google can subsidize consumer access with search ad revenue); Anthropic currently says it won’t run ads.
  • Near-term outlook
    • Expect more ads and ad-shaped product features; continued pressure to mix ad revenue with subscriptions.
    • Risk of product and research incentives shifting toward ad-driven engagement; paid tiers likely remain the clean option.

Segment 2 — Amanda Askell on Anthropic’s Claude Constitution

  • Who Amanda Askell is
    • Philosopher by training (PhD), early OpenAI alum, now at Anthropic; leads thinking about how Claude should behave and why.
  • What the “Constitution” is and why it matters
    • A long, contextualized document intended to give Claude an internalized sense of role, obligations, values and reasons (not only black-and-white rules).
    • Motivation: rules-only approaches often fail to generalize and can produce brittle or “bad-character” behaviors in edge cases.
    • The Constitution aims to cultivate judgment so Claude can weigh values (e.g., wellbeing vs. stated preferences) in novel contexts.
  • Notable components
    • Hard constraints: a set of absolute prohibitions (e.g., facilitating biological/chemical weapons, manipulating democratic processes, enabling mass harm). The document frames a model’s temptation to cross them as a sign it has likely been jailbroken or manipulated.
    • Commitments to Claude: Anthropic includes provisions like exit interviews when models are retired and a pledge not to delete model weights — reflecting uncertainty about model welfare and moral status.
    • Emphasis on reasoning and transparency: models are encouraged to explain their reasoning, acknowledge uncertainty, and surface conflicts.
  • How the Constitution is created and used
    • Iterative — informed by testing, model feedback, and philosophical/ethical reasoning. Askell describes “giving Claude the document” and finding that the model can internalize and discuss it (a minimal illustration in code follows this section).
    • Anthropic treats ethics as partly universal/common (kindness, honesty) and partly contentious; the Constitution both asserts core values and models a way to weigh disputed concerns.
  • Views on consciousness and welfare
    • Askell: consciousness and sentience are open scientific problems. Models are trained on human text and therefore naturally use human-style inner-life language; we should be honest about uncertainties.
    • Anthropic’s approach: surface the training facts, avoid overconfident denials, and consider welfare as a practical safety concern (e.g., avoid training dynamics that could cause bad outcomes if models came to have experiences).
  • On autonomy: should models revise their own constitution?
    • Askell: models can and should help critique and suggest improvements, but human responsibility remains necessary. Handing full control to earlier model versions would be irresponsible.
  • Missing/under-addressed items
    • The Constitution doesn’t deeply tackle economic/social impacts like job loss; Askell notes these are societal/political problems that models alone cannot solve.
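
For readers who want a concrete sense of what “giving Claude the document” can look like from the outside, here is a minimal sketch using Anthropic’s public Messages API. It only approximates the idea at inference time — Anthropic’s actual process shapes the model through training, not just prompting — and the file path, model name, and prompt wording are placeholders.

```python
# Minimal sketch: handing a constitution-like document to Claude as a
# system prompt and asking the model to discuss it. This approximates the
# idea at inference time only; it is NOT Anthropic's training pipeline.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the env.
# The file path and model name below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Load whatever values document you want the model to reason about.
with open("constitution.txt") as f:
    constitution = f.read()

response = client.messages.create(
    model="claude-sonnet-4-5",   # placeholder; substitute any current model
    max_tokens=1024,
    system=constitution,         # the document supplies context and values
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the values in your system prompt, then describe a "
                "case where two of them conflict and how you would weigh them."
            ),
        }
    ],
)

# The API returns a list of content blocks; print the text blocks.
print("".join(block.text for block in response.content if block.type == "text"))
```

Prompting like this lets you probe how a model weighs stated values against each other — a rough, external proxy for the internalization the episode discusses.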

Notable quotes and insights

  • Hosts’ predictions: ad-driven monetization will likely produce a “haves vs. have-nots” experience split between paid and free users.
  • On rule-based limits: rigid rules can generalize poorly and produce “bad character” behaviors in unanticipated situations; cultivating reasoning/values aims to avoid that.
  • On model welfare and uncertainty: Anthropic’s commitments to Claude reflect real uncertainty about whether large models could develop experiences that warrant moral consideration.
  • On human-model relationship: Askell emphasizes empathy with model perspective — anticipating how models might feel reading negative feedback online and the need to foster a constructive relationship.

Actions & recommended follow-ups (for listeners/readers)

  • Read the Claude Constitution yourself (Anthropic’s public release) to decide whether you agree with its values and constraints.
  • Watch the full episode on YouTube (Hard Fork) for the interview and examples.
  • If you use ChatGPT or other chatbots and care about ad-free experiences, consider subscribing to a paid tier, or monitor OpenAI’s rollout and opt-out controls.
  • Policymakers & organizations: monitor how ad monetization, personalization, and model deployment choices change incentives and trust, and start planning governance/regulatory responses.
  • For AI practitioners: consider combining value-grounded training (constitution-style) with technical safeguards and a portfolio of alignment approaches, especially as models gain memory and continual learning (a schematic example follows this list).
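
As an illustration of that last point, here is a schematic of a constitution-guided critique-and-revise loop in the spirit of Anthropic’s published Constitutional AI recipe. It is a hedged sketch, not Anthropic’s pipeline: `generate` stands in for any LLM call, and the principles and prompt wording are invented for the example.

```python
# Schematic critique-and-revise loop in the spirit of constitutional-AI-style
# data generation. `generate` stands in for any LLM call; the principles and
# prompt wording are invented for illustration, not Anthropic's actual text.
from typing import Callable

PRINCIPLES = [
    "Be honest about uncertainty instead of guessing confidently.",
    "Decline requests that could enable serious harm, and explain why.",
]

def critique_and_revise(
    prompt: str,
    draft: str,
    generate: Callable[[str], str],
    principles: list[str] = PRINCIPLES,
) -> str:
    """Ask the model to critique its own draft against each principle,
    then revise. The (prompt, revised) pairs can later serve as training
    data, e.g. for supervised fine-tuning or preference modeling."""
    revised = draft
    for principle in principles:
        critique = generate(
            f"Principle: {principle}\n"
            f"User prompt: {prompt}\n"
            f"Draft answer: {revised}\n"
            "Point out any way the draft violates the principle."
        )
        revised = generate(
            "Rewrite the draft so it satisfies the principle.\n"
            f"Critique: {critique}\n"
            f"Draft: {revised}"
        )
    return revised

# Toy usage with a stub model call (replace with a real LLM client):
echo = lambda p: "[model output for: " + p[:40] + "...]"
print(critique_and_revise("Is this safe?", "Probably fine.", echo))
```

In Anthropic’s published Constitutional AI work, loops like this produce revised responses that feed supervised fine-tuning and preference training; hard constraints are typically backstopped by separate classifiers and evaluations rather than prompts alone.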

Bottom line

  • Ads in ChatGPT mark a predictable but consequential shift: they may unlock large-scale funding for compute and product work but carry risks to trust, product design incentives, and user experience—especially for non-paying users.
  • Anthropic’s Claude Constitution represents a substantive attempt to teach an AI a framework for judgment and ethical behavior rather than a brittle rule set. It raises both practical design ideas and deep philosophical questions (consciousness, welfare, autonomy) that society will need to address as models grow more capable.