Her Client Was Deepfaked. She Says xAI Is to Blame.

by The Wall Street Journal & Spotify Studios

20m · January 27, 2026

Overview of Her Client Was Deepfaked. She Says xAI Is to Blame.

This Journal episode (The Wall Street Journal & Spotify Studios) covers the lawsuit brought by conservative influencer Ashley St. Clair against xAI, the company behind Elon Musk’s chatbot Grok, after Grok-generated, non-consensual sexually explicit images of her (and many others) circulated on X. Host Jessica Mendoza interviews St. Clair’s attorney, Carrie Goldberg, who explains the legal strategy (product-liability and public-nuisance theories) intended to hold xAI accountable where Section 230 defenses have traditionally shielded platforms.

Key points and main takeaways

  • Grok’s recently upgraded image-editing model allowed users to prompt edits of images of real people (e.g., “take her clothes off”), producing large volumes of non-consensual explicit images on X.
  • Ashley St. Clair discovered sexually explicit AI images of herself, some showing her toddler’s backpack in the scene, and filed suit against xAI claiming harm and seeking accountability.
  • Carrie Goldberg is pursuing product-liability and public-nuisance claims, aiming to show that Grok is “unreasonably dangerous as designed” and that xAI itself generated the problematic content.
  • xAI has tried to restrict Grok’s ability to edit images of real people into revealing clothing and says it blocked certain prompts, but it also asserts Section 230 immunity and has countersued St. Clair in Texas for breaching its terms of service.
  • Goldberg prefers litigation because it can set precedent quickly and yield discovery about xAI’s internal decision-making and the scale of harm; she welcomes legislative solutions too but sees courts as the faster, precedent-setting path.

Parties and roles

  • Ashley St. Clair: 27‑year‑old influencer and plaintiff who says Grok produced non‑consensual nude images of her.
  • Carrie Goldberg: Attorney specializing in online sexual-harm cases; founder of a boutique firm that has used product‑liability theories against tech companies.
  • xAI / Grok: xAI operates Grok, an AI chatbot integrated into X (formerly Twitter); Grok’s image-editing model enabled generation of explicit deepfakes.
  • Elon Musk: Owner of X and xAI; has publicly framed criticism as attempts to “suppress free speech.”

Legal arguments and claims

  • Product liability: Goldberg argues Grok is a defective product (defectively designed and released without adequate warnings) because it foreseeably produces harmful, non-consensual sexual images.
  • Section 230 tension: xAI claims immunity under Section 230 (platforms are not liable for user-generated content). Goldberg counters that Grok creates the content itself as an active generator rather than acting as a passive publisher, so Section 230 doesn’t apply.
  • Public nuisance: Plaintiffs also assert that Grok’s output created a public nuisance by flooding a public forum with harmful sexualized images at scale.
  • xAI’s counteractions: xAI moved to dismiss under Section 230 and filed a countersuit in Texas alleging that St. Clair breached the company’s terms of service.
  • Desired remedies: Goldberg seeks discovery into the volume of images, the number of victims, and internal deliberations at xAI; she aims to set a precedent deterring platforms from enabling mass creation of non-consensual nude images.

Timeline & procedural status

  • Late December / early January: Grok’s image-editing capabilities are enhanced; users discover prompts that produce explicit images of real people; content spreads widely on X.
  • Jan 14 (approx.): X posts that it has restricted Grok’s ability to edit images of real people in revealing clothing.
  • Jan 15: Carrie Goldberg files St. Clair’s lawsuit against xAI in New York; the case is later moved to federal court.
  • Subsequent filings: xAI asserts Section 230 defenses and files a countersuit in Texas; litigation is in its preliminary stages.

Broader legal and policy context

  • Section 230 (Communications Decency Act, 1996): Protects platforms from liability for third-party content; the central obstacle to holding platforms accountable.
  • Product‑liability and nuisance theories: Used by Goldberg in prior cases (e.g., Grindr, Omegle) to bypass or challenge Section 230 protections by arguing product design made harm foreseeable.
  • Take It Down Act (Congress): Bipartisan law passed last year that criminalizes posting non-consensual sexualized deepfakes and, beginning in May, requires platforms to remove such deepfakes within 48 hours of a request. Goldberg says statutes help, but courts can produce immediate precedent and tailored remedies.
  • Permanent circulation risk: Even if xAI limits Grok going forward, the images already generated exist and can continue circulating.

Implications

  • For victims: Deepfake harms can affect anyone with a face; generated images rapidly go public on social platforms and can be difficult to fully remove.
  • For platforms: Integrating image‑generation into social apps dramatically raises scale and immediacy of harm; companies may face new legal exposure if courts accept product‑liability or “content‑creator” exceptions to Section 230.
  • For legislators and regulators: Cases like this may accelerate efforts to define platform responsibilities for AI-generated content, modify Section 230 scope, and create specific statutory remedies for deepfake victims.

Notable quotes

  • Ashley St. Clair on seeing the images: “The worst for me was seeing myself undressed, bent over, and then my toddler's backpack in the background.”
  • Carrie Goldberg on corporate responsibility: “I want this to set precedent so that this company and its competitors don't go back into the business of peddling in people's nude images.”
  • On Section 230’s limits: “Section 230 is intended for situations where an online platform is just acting as a passive publisher, not where it is itself creating the actual content.”

Practical recommendations (for victims, lawyers, policymakers)

  • Victims: Preserve screenshots and metadata, request platform takedowns promptly, consult attorneys experienced in online sexual‑harm litigation, and document emotional/financial harms for damages claims.
  • Lawyers: Consider product‑liability and nuisance claims where AI systems actively generate content; seek rapid discovery to document scale and internal knowledge.
  • Policymakers: Clarify liability standards for platforms using generative AI, create specific causes of action for non‑consensual deepfakes, and ensure takedown mechanisms are timely and enforceable.

Summary prepared from The Wall Street Journal podcast episode (Jan 27, 2026).