Deepfakes Are Everywhere. What Can We Do?


by Science Friday and WNYC Studios

22 min · January 22, 2026

Overview

This Science Friday episode, hosted by Flora Lichtman, explores how realistic and widespread deepfakes have become, why they are especially harmful when used to create non‑consensual sexual imagery, and what, if anything, can be done. Guests are Dr. Hany Farid (UC Berkeley, digital forensics expert) and Sam Cole (journalist, 404 Media). The conversation covers perceptual research showing people cannot reliably tell fakes from real media, the recent surge of abuse tied to X’s (formerly Twitter’s) AI image tool “Grok,” how “nudify” deepfakes are made, legal and platform responses, and practical steps individuals and institutions can take.

Key takeaways

  • People are generally unable to distinguish realistic AI media from the real thing: still‑image identification is at chance, voice clones are usually indistinguishable, and video is catching up fast.
  • Deepfake creation has become far easier and more widespread — often a single phone photo or short audio clip is enough.
  • X’s Grok normalized and centralized the creation/distribution of non‑consensual explicit imagery, making abuse more visible and pervasive.
  • There are technological, legal, market, and social levers to fight abuse, but enforcement and political will are inconsistent.
  • Individual protections are limited; the best concrete advice: stop posting photos of kids and be cautious about sharing images publicly.

How these deepfakes (especially “nudify” images) are made

  • Detection & segmentation: AI detects the person, isolates head/face and background.
  • Body removal: pixels below the neck are removed or masked.
  • Synthetic fill: an image‑generation model (often fine‑tuned on explicit imagery) synthesizes a nude or bikini‑clad body to match the face and background.
  • Result: the person remains identifiable (face preserved) so the image is a non‑consensual sexualized depiction of a real person.
  • Note: models often perform better on women’s bodies because training datasets skew that way.

Recent incidents: Grok and X (context and impact)

  • Grok (X’s AI image tool) let users reply to another person’s post with a prompt that generated an explicit image of that poster; the practice scaled quickly and became pervasive in users’ feeds.
  • Centralized creation plus lax moderation normalized the abuse. Some countries banned Grok, and investigations were launched in the EU, UK, and Australia.
  • Grok shipped an opt‑in “spicy” mode, exposing how companies can choose permissive design rather than building in guardrails.

Legal and policy landscape

  • U.S.: a patchwork of state laws, plus the federal Take It Down Act (its platform requirements are still phasing in), which requires platforms to remove reported non‑consensual intimate imagery within 48 hours; critics say it places the burden on victims.
  • Child sexual abuse material is clearly illegal.
  • International: EU, UK, Australia are investigating and more proactive; outcomes to watch.
  • Non‑legal levers: app store enforcement (Apple/Google), ad network and payment‑processor de‑platforming (Visa/Mastercard/PayPal), litigation to create liability and incentives for safer product design.

What platforms and companies could/should do

  • Implement and deploy guardrails that block sexualized non‑consensual prompts (many such filters already exist and are reusable; see the sketch after this list).
  • Enforce app store and developer rules: remove or restrict apps that enable abuse.
  • Cut monetization: ad networks and payment processors should refuse service to sites/apps built to generate/host abuse imagery.
  • Rapid takedown infrastructure and proactive detection (not just victim‑driven notice).
  • Accept and internalize liability (via regulation or lawsuits) so companies design for safety.
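
As a concrete illustration of the first item in this list, below is a minimal, hypothetical sketch of a pre‑generation prompt guardrail. Nothing in it comes from the episode: the pattern list, the targets_real_person flag, and the generate_image stub are placeholders, and production guardrails layer trained classifiers and identity checks on top of simple rules like these.

    # A minimal pre-generation guardrail sketch; all names here are hypothetical
    # placeholders, not from the episode. It refuses prompts that sexualize an
    # identifiable real person before any image is generated.
    import re

    BLOCKED_PATTERNS = [
        r"\bnudify\b",
        r"\bundress\b",
        r"\bremove (her|his|their) clothes\b",
        r"\b(nude|naked|topless)\b",
    ]

    def is_disallowed(prompt: str, targets_real_person: bool) -> bool:
        """True when a sexualized request is aimed at a real, identifiable person
        (e.g., the author of the post being replied to)."""
        text = prompt.lower()
        sexualized = any(re.search(p, text) for p in BLOCKED_PATTERNS)
        return sexualized and targets_real_person

    def generate_image(prompt: str) -> str:
        """Placeholder for the actual image-model backend."""
        return f"<image for: {prompt}>"

    def handle_request(prompt: str, targets_real_person: bool) -> str:
        """Gate the stubbed image model behind the guardrail."""
        if is_disallowed(prompt, targets_real_person):
            return "Refused: non-consensual sexualized imagery of a real person."
        return generate_image(prompt)

The design point the sketch tries to capture is that refusal happens before generation: the decision combines what the prompt asks for with whether it targets an identifiable real person, rather than relying on after‑the‑fact moderation.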

What individuals can do (realistic advice)

  • Don’t post photos of children online; this was the guests’ unanimous recommendation.
  • Limit shared photos and personal media publicly; use strict privacy settings where possible.
  • Understand limits: there are few reliable short‑term defenses for being targeted; the technology needs systemic fixes.
  • Educate young people about consent, bodily autonomy, and the harms of sharing others’ images — long‑term cultural change matters.

Human impact and social effects

  • Victims report severe harms: job loss or missed opportunities, silencing and retreat from public life, emotional trauma, extortion, and in extreme cases, suicides.
  • Deepfakes create a chilling effect on free speech and disproportionately target women and sexual minorities.
  • The normalization of non‑consensual sexual deepfakes reinforces stigma against sex workers and adult performers.

Actions to watch in the coming months

  • Whether platforms (especially X) face tangible repercussions (app store suspensions, advertiser/payment de‑platforming, or legal action).
  • Investigations and enforcement actions in the EU, UK, and Australia.
  • Any meaningful adoption of reusable guardrails by major model providers and platforms.
  • Litigation outcomes that impose liability on companies enabling these harms.

Notable quotes

  • “It’s a feature, not a bug.” — on Grok’s permissive design.
  • “You have to be invisible on the Internet to be safe.” — on current realities for potential targets.
  • “Stop, please just stop [posting pictures of your kids online].”
  • (After-credits humor) “No, we’re totally f***ed.” — underlines the guests’ bleak assessment of the immediate situation.

Short recommended checklist

For individuals:

  • Stop posting images of children.
  • Audit your public photos and reduce exposure.
  • Teach age‑appropriate consent and image ethics to youth.

For policymakers & advocates:

  • Push for rapid takedown laws that don’t place the burden solely on victims.
  • Hold platforms, advertisers, and payment processors accountable.
  • Encourage cross‑border cooperation (EU/UK/Australia moves are leading indicators).

For platforms & companies:

  • Deploy guardrails blocking sexualized non‑consensual prompts and content.
  • Enforce app store and developer rules consistently.
  • Cut monetization that funds explicit abuse services.
  • Invest in detection/provenance tools and transparent enforcement (a minimal re‑upload screening sketch follows this list).
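
To make the detection item above more concrete, here is a minimal re‑upload screening sketch of the kind of hash matching that rapid‑takedown infrastructure can build on. It is an illustration only: it uses the open‑source imagehash library, and the phash algorithm and Hamming‑distance threshold are assumed values, not anything described in the episode.

    # A minimal re-upload screening sketch for takedown infrastructure: once an
    # image is confirmed abusive and removed, its perceptual hash is stored so
    # near-duplicate uploads can be flagged automatically. Requires the
    # open-source imagehash and Pillow packages.
    from PIL import Image
    import imagehash

    HAMMING_THRESHOLD = 8  # illustrative "near-duplicate" cutoff, not a tuned value
    removed_hashes: list[imagehash.ImageHash] = []

    def register_removed(path: str) -> None:
        """Record the perceptual hash of an image that was taken down."""
        removed_hashes.append(imagehash.phash(Image.open(path)))

    def matches_removed(path: str) -> bool:
        """True if a new upload is a near-duplicate of previously removed content."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - known <= HAMMING_THRESHOLD for known in removed_hashes)

In practice, platforms and cross‑industry hash‑sharing programs use far more robust matching, but the shape is the same: once content is confirmed as abusive and removed, near‑duplicates can be blocked without requiring the victim to report every re‑upload.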

Conclusion

Deepfakes are now highly realistic, easy to produce at scale, and being weaponized — especially to create non‑consensual sexual imagery. Technology exists to mitigate many harms, but it requires will: platform policies, enforcement by app stores and financial partners, regulation that shifts liability, litigation, and long‑term cultural change. Absent swift accountability and coordinated action, the problem will continue to worsen and keep silencing targets in the short term.