Big Tech’s $100M Democracy Detox Plan

by Puck | Audacy

28 min · November 19, 2025

Overview

This episode of The Powers That Be (Puck | Audacy), hosted by Peter Hamby with reporter Ian Krietzberg, examines a new, well‑funded political effort by AI industry backers to blunt or defeat politicians pushing AI regulation. The focal story: Leading the Future, a super PAC seeded with roughly $100 million from venture and AI figures, is targeting New York congressional candidate and state assemblyman Alex Bores over his sponsorship of the RAISE Act, a package of state‑level AI safety and transparency rules. The conversation places this attack in the broader context of tech's influence campaigns (echoing crypto's earlier playbook) and the policy battleground between state initiatives and industry‑favored federal preemption.

Key takeaways

  • Leading the Future is a heavily funded PAC (~$100M) whose backers include Andreessen Horowitz and individual OpenAI executives; its stated goal is to oppose AI regulation that it views as anti‑innovation.
  • Its first target is Alex Bores (NY-12 primary), a former software engineer who authored the RAISE Act in New York — legislation requiring transparency and safety measures for AI systems.
  • Tech’s messaging emphasizes stifling innovation, the race with China, and the harms of state‑level fragmentation; political operatives tied to prior industry campaigns (e.g., crypto) are running the effort.
  • Public polling shows broad bipartisan appetite for AI oversight, but many high‑level politicians are reluctant to move too aggressively because they want to foster local tech ecosystems and avoid alienating wealthy investors.
  • Campaign spending by tech/venture interests could make AI regulation a major midterm campaign issue and may deter knowledgeable legislators from proposing stricter, targeted rules.

Who’s involved

  • Leading the Future (super PAC) — primary funders and political operatives backing ad buys to influence elections.
  • Andreessen Horowitz — cited as the principal financial backer.
  • Greg Brockman (OpenAI co‑founder and president) and his wife — personal contributions noted.
  • Other donors and actors reported in broader tech political spending: Joe Lonsdale, Ron Conway, and Perplexity (per coverage).
  • Josh Vlasto — political operative with prior experience on crypto campaigns, associated with the PAC’s political strategy.

Case study — Alex Bores and the RAISE Act (NY)

  • Alex Bores: NY Assembly member, ~34, computer science background, running in the crowded Democratic primary for NY‑12.
  • RAISE Act (Responsible AI Safety and Education Act): bipartisan‑framed state bill pushed by Bores that would require AI companies to publish safety protocols, implement guardrails against dangerous outputs, and provide more transparency and accountability.
  • Status: Passed the New York Assembly and Senate; Governor Kathy Hochul has not signed it (she’s balancing innovation goals vs. regulation).
  • Why targeted: Bores is technically literate and actively pushing real, potentially impactful constraints — precisely the kind of legislator the industry fears, because such lawmakers can craft meaningful oversight.

Political context & dynamics

  • Industry playbook mirrors crypto's previous large outside spending to defeat or weaken critics (examples: Sherrod Brown in Ohio, Katie Porter in California).
  • Companies push for federal preemption to prevent a patchwork of state laws that could hamper scaling (and argue that state laws could stifle innovation).
  • Messaging themes from tech backers: regulation = anti‑innovation, urgency to “win the race with China,” and the claim that benefits of AI outweigh regulatory risks.
  • Polling: voters widely favor regulation; politicians face pressure to appear proactive on safety while also courting investment and not alienating big donors.

Policy areas that are politically salient (and easier to legislate)

  • Deepfakes and non‑consensual intimate imagery — framed as clear, tangible harms; examples include fraud via voice deepfakes, impersonation scams, and harassment (e.g., the Taylor Swift deepfakes and the targeting of students in schools).
  • Intellectual property / likeness protection: Tennessee’s “Elvis Act” cited as an example protecting artists' voices/likeness from AI imitation.
  • “Take It Down” style laws and state bills addressing non‑consensual AI‑generated images have bipartisan traction.
  • Broader risk‑based guardrails and transparency/reporting requirements are harder to craft and sell politically because harms can be hypothetical and enforcement unclear.

Tactics, implications, and takeaways

  • Tactics: multimillion‑dollar ad buys, tailored political messaging (echoing crypto's ad campaigns), and the targeting of knowledgeable lawmakers to chill effective oversight.
  • Implication for politics: AI will likely become a higher‑profile midterm issue; well‑funded industry players can shape electoral outcomes and the willingness of legislators to engage.
  • Legal/regulatory landscape: Expect continued tension between state initiatives (more aggressive on concrete harms) and industry/White House push for federal frameworks or preemption.
  • For regulators/policymakers: Bills that focus on real, demonstrable harms (deepfakes, fraud, non‑consensual imagery) are politically easier to pass; transparency and accountability requirements for big players are the core of many proposed laws.

Notable quotes / framing

  • “The people that know what's going on ... are certainly much more scary to the industry than the people who don't know what's going on.” — underscoring why industry targets technically literate legislators.
  • Industry’s core message summarized: “Any regulation is inherently anti‑innovation” and could slow crucial progress, especially relative to global competitors.

What to watch next

  • The NY‑12 Democratic primary (Alex Bores's race) — ad spending and messaging from Leading the Future will signal the scale and strategy of tech's political interventions.
  • Whether the RAISE Act (or parts of it) becomes law in New York and how other states respond.
  • Federal moves on AI: continued White House/industry alignment versus congressional efforts for substantive oversight.
  • How advocacy coalitions, civil‑society groups, and voters respond — particularly around tangible harms (deepfakes, fraud, safety) that resonate across party lines.

Practical recommendations for listeners/readers

  • If you follow policy: track Leading the Future’s ad buys and messaging to see how industry shapes the public debate.
  • If you follow politics: watch for campaigns that mirror prior crypto tactics — broad retail messaging that hides the specific policy focus.
  • If you care about AI harms: prioritize support for legislation addressing demonstrable harms (deepfakes, non‑consensual imagery, consumer fraud) while pushing for meaningful transparency from big AI firms.

Further reading: Ian Krietzberg's reporting at The Hidden Layer (Puck) for deeper coverage of the NY race and the Leading the Future PAC's actions.