S6 Ep31: PSA: AI is NOT Your Boyfriend!! (with Megan McArdle)

by The Bulwark

1h 3m · March 28, 2026

Overview

This episode of The Bulwark’s Focus Group features Sarah Longwell interviewing Megan McArdle about how artificial intelligence is reshaping journalism, politics, everyday life, and voters’ trust in information. The conversation blends Megan’s technologist-turned-columnist perspective, findings from Bulwark focus groups with mostly older (often pro-Trump) voters, and real-world use cases and anxieties about AI — from productivity gains to deepfakes, parasocial chatbot relationships, and the local politics of data centers.

Key takeaways

  • AI is a cross-cutting civilizational change that does not (yet) split neatly along partisan lines; people of different politics are independently trying to understand its personal and civic effects.
  • Journalism is already being disrupted: AI summaries reduce search-driven readership, and routine reporting/aggregation is most vulnerable, while scoops and genuine relationships with audiences may persist.
  • People will continue to seek trusted human voices and institutions even as AI-generated content proliferates; being consistently trustworthy is an increasingly valuable differentiator.
  • Practical benefits are real: professionals use AI for drafts, presentations, triage, summaries and productivity tasks — but those gains coexist with risks like cognitive atrophy, cheating, and unhealthy reliance.
  • Deepfakes and synthetic media are accelerating the erosion of trust. Voters increasingly assume content may be fake and struggle with “what to believe,” which has major political implications.
  • Local political friction around AI often centers on data centers (electricity, water use, noise, siting) — a mix of legitimate NIMBY concerns, misattribution of rising bills, and anti-AI sentiments exploited by politicians.
  • Long-term upside is plausible (more abundance, shorter work weeks, better health/education), but society must proactively mitigate harms (regulation, disclosure, infrastructure investment, cultural norms).

Topics discussed

  • Comparing the AI moment to past tech waves (dot-coms, the internet, social media): greater cultural and political salience now.
  • How AI is changing journalism: collapsed search traffic, hollowing out mid-tier reporting, difficulties monetizing investigative work.
  • Parasocial relationships, trust, and the role of human messengers (politicians, pundits, and journalists).
  • Everyday uses reported in focus groups: drafting emails, patient triage, summarizing reports, presentations, letters to grandkids, and workflow automation.
  • Misuse and harms: conspiracy chasing, AI “boyfriends,” therapy replacement, student cheating, suicide linked to AI interactions, scams using synthetic voices.
  • Deepfakes and political misinformation: voter fear that video/audio can’t be trusted; implications for campaign ads and persuasion.
  • Data-center politics: local electricity/water impacts, siting, and how these concerns feed into broader opposition to AI; policy levers and technological fixes (behind-the-meter generation, grid investment).
  • Policy and cultural remedies: copyright reform, disclosure/labeling, institutional trust-building, education/AI-proofing schools, and investing in energy abundance.

Notable quotes & insights

  • “AI is a civilizational change that is almost too big for a political punditry show.” — frames scale of the issue.
  • “Search has collapsed as a source of traffic…people are just seeing these AI summaries, and then they don’t feel like they need to read the article.” — on journalism economics.
  • “AI will hollow out the industry…there’s the middle that tells you what happened — that’s going to be hit hard.” — on what kinds of reporting are most at risk.
  • “AI is not your boyfriend. Public service announcement.” — succinctly captures concerns about parasocial chatbot relationships.
  • “We are heading into the very first AI election of our lifetimes.” — on political implications of synthetic media.
  • “Be more trustworthy. Be the person who doesn’t BS people.” — Megan’s pragmatic prescription for individuals and institutions in an era of synthetic content.

Focus-group snapshots (how older voters are using and reacting to AI)

  • Productive uses: drafting tense workplace messages, patient triage summaries, slide decks for presentations, summarizing long reports, automating spreadsheet analysis (from hours to minutes), writing letters/short books for grandchildren.
  • Personal/less healthy uses: some treat chatbots like confidants or partners; others use AI to pursue conspiracy content; some report over-reliance (skipping reading, losing mental-arithmetic skills).
  • Worries and examples: fake George Carlin clip used politically, AI-generated crying images in ICE reporting, robocalls/fake voices for scams, convincing deepfake videos (Brad Pitt/Tom Cruise example) — many voters say they “assume it’s fake until proven real.”
  • Data center concerns: higher bills, noise, groundwater risks, property values, and resentment about siting decisions; voters want “someone to regulate them” and make operators pay for grid upgrades.

Implications for politics, journalism, education, and infrastructure

  • Politics: campaigns will use AI for persuasion and attacks; synthetic content may erode the persuasive power of raw video and personal appeals; trusted messengers and institutions gain relative importance.
  • Journalism: outlets need new business models to fund original reporting; success may hinge on trust, verification, and providing source material that AI tools can’t easily replicate (exclusives, field reporting).
  • Education: schools should consider “AI-proofing” curricula and testing; younger students may be especially harmed by outsourcing learning to AI.
  • Infrastructure & climate: scaling AI demands compute and energy; public concern about data-center impacts can become a political wedge — policy solutions include grid investment, siting rules, and requiring operators to internalize upgrade costs.
  • National security: long-term advantage in AI depends not just on software but also on abundant, clean energy and chip/compute capabilities.

Recommended actions and policy ideas (from the discussion)

  • Strengthen trusted institutions and support journalism that produces new, verifiable reporting.
  • Require clear disclosure/labeling of AI-generated content and develop technological verification tools.
  • Update copyright and legal frameworks to address synthetic training data and attribution.
  • Invest in grid capacity, diverse energy sources (including nuclear/renewables), and policies to reduce bottlenecks so data centers don’t crowd out consumers.
  • Local/regional siting rules and permit conditions to mitigate nuisances (noise, water) and require cost-sharing for infrastructure upgrades.
  • Encourage schools to limit or carefully supervise AI use for young students; teach digital skepticism and verification skills.
  • Promote cultural norms and tools for verification: rely on trusted messengers, fact-checking institutions, and new verification standards.

Bottom line

AI is already changing how people work, consume news, campaign, and interact — with tangible productivity benefits and clear social harms. The battle over the next phase won’t be just about technology capability; it will be about trust, institutions, infrastructure, and politics. Practical mitigation (policy, education, trustworthy media) plus proactive investment (energy, grid, verification tech) are essential to steer AI toward broadly beneficial outcomes rather than deepening mistrust and inequality.

Further listening: Megan McArdle’s Reasonably Optimistic podcast; Bulwark coverage and the focus-group episodes referenced throughout the show.