How is AI shaping democracy?

by Practical AI LLC

48 min · January 27, 2026

Overview of How is AI shaping democracy? (Practical AI Podcast)

This episode of the Practical AI Podcast (hosts Daniel Whitenack and Chris Benson) features Bruce Schneier (Berkman Klein Center, Harvard) discussing his book Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship (co-authored with Nathan Sanders). Schneier frames AI as a power‑enhancing technology that will reshape elections, legislation, administration, courts, and citizen engagement—sometimes for the better, sometimes for the worse—depending on who wields it and how society chooses to structure access, governance, and incentives.

Key points and main takeaways

  • AI is a power multiplier, not an autonomous moral actor: it amplifies the intentions and resources of its users—so it can improve or degrade democratic systems depending on deployment and governance.
  • The deepfake narrative is only one small part of AI's democratic impact. More consequential are predictive systems, agentic tools, and middleware that integrate AI into existing social and administrative processes.
  • Think “compared to what?” when evaluating AI performance. AI can be better than humans in some constrained tasks (e.g., ER transcription, large-scale scheduling), worse in others, and sometimes the only option where no human alternative exists.
  • The problem is often political, not technical: many harms AI exacerbates have known technical fixes; the barrier is political will and incentives (e.g., market concentration, corporate choices).
  • Decentralization and public AI are possible: government‑ or publicly funded core models (examples from Switzerland/ETH) show viable non‑corporate options; cheaper, specialized models and device‑side AI will further diversify the ecosystem.
  • Socioeconomic disruption is likely on the scale of the Industrial Revolution: job displacement in skilled professions (law, accounting, medicine, etc.) will require structural policy responses (retraining, social safety nets, UBI debates).

Topics discussed

Structure of the book

  • Five domains where AI intersects with democracy:
    1. AI and elections: campaign tools, AI avatars, messaging, polling, get‑out‑the‑vote (GOTV) operations.
    2. AI in legislation: drafting, analyzing, and improving laws (examples in France, Chile).
    3. Government administration: making public services more efficient and responsive.
    4. Courts: case assignment, scheduling, workflow efficiencies—not AI judges, but management and assistance tools.
    5. Citizens: tools for organization, understanding choices, and participating in civic life.

Types of AI interactions to know

  • Predictive AI (maps, feeds) vs. generative/chat AI (chatbots).
  • Agentic systems and middleware: a lot of the value and risk sits in the software between user inputs and model outputs, and in the integrations with other systems (databases, government services, rule-based code).
  • Constrained/specialized models often outperform large general models within narrow domains and reduce hallucinations.

Global and political dynamics

  • Concentration of power in large tech firms is a political/market outcome, not an inevitable technical one.
  • Countries and regions (Switzerland, EU, Singapore, Taiwan, Brazil, Germany, Chile, Japan) are taking varied approaches—some building public models or specialized stacks to assert digital sovereignty.
  • Access inequality (haves vs. have‑nots) will shape democratic outcomes; many AI systems will be deployed on people without their choice or meaningful consent.

Notable examples & case studies

  • Japan: A young engineer (referred to in the interview as Takahiro Anno) used AI avatars and interactive tools to reach voters and later won office; his party leverages AI to keep constituents engaged.
  • Germany: A government‑backed conversational voter guide (chatbot) summarizes party positions in an interactive way—popular with younger voters and effective when constrained to narrow data.
  • CalMatters (California): A watchdog uses AI to produce a private “tip sheet” for journalists—surfacing anomalies (campaign contributions vs. voting records) for human investigation.
  • Brazil: Courts use AI for administrative tasks (case assignment, scheduling), improving throughput—though easier filing has also led to more cases.
  • Switzerland / ETH Zurich: Example of a publicly funded core model built outside major corporations, competitive with the previous year’s best models and freely available.
  • Non‑chat high‑impact examples: AI for protein folding (research breakthroughs) and AI‑assisted ER documentation, which can produce better post‑event records than human scribes in chaotic scenarios.

Risks and challenges

  • Structural/incentive issues:
    • Corporate choices can make models overconfident, obsequious, or otherwise dangerous; these are design and policy choices, not technical inevitabilities.
    • Money equates to influence—existing inequalities will be amplified by AI unless political systems change.
  • Employment disruption:
    • Potential for large‑scale displacement in skilled professions and the classic “junior work done by AI” problem that undermines training pipelines.
  • Access and coercion:
    • Many AI systems will be imposed on people (health insurers using AI for claims, search engines returning AI answers by default), reducing meaningful choice.
  • Political misuse:
    • AI can aid disinformation, astroturfing, and targeted persuasion—tools that already existed but become cheaper and more scalable with AI.
  • Technical failure modes:
    • Hallucinations, brittleness outside constrained domains, and risky emergent behaviors when models are not appropriately bounded or audited.

Recommendations & action items

For builders, engineers, and product teams

  • Exercise your market and moral leverage: push back on projects that weaponize AI against public interest; organize and raise collective ethical standards where possible.
  • Favor constrained, specialized models for high‑stakes domains—“narrow is safer and better.”
  • Design systems to be honest (e.g., “I don’t know” responses), auditable, and transparent about data sources and limitations.
  • Invest in the middleware: robust integration, access controls, and explicit interfaces to non‑AI systems (math/rules engines, databases) reduce hallucinations and risk.
  • Explore and contribute to public AI efforts, and prioritize privacy‑preserving on‑device deployments where appropriate.
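The middleware and honesty points above can be illustrated with a minimal sketch: a routing layer that answers from auditable sources (a curated database, a deterministic rules engine) before ever touching a model, and refuses rather than guesses when nothing matches. This is not code from the episode; all names (`FACTS`, `answer`, the example data) are hypothetical, and the model fallback is left as a stub.

```python
# Illustrative sketch only: a thin middleware layer that routes questions to
# deterministic, auditable sources first and says "I don't know" rather than
# guessing. All names and data here are hypothetical.
import re
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str  # "database", "rules", "model", or "refusal" (auditable provenance)

# Curated facts: a stand-in for a vetted government-services database.
FACTS = {
    "office hours": "Mon-Fri, 9:00-17:00",
    "filing deadline": "April 15",
}

def answer(question: str, model=None) -> Answer:
    q = question.lower()
    # 1. Keyword match against the curated store: no hallucination risk.
    for key, value in FACTS.items():
        if key in q:
            return Answer(value, "database")
    # 2. Arithmetic goes to real code (a rules engine), not a language model.
    m = re.fullmatch(r"\s*(\d+)\s*\+\s*(\d+)\s*", question)
    if m:
        return Answer(str(int(m.group(1)) + int(m.group(2))), "rules")
    # 3. Only then fall back to a model; with none wired in, refuse honestly.
    if model is not None:
        return Answer(model(question), "model")
    return Answer("I don't know.", "refusal")
```

Because every `Answer` carries its `source`, the system stays transparent about where each reply came from, which is the kind of audit trail the episode argues high‑stakes deployments need.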

For policymakers and civic actors

  • Support public‑interest models and infrastructure (government funding, academic partnerships) to reduce reliance on single corporate providers.
  • Ensure access and equity: fund retraining programs, rethink social safety nets (health coverage decoupled from employment), and consider broader reforms (UBI discussions) to handle displacement.
  • Regulate transparency and accountability for AI used in public services, elections, and high‑stakes decisions (auditable logs, oversight bodies).

For citizens and journalists

  • Use AI as a tool but maintain human oversight—especially in investigative journalism, where AI tip sheets can accelerate discovery but a human should validate.
  • Advocate for public models and local digital sovereignty to ensure regional languages, legal norms, and civic values are represented.
  • Be skeptical of AI claims; ask “compared to what?” and demand clarity on how AI decisions affect rights and materially alter outcomes.

Notable quotes & pithy conclusions

  • “AI is a power‑enhancing technology.”
  • “If you like democracy, AI will help you make democracy better. If you hate democracy, AI will help you make democracy worse.”
  • “AI doesn’t cause the problems; it exacerbates existing problems—and the solutions are often political rather than purely technical.”
  • Repeated reframing question: “Compared to what?”—useful when evaluating AI utility or risk.

Final framing for listeners (builders & practitioners)

  • The current moment is both dangerous and full of opportunity: builders have considerable influence and should act as a moral compass inside organizations.
  • Technical work matters (specialized models, middleware, privacy) but so do political choices—policy, funding, public infrastructure, and civic engagement will determine whether AI enhances or harms democratic life.
  • Prepare for disruptive social change: support retraining, new safety‑net thinking, and public models to prevent concentrated tech power from undermining democratic institutions.

Episode resources mentioned: PracticalAI.fm; Bruce Schneier’s book Rewiring Democracy (co‑author Nathan Sanders); examples of country projects (Japan, Germany, Brazil, Switzerland/ETH), and civil‑society tools like CalMatters.