Overview of Making Sense Podcast — #467 — EA, AI, and the End of Work
Sam Harris interviews philosopher Will MacAskill (author of Doing Good Better, now in a 10th‑anniversary edition). The conversation covers the current state of the effective altruism (EA) movement after recent controversies; canonical EA cause areas (global health, animal welfare, pandemic preparedness, and AI/existential risk); measurable impacts of effective giving; philosophical debates about expanding the moral circle to nonhuman animals and future digital minds; rapid advances in AI and their implications; and a critique of what EA's "effectiveness" metric may miss, especially high‑leverage political and cultural interventions.
Core topics discussed
State of the EA movement
- Recovery after the Sam Bankman‑Fried / FTX scandal; ideas and giving have continued to grow.
- Roughly $1.5–2 billion per year is now directed toward effective nonprofits, with both grassroots and major donors contributing.
- GiveWell‑style philanthropy and Giving What We Can membership are increasing.
Canonical EA cause areas Will highlights
- Global health and development: high cost‑effectiveness of interventions in low‑income countries.
- Animal welfare (factory farming): huge scale of suffering (billions of farmed animals), corporate campaigns (cage‑free pledges) have large impact per dollar.
- Pandemic preparedness / biosecurity: obvious cost‑effective measures (stockpiles, sterilizing infrastructure, wastewater surveillance); growing risk from democratized biotech and frequent lab leaks.
- AI and existential risk: much faster capability progress than many expected; the key risk is AI automating its own research (recursive improvement), producing large capability jumps.
Philosophical scope and moral expansion
- Defending intellectual space for taking non‑intuitive moral claims seriously (e.g., moral weight of invertebrates, future digital minds), while acknowledging public receptivity limits.
- Historical analogy to Quakers/abolitionists: many progressive moral ideas looked like “weird” positions before becoming mainstream.
Positive goods vs. negative mitigation
- Will emphasizes not only preventing suffering but also creating radically better positive outcomes (enhancing flourishing).
- Sam warns EA’s focus on measurable suffering reduction can miss high‑leverage, hard‑to‑quantify interventions that shape politics, narrative, and cooperation capacity.
Key data and illustrative numbers
Donations and pledges
- EA‑aligned giving approaching ~$2 billion/year (growth continuing despite scandals).
- Sam Harris’s podcast prompted ~1,200 10% pledges and ~$30 million in donations (example of pledge psychology and impact).
Cost‑effectiveness examples
- GiveWell‑supported charities: an estimated hundreds of thousands of lives saved at roughly $5,000 per life, an order‑of‑magnitude contrast with typical U.S. health interventions.
- Animal‑welfare corporate campaigns have cost tens of millions of dollars yet improved conditions for billions of chickens via cage‑free commitments; a 92% fulfillment rate is quoted for some pledges.
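The figures above invite a back‑of‑envelope check. A minimal sketch (illustrative arithmetic only, combining the ~$30 million in podcast‑prompted donations with the quoted ~$5,000‑per‑life GiveWell‑style estimate; real cost‑effectiveness estimates vary by charity and year):

```python
# Back-of-envelope cost-effectiveness arithmetic using figures quoted
# in the episode summary. Both inputs are rough, order-of-magnitude numbers.

COST_PER_LIFE_USD = 5_000            # quoted GiveWell-style estimate per life saved
PODCAST_DONATIONS_USD = 30_000_000   # donations attributed to the podcast's listeners

# Implied lives saved if all donations went to charities at this cost-effectiveness.
lives_saved = PODCAST_DONATIONS_USD / COST_PER_LIFE_USD
print(f"Implied lives saved: {lives_saved:,.0f}")  # -> Implied lives saved: 6,000
```

The point of the exercise is the order of magnitude, not the exact figure: at this cost‑effectiveness, even a single podcast audience's giving plausibly translates into thousands of lives.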
Pandemic risks
- Lab leaks have been historically frequent; biosecurity measures cost hundreds of millions to billions of dollars but offer high expected value by preventing catastrophic outbreaks.
AI progress
- Rapid, roughly exponential improvements in capability; milestones on certain benchmarks (math Olympiad problems, coding) were reached earlier than many experts predicted.
- Concern about automation of AI research as a tipping point for very fast capability gains.
Main takeaways
- EA’s core ideas retain momentum despite reputational setbacks; measurable giving and membership have grown.
- Some interventions (global health, top animal welfare campaigns, pandemic preparedness, AI safety) are highly cost‑effective and empirically tractable.
- There is value in preserving a philosophical “laboratory” where unusual ethical hypotheses (expanding moral concern to animals, digital minds) are rigorously explored.
- EA’s emphasis on quantifiable impact risks overlooking non‑quantifiable but high‑leverage interventions that shape political culture, media ecosystems, and cooperation—these opportunity costs can be huge.
- AI progress is faster and more predictable than many assumed; the ability of AI to accelerate AI research is the critical risk threshold to prioritize.
Notable quotes & paraphrased insights
- On EA’s resilience: “That was a huge hit, but the underlying ideas are very good…there’s been an enormous restoration of growth.”
- On measurable impact in global health: donations via GiveWell have saved hundreds of thousands of lives at roughly $5,000 per life.
- On factory farming: “Factory farming is one of the worst atrocities that humanity is committing today,” and corporate campaigns can shift billions of animals’ welfare.
- On AI: progress is “remarkably stable” and looks like exponential gains; automating AI research is a likely trigger for big leaps.
- On moral exploration: historical movements (Quakers, abolitionists) looked like “weird” moral deviants before their ideas became accepted—so intellectual rigor in “weird” ethics is important.
Actionable recommendations / what listeners who care about doing good might consider
If you’re persuaded by EA principles:
- Consider committing to an income pledge (e.g., Giving What We Can) to make donations predictable and psychologically salient.
- Direct donations toward high‑impact, evidence‑based global health charities (research GiveWell and similar evaluators).
- Support proven animal‑welfare interventions, including corporate campaigns that shift industry practices.
For those worried about catastrophic risks:
- Fund or advocate for pandemic preparedness and biosecurity initiatives (stockpiles, air sterilization, surveillance).
- Support research, policy, and organizations focused on AI safety and governance; prioritize work addressing the risks from rapid capability growth and recursive self‑improvement.
For funders and organizers worried about EA’s public appeal:
- Balance funding between quantifiable interventions and high‑leverage social/political projects that shape narratives, media influence, and civic cooperation—even if harder to quantify.
- Preserve intellectual space for careful moral inquiry while being mindful of public messaging and coalition‑building.
Scope note
This episode excerpt covers roughly the first half of the conversation (available publicly). The full interview (including a deeper dive into AI and related policy/ethical issues) is available on Sam Harris’s subscriber feed at samharris.org.
