Sensemaking for a plural world

Perspective Map

Longtermism and Effective Altruism: What Each Position Is Protecting

March 2026

In November 2022, Sam Bankman-Fried's cryptocurrency exchange FTX collapsed, leaving an $8 billion hole where customer funds had been. Before the collapse, Bankman-Fried was the second-largest individual donor to the Democratic Party in the 2022 election cycle, an early staff member at the Centre for Effective Altruism, and one of the most visible figures in a philanthropic movement called effective altruism, which had become, in the years since philosopher Peter Singer's 1972 essay "Famine, Affluence, and Morality," a genuine intellectual and cultural force in how a certain segment of the wealthy and the highly educated thought about their obligations to other people.

Bankman-Fried had articulated, repeatedly and publicly, a philosophy called "earning to give": the idea that taking the highest-paying job available and donating a large fraction of the proceeds was more impactful than direct service work, because the money could be directed to interventions with proven, measurable impact at scale. He also articulated a version of expected-value reasoning under which normally prohibited actions could become permissible if the expected philanthropic impact were large enough. His defense at trial argued that he had acted in good faith rather than with intent to defraud. He was convicted on all seven counts of fraud and conspiracy in November 2023 and sentenced, in March 2024, to 25 years in prison.

The collapse was not what caused the debate over effective altruism and longtermism. That debate had been building for years, inside the movement and outside it. But FTX made it legible to a wider audience: here was a case where the philosophical apparatus of a movement — expected value, earning to give, the overriding of present constraints by appeal to future impact — had been used to rationalize behavior that ordinary moral intuition would have stopped immediately. The question the collapse raised was whether Bankman-Fried was an anomaly or an inevitable product of that apparatus. The answer depends entirely on what you think effective altruism and longtermism are protecting.

What the strong longtermist position is protecting

The argument that the number of people who could exist in the future so vastly exceeds the number of people alive today that reducing even small probabilities of civilizational collapse is among the most important moral tasks available to us — and that most of what contemporary philanthropy and politics treat as urgent is, in relative terms, a rounding error. This position is associated most closely with philosophers William MacAskill (author of What We Owe the Future, 2022), Toby Ord (author of The Precipice: Existential Risk and the Future of Humanity, 2020), and Nick Bostrom (author of Superintelligence, 2014), as well as organizations including 80,000 Hours, the Future of Humanity Institute at Oxford, and the Machine Intelligence Research Institute.

The core argument is a math problem. If humanity survives and spreads through the galaxy over the next billion years, the number of people who will ever live could plausibly be on the order of 10^23 or more. If you assign even minimal moral weight to each of those potential people — if their lives would be worth living, if they matter morally at all — then a 0.001% reduction in the probability of extinction today has an expected value that dwarfs the entire present-day global health burden by many orders of magnitude. Ord estimates the probability of existential catastrophe in the next century at roughly one in six — "Russian roulette odds," he writes. MacAskill argues that this framing, if taken seriously, should reshape how we think about careers, institutions, and political priorities.
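
A back-of-the-envelope version of that arithmetic, sketched in Python below. Every number is an illustrative assumption: the 10^23 population and the 0.001% reduction come from the argument itself, and a figure of roughly 60 million annual deaths worldwide stands in for the present-day mortality burden.

```python
# Sketch of the strong-longtermist expected-value argument.
# Every input is an illustrative assumption, not a settled estimate.

POTENTIAL_FUTURE_PEOPLE = 1e23   # people who could ever live if humanity spreads through the galaxy
RISK_REDUCTION = 0.001 / 100     # a 0.001% absolute cut in today's extinction probability

# Expected future lives preserved by that tiny reduction in extinction risk
expected_future_lives = POTENTIAL_FUTURE_PEOPLE * RISK_REDUCTION   # 1.0e+18

# Rough scale of one year of present-day deaths worldwide (~60 million)
PRESENT_ANNUAL_DEATHS = 6e7

ratio = expected_future_lives / PRESENT_ANNUAL_DEATHS
print(f"Expected future lives: {expected_future_lives:.1e}")   # 1.0e+18
print(f"Ratio to a year of global mortality: {ratio:.1e}")     # ~1.7e+10
```

On these inputs, the tiny risk reduction outweighs a full year of global mortality by roughly ten orders of magnitude; that ratio is the whole force of the strong longtermist case.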

The practical implications are specific: AI safety (preventing artificial general intelligence from pursuing goals incompatible with human flourishing), pandemic preparedness and biosecurity (preventing engineered pathogens capable of collapsing civilization), and nuclear risk reduction (lowering the probability of a civilization-ending exchange). These are the areas where longtermist funding has concentrated most heavily — through Open Philanthropy, through the FTX Foundation before its collapse, and through the direct career advice of 80,000 Hours, which for years ranked AI safety as the highest-leverage career path for people capable of working on it.

What this position is protecting: the moral seriousness of the very long run — the claim that our intuitive discounting of distant future people is not just psychologically explicable but philosophically indefensible. It is protecting the obligation to apply reason rigorously to the question of where moral concern should go, even when the answer is counterintuitive — even when it says that the child dying of malaria today matters less, in aggregate expected value terms, than reducing the probability that civilization collapses in 2150. The longtermist claim is not that present suffering doesn't matter. It is that our intuitive sense of scale is catastrophically miscalibrated, and that correcting that miscalibration is an act of moral seriousness, not callousness.

What the present-focused effective altruist position is protecting

The argument that effective giving requires rigorous measurement of actual impact — cost per life saved, dollars per quality-adjusted life year — and that the interventions with the strongest evidentiary base are overwhelmingly in global health and poverty reduction, not in speculative future-oriented programs whose impact cannot be measured. This position, closest to Peter Singer's original utilitarian case for effective giving in "Famine, Affluence, and Morality" and developed institutionally by GiveWell (founded in 2007), holds that the power of the EA framework lies precisely in its epistemic discipline: comparing interventions rigorously, demanding evidence, and directing resources to where they will do the most measurable good.

GiveWell's top charities — the Against Malaria Foundation, Helen Keller International's vitamin A supplementation program, the Malaria Consortium's seasonal malaria chemoprevention program — are recommended on the basis of randomized controlled trial evidence, independent replication, and cost-effectiveness modeling that has itself been subjected to repeated scrutiny and revision. The cost to prevent a death from malaria is in the range of $3,000–$5,000. That estimate is contestable in its details but grounded in a methodology that is transparent and replicable.
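
A simplified sketch of how an estimate of that shape is assembled, translating distribution costs and trial-measured effects into a cost per death averted. The parameter values below are illustrative placeholders chosen to land near the quoted range; GiveWell's actual models are far more granular.

```python
# Simplified cost-per-death-averted arithmetic in the style of a bed-net estimate.
# All parameter values are illustrative placeholders, not GiveWell's actual inputs.

cost_per_net_delivered = 5.00      # dollars per net, including distribution
people_covered_per_net = 1.8       # average people protected per net
baseline_mortality = 3.0 / 1000    # annual malaria deaths per person in the target region
effect_size = 0.15                 # fraction of those deaths averted under coverage (trial-measured)
years_effective = 2.0              # useful lifespan of a net

deaths_averted_per_net = (people_covered_per_net
                          * baseline_mortality
                          * effect_size
                          * years_effective)        # ~0.0016 deaths averted per net

cost_per_death_averted = cost_per_net_delivered / deaths_averted_per_net
print(f"Cost per death averted: ${cost_per_death_averted:,.0f}")   # ~$3,100
```

Every input in that chain is empirically auditable, which is exactly the property the present-focused position says the longtermist calculation lacks.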

The present-focused position's critique of longtermism is epistemological before it is political. Expected value calculations for existential risk multiply very small probabilities by very large numbers. Those small probabilities — 1-in-6 odds of existential catastrophe, 10% chance of transformative AI by 2030, 0.001% annual probability of civilization-ending pandemic — are not derived from frequentist data. They are expert judgments about unprecedented scenarios. The mathematics of expected value is exact; the inputs are guesses. A calculation that looks like a rigorous optimization is, at its base, a product of intuitions dressed in quantitative clothing. If your probability estimate for AI risk is half as high as someone else's, the entire framework produces a radically different conclusion — and there is no agreed method for resolving that disagreement empirically.
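
A small sketch of that input-sensitivity problem: the same expected-value machinery, fed catastrophe probabilities that different experts might each defend, returns answers spread across many orders of magnitude. All figures are illustrative.

```python
# How guessed probability inputs drive expected-value prioritization.
# All figures are illustrative, not anyone's published estimates.

FUTURE_POPULATION = 1e23        # assumed potential future people
BUDGET = 1e9                    # dollars to allocate
COST_PER_PRESENT_LIFE = 5_000   # dollars per death averted, the evidence-based option

present_lives_saved = BUDGET / COST_PER_PRESENT_LIFE   # 200,000 lives

def expected_future_lives(p_catastrophe: float, relative_risk_cut: float = 1e-6) -> float:
    """Expected future lives preserved if spending the budget cuts the
    catastrophe probability by the given fraction of its current value."""
    return FUTURE_POPULATION * p_catastrophe * relative_risk_cut

# Three rival expert "estimates" spanning five orders of magnitude
for p in (1 / 6, 1 / 600, 1 / 6_000_000):
    ev = expected_future_lives(p)
    print(f"p = {p:.1e}: {ev:.1e} expected future lives "
          f"({ev / present_lives_saved:.1e}x the evidence-based option)")
```

The outputs span six orders of magnitude, and nothing inside the framework can adjudicate which probability input was right.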

What this position is protecting: the epistemic discipline that makes effective altruism effective — the demand that impact be accountable, measurable, and subject to revision based on evidence. It is protecting the claim that present-day suffering is real and legible in ways that hypothetical future suffering is not, and that directing resources away from preventable deaths toward speculative future scenarios requires a burden of proof that longtermism has not met. It is also protecting the reputational proposition that a movement calling itself effective should be able to demonstrate that its interventions work.

What the structural and political critique is protecting

The argument that both effective altruism and longtermism share a political silence — a refusal to ask why the world is structured so unequally in the first place — and that this silence is not a neutral omission but a substantive political position that forecloses more transformative alternatives. This position, articulated by philosophers including Amia Srinivasan (in "Stop the Robot Apocalypse," her 2015 essay in the London Review of Books), development economists skeptical of aid effectiveness, and advocates for systemic political change, argues that the EA/longtermist framework treats poverty and catastrophic risk as natural phenomena to be managed rather than as products of ongoing political arrangements that the movement's participants help create and maintain.

Malaria kills roughly 600,000 people per year. It is a preventable, treatable disease. The question EA asks is: given that it kills 600,000 people, how can we most cost-effectively reduce that number? The structural critique asks a prior question: why does it still kill 600,000 people when the interventions that could prevent it are well understood and affordable? The answer to that question involves trade policy that undercuts African agricultural development, intellectual property regimes that price essential medicines out of reach, debt structures imposed by the IMF and World Bank that constrain health spending in low-income countries, and a history of colonial extraction that drained wealth from the very regions where malaria remains endemic. EA's framework is not designed to address any of those causes. It is designed to work within them.

The structural critique extends to longtermism: the people most concerned about existential risk are overwhelmingly located in wealthy Western universities and tech-adjacent institutions. Their estimates of what constitutes existential risk — misaligned AI, engineered pandemics — reflect specific anxieties shaped by their location in the global economy. Climate change, which disproportionately threatens the global poor, receives a fraction of the longtermist attention that AI safety does, despite resting on better-evidenced near-term probabilities and significantly clearer causal mechanisms. The claim that longtermism is politically neutral — that it is just following the math — obscures the extent to which the choice of probability inputs, the selection of scenarios worth modeling, and the definition of what counts as civilizational catastrophe are all political acts.

What this position is protecting: the claim that charity and structural change are not the same thing and cannot substitute for each other; the proposition that the political context in which philanthropic money is generated matters morally, not just the impact at the margin; and the idea that movements that optimize for impact within the existing political economy are, by that choice, endorsing the existing political economy. The structural critique is not primarily an attack on malaria nets. It is an argument that malaria nets and structural change are different projects, and that directing philanthropic energy to the former while leaving the latter unaddressed is a choice that deserves to be owned as a choice, not presented as a neutral optimization.

What the cultural and epistemic critique is protecting

The argument that the EA and longtermist movements have developed institutional pathologies — a community culture that valorizes consequentialist overrides of ordinary moral intuition, concentrates philanthropic power in a small unaccountable network, and produces reasoning patterns that have historically enabled serious harm — that are structural features of the movement rather than incidental failures of specific individuals. This position is associated with critics including philosopher Émile Torres (who, with computer scientist Timnit Gebru, coined the acronym TESCREAL — Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, Longtermism — as a taxonomy of overlapping movements), journalists including Gideon Lewis-Kraus (whose 2022 New Yorker profile of William MacAskill traced the movement's cultural genealogy), and former EA community members who have documented the movement's internal dynamics.

The cultural critique centers on what has been called "galaxy-brained" reasoning: the phenomenon where chains of seemingly valid logical steps lead to conclusions that should trigger moral alarm but instead get accepted because each individual step feels defensible. Bankman-Fried's reasoning about fraud was not irrational by the internal logic of expected value maximization — if you genuinely believed that the expected philanthropic impact of FTX's growth justified risk-taking with customer funds, the calculation was, in a narrow sense, coherent. The critique is not that consequentialism is always wrong; it is that any framework that allows sufficiently large expected future goods to override present deontological constraints is structurally vulnerable to producing conclusions that catastrophically fail the humans they claim to benefit. The evidence that the movement has this vulnerability is not only the FTX collapse; it includes documented cases of sexual harassment and abuse within EA organizations that were handled through in-group negotiation rather than external accountability, and a broader pattern of treating ordinary ethical norms as obstacles that sufficiently important work could override.

The concentration of power critique runs parallel. Open Philanthropy, the primary philanthropic vehicle associated with EA, directed roughly $750 million in grants in 2023, primarily toward AI safety, biosecurity, and global health. That money has reshaped entire academic fields — AI safety as a research area barely existed before EA funding made it possible to hire full-time researchers. The people making grant decisions are a small network connected by shared intellectual background, shared institutional affiliation, and often personal relationships. Longtermism worries explicitly about power concentration as an existential risk; it is, critics argue, less reflective about the power concentration it is itself creating. MacAskill's What We Owe the Future explicitly warns against any group, including longtermist groups, seizing permanent control over the future. Whether the movement's current grant-making architecture is consistent with that warning is a question its critics take more seriously than many of its members.

What this position is protecting: the idea that the internal culture and power structure of a philanthropic movement is itself a moral object, not just a delivery mechanism for impact; the claim that universalist philosophical frameworks can mask deeply parochial assumptions about whose future matters and who gets to model it; and the concern that a community whose epistemics are optimized for internal consistency rather than external accountability will tend, under pressure, to rationalize its own interests as the good of humanity. The cultural critique is not an argument that the questions EA and longtermism ask are unimportant. It is an argument that the way a movement pursues good things is itself a moral question, and that getting the answer wrong at scale produces damage that the movement's own frameworks will be poorly designed to notice.

What cuts across all four positions
  • The measurement-vs.-scope trade-off cannot be resolved empirically. Present-focused EA has strong evidentiary grounds but bounded scope — it can demonstrate impact on existing people but cannot account for people who don't yet exist. Longtermism has vast scope — if the probability estimates and population projections are right, the expected value calculations are decisive — but the inputs are not measurable in the way that malaria bed-net distribution costs are. This is not a gap that can be closed by more research. It reflects a categorical difference between empirical evidence about present effects and philosophical arguments about future population ethics. The two positions are not in the same evidentiary framework, which means arguments between them frequently talk past each other.
  • The galaxy-brain problem is a general problem in consequentialism, not specific to EA. The same reasoning structure that produces the longtermist conclusion — sufficiently large expected future goods can override present deontological constraints — produced Bankman-Fried's fraud rationalization and some effective altruists' willingness to accept professional misconduct in pursuit of important work, and has historically been used to justify colonial violence, technocratic authoritarianism, and revolutionary terror. The critique is not that consequentialism is always wrong; it is that any framework that systematically permits overriding common-sense moral intuitions in pursuit of calculated future good is structurally vulnerable to rationalization, and that the EA community has not developed adequate external checking mechanisms for this vulnerability. Deontological constraints — don't lie, don't steal, don't harm — function partly as safeguards against exactly this failure mode.
  • The structural critique and the EA/longtermist frameworks are not debating the same question. EA asks: given the world as it is, how can resources be directed to do the most measurable good? The structural critique asks: what political arrangements produced the world as it is, and what would it take to change them? These are different projects. They can coexist, and many people pursue both. But EA's framework cannot be used to answer the structural question, because its tools — cost-effectiveness measurement, counterfactual analysis, impact estimation — require taking the political-economic background conditions as fixed. The critique that this is a conservative political choice is most accurate if EA presents itself as a complete theory of how to do good, rather than one tool among many. Whether EA makes that presentation is a question that different observers answer differently.
  • The FTX collapse is both over-weighted and under-weighted in the debate. Over-weighted because Bankman-Fried's fraud does not demonstrate that effective altruism's core epistemics are wrong; many people have done straightforwardly harmful things while adhering to ethical frameworks that are not themselves invalid. Under-weighted because the question the collapse raised — whether EA's community culture actively enabled this outcome by suppressing external accountability and rewarding consequentialist overrides of ordinary moral norms — has not been answered satisfactorily. MacAskill's public statement at the time acknowledged being deceived; the harder question is whether the movement's practices made the deception easier to sustain than it should have been.
  • Longtermism's concern about power concentration applies to itself. One of longtermism's core premises is that a world "locked in" to a particular set of values by a sufficiently powerful actor would be catastrophic — including a world locked in to longtermist values by longtermist actors. MacAskill writes explicitly that longtermists should be among the most concerned about longtermists gaining disproportionate power. Whether the current architecture of EA philanthropy is consistent with this concern — given the degree to which Open Philanthropy has reshaped AI safety and biosecurity research according to a specific philosophical framework — is a question the movement's critics raise more consistently than its members answer.

See also

  • What is a life worth? — the framing essay for the counting problem underneath longtermism: how present lives, future lives, statistical lives, and institutionally invisible lives get weighed when a moral framework tries to reason at civilization scale.
  • AI Safety and Existential Risk — the longtermist position's primary strategic focus; the debate over whether artificial general intelligence poses a civilization-ending risk and what institutional responses are adequate
  • Global Health Governance — the debate over how global health decisions are made, who funds them, and whether the current architecture of global health institutions serves the populations most affected
  • Wealth Inequality — the background condition: the debate over whether philanthropy can address structural inequality or whether it is itself a product of structures that require political change
  • Corporate Governance and the Purpose of the Firm — the adjacent question about whether corporations have obligations beyond shareholder return; relevant to "earning to give" and the relationship between how money is made and what it is used for
  • Foreign Aid and Development — the empirical debate over whether financial transfers to low-income countries produce development or dependency; directly relevant to the evidentiary basis for present-focused EA's interventions
  • Progress and Declinism — the underlying cultural debate about whether humanity is on a trajectory toward flourishing or catastrophe; longtermism and progress studies share assumptions about the positive-sum potential of civilization that the declinist position contests
  • Bioweapons Governance — one of longtermism's three primary strategic domains; the debate over how to prevent engineered pandemics connects directly to how longtermist philanthropists have shaped biosecurity research

References and further reading

  • Peter Singer, "Famine, Affluence, and Morality," Philosophy & Public Affairs, Vol. 1, No. 3 (Spring 1972) — the foundational text; argues that geographic distance and personal causation are morally irrelevant; if you can prevent something bad from happening without sacrificing anything of comparable moral significance, you are obligated to do it; the philosophical root from which effective altruism grew
  • William MacAskill, What We Owe the Future (Basic Books, 2022) — the primary popular statement of strong longtermism; argues that future generations matter morally, that their numbers are vast, and that the most important thing we can do is prevent civilizational catastrophe and avoid locking in bad values; the book that brought longtermism to mainstream attention
  • Toby Ord, The Precipice: Existential Risk and the Future of Humanity (Hachette Books, 2020) — rigorous philosophical and empirical case for prioritizing existential risk reduction; estimates a one-in-six probability of civilizational catastrophe in the next century; most careful treatment of the probability estimates underlying longtermist calculations
  • Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014) — the foundational text for AI safety as a longtermist priority; argues that sufficiently capable artificial intelligence would pose an existential threat unless specifically designed to pursue human-compatible goals; launched the field that longtermist funding subsequently built
  • Peter Singer, The Life You Can Save: How to Do Your Part to End World Poverty (Random House, 2009) — the popular version of the present-focused EA case; argues for a specific giving norm (a percentage of income) and focuses on global health and poverty interventions with strong evidentiary support; closest statement of the present-focused position
  • Amia Srinivasan, "Stop the Robot Apocalypse," London Review of Books, Vol. 37, No. 18 (September 2015) — the most cited early critical essay; argues that effective altruism is structurally conservative, that its framework cannot address the political causes of poverty, and that its apparent neutrality is itself a political position; anticipated many later structural critiques
  • Gideon Lewis-Kraus, "The Reluctant Prophet of Effective Altruism," The New Yorker, August 8, 2022 — the most careful journalistic account of EA's intellectual and cultural history; traces the relationship between MacAskill, Bankman-Fried, and the movement's philosophical development in the years leading up to the FTX collapse
  • Émile Torres (then writing as Phil Torres), "The Dangerous Ideas of 'Longtermism' and 'Existential Risk,'" Current Affairs, July 2021; and "Against Longtermism," Aeon, October 2021 — the most sustained philosophical critique of longtermism; argues that it is both empirically unfounded and structurally prone to justifying present harm; Torres later coined the TESCREAL taxonomy with Timnit Gebru
  • Holden Karnofsky, "Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity," Open Philanthropy (2016) — the internal case for shifting EA focus from global health toward existential-risk-adjacent priorities; important document for understanding how the longtermist turn happened within EA institutions; Karnofsky co-founded GiveWell and later served as co-CEO of Open Philanthropy
  • GiveWell, "How We Work" (updated regularly) — the methodology document behind GiveWell's recommendations; the most detailed public articulation of the present-focused EA epistemics, including how cost-effectiveness estimates are built, what uncertainty is acknowledged, and how evidentiary standards are applied
  • Angus Deaton, The Great Escape: Health, Wealth, and the Origins of Inequality (Princeton University Press, 2013) — the structural critique of development aid from a Nobel economist; argues that external aid can undermine the institutional development and political accountability that produce sustainable prosperity; skeptical of the EA premise that well-directed external transfers can substitute for internal political development
  • Émile P. Torres, Human Extinction: A History of the Science and Ethics of Annihilation (Routledge, 2023) — extended historical and philosophical critique of existential risk thinking; examines how ideas about human extinction have been used politically across history; skeptical of the longtermist movement's ability to distinguish genuine x-risk reduction from ideology
  • Timnit Gebru, "Effective Altruism Is Pushing a Dangerous Brand of 'AI Safety,'" Wired, 2022 — critiques the longtermist-adjacent AI safety movement from an AI ethics perspective; argues that funding for speculative long-run AI risk crowds out research on present-day harms of deployed AI systems affecting marginalized communities
  • William MacAskill, Krister Bykvist, and Toby Ord, Moral Uncertainty (Oxford University Press, 2020) — the philosophical treatment of how to make decisions under moral uncertainty; relevant to the debate over whether longtermism's probability estimates are epistemically legitimate and how much weight to give speculative frameworks