Everybody's talking about Sam Bankman-Fried, effective altruism (EA), and the ideology known as "longtermism" that many effective altruists, including Bankman-Fried, accept. The tidal wave of bad press triggered by the catastrophic collapse of Bankman-Fried's cryptocurrency exchange platform FTX comes at the worst time for the longtermist community: William MacAskill, the poster boy of longtermism and a moral "adviser" to Bankman-Fried, went on a media blitz after his book "What We Owe the Future" came out last summer, even making an appearance on "The Daily Show." The reputational damage to longtermism caused by recent events has been significant, and it's unclear whether the movement, which had become immensely powerful over the past few years, can bounce back.
Critics of longtermism, like myself, saw this coming from miles away. Not, specifically, the implosion of Bankman-Fried's empire, but something very bad — something that would cause serious harm to real people — in the name of longtermism. For years, I have been warning that longtermism could "justify" actions much worse than fraud, which Bankman-Fried appears to have committed in his effort to "get filthy rich, for charity's sake." Even some within or adjacent to the longtermist community have noted the ideology's potential dangers, yet none of the community's leaders have taken such warnings seriously. To the contrary, critics have been habitually dismissed as attacking a "straw man" or as putting forward their critiques in "bad faith." One hopes the FTX debacle will prompt some serious reflection on why, and how, the longtermist ideology is playing with fire.
It's useful to distinguish, right off the bat, between "moderate" and "radical" longtermism. Moderate longtermism is what MacAskill defends in his book, while radical longtermism is what one finds in all the founding documents of the ideology, including multiple papers by Nick Bostrom and the PhD dissertation of Nick Beckstead. The latter is also what MacAskill claims he's most "sympathetic" with, and thinks is "probably right." Why, then, does MacAskill's book focus on the moderate version? As a previous Salon article of mine explains in detail, the answer is quite simply marketing. Radical longtermism is such an implausible view that trying to persuade the public that it's true would be a losing game. The marketing strategy was thus to present it in more moderate form, which Alexander Zaitchik of the New Republic aptly describes as a "gateway drug" to the more radical position.
If taken literally by those in power, radical longtermism could be profoundly dangerous. The reason — and this is something that every politician and journalist needs to understand — is that it combines what can only be described as a techno-utopian vision of the future, in which humanity creates astronomical amounts of value by colonizing space and simulating vast numbers of digital people, with a broadly utilitarian mode of moral reasoning. Over and over again throughout history, the combination of these two ingredients — utopianism and the belief that ends justify the means — has been disastrous. As Steven Pinker, who appears somewhat aligned with effective altruism (MacAskill even gave a guest lecture on longtermism in one of Pinker's classes), writes in "The Better Angels of Our Nature":
Utopian ideologies invite genocide for two reasons. One is that they set up a pernicious utilitarian calculus. In a utopia, everyone is happy forever, so its moral value is infinite. Most of us agree that it is ethically permissible to divert a runaway trolley that threatens to kill five people onto a side track where it would kill only one. But suppose it were a hundred million lives one could save by diverting the trolley, or a billion, or — projecting into the indefinite future — infinitely many. How many people would it be permissible to sacrifice to attain that infinite good? A few million can seem like a pretty good bargain.
Is longtermism really utopian? Yes, in a couple of senses. On the one hand, many of its foundational texts explicitly imagine a future in which our descendants use advanced technologies to radically enhance themselves, thus creating a superior race of "posthumans." Such beings may be immortal, superintelligent and have perfect control over their emotions. An example comes from Bostrom's "Letter from Utopia," in which he writes, pretending to be a posthuman from the future: "How can I tell you about Utopia and not leave you mystified? With what words could I convey the wonder? My pen, I fear, is as unequal to the task as if I had tried to use it against a charging war elephant." From there, the "letter" takes readers through a phantasmagoria of wonders, describing our posthuman progeny as living in "surpassing bliss and delight."
Other leading longtermists like Toby Ord share this general vision. In his 2020 book "The Precipice," Ord waxes poetic about how reengineering the human organism could enable us to transform "existing human capacities — empathy, intelligence, memory, concentration, imagination." It could even augment our sensorium by adding new modalities like echolocation and magnetoreception. "Such uncharted experiences exist in minds much less sophisticated than our own," he declares. "What experiences, possibly of immense value, could be accessible, then, to minds much greater?" Furthermore, reengineering human beings isn't just something we might consider doing in the future — it may be integral to fully realizing our "vast and glorious" longterm "potential" in the universe. In his words: "Rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today." Similarly, he declares that "forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential."
MacAskill also touches on this idea in "What We Owe the Future," writing that "eutopia," which translates as "good place," is "a future that, with enough patience and wisdom, our descendants could actually build — if we pave the way for them." While he doesn't claim that "a wonderful future is likely," he does contend that it's "not just a fantasy, either."
On the other hand, radical longtermists imagine our descendants colonizing space and creating huge computer simulations in which trillions upon trillions of digital people "live rich and happy lives while interacting with one another in virtual environments," quoting Bostrom. Why would these people be happy? No one explains. Maybe they'll have access to digital Zoloft. In the longtermist view, the more "happy" people who exist in the future, the greater the amount of "value," and the more value, the better the universe will become. This is why longtermists are obsessed with calculating how large the posthuman population could be. For example, Bostrom estimates some 10⁵⁸ digital people in the future — that's a 1 followed by 58 zeros — while MacAskill and his longtermist colleague Hilary Greaves note that there could be 10⁴⁵ in the Milky Way galaxy alone. It follows that there could be literally astronomical amounts of value in the far future — amounts that utterly dwarf the value humanity has so far created, or that exists right now.
So what lies ahead, if we play our cards right, is a techno-utopian world among the heavens full of unfathomable quantities of goodness. The stakes are thus absolutely enormous. This is where the idea of "existential risk" enters the picture. Longtermists define this as, essentially, any event that would prevent us from realizing this glorious future, and this is why radical longtermism implies that the single most important task for humanity is reducing existential risk. As Bostrom writes, "for standard utilitarians, priority number one, two, three, and four should … be to reduce existential risk," the fifth being to colonize space. Here it's worth noting that the EA community is overwhelmingly utilitarian — at least 70 percent or so — and its leading luminaries, such as MacAskill and Ord, "describe themselves as having more credence in utilitarianism than any other positive moral view." Bankman-Fried is also a utilitarian who, in his own words, aimed "to maximize every cent I can and aggregate net happiness in the world."
Having outlined the longtermist ideology, we can see that its danger is twofold: first, it leads adherents to ignore, neglect and minimize current-day suffering. If a problem doesn't pose an existential risk, then it shouldn't be one of our top four (or five) global priorities. Second, it could end up justifying, in the eyes of true believers, harmful actions for the sake of the greater cosmic good — namely, creating a multi-galactic civilization full of 10⁵⁸ posthumans in vast computer simulations spread throughout the universe. Let's consider these in turn.
In one of the foundational texts of longtermism, published in 2013, Bostrom writes that
unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy.
What are these "feel-good projects" that we must not "fritter away" our resources on? As Peter Singer, who considers himself an effective altruist, notes, this would include charitable causes like "donating to help the global poor" and reducing "animal suffering." This was, in fact, made explicit by Greaves in an interview on the longtermist philosophy. Quoting her in full:
There's a clear case for transferring resources from the affluent Western world to the global poor. But longtermist lines of thought suggest that something else would be better still. There are a lot of candidates for potentially very high value longtermist interventions. … The most clear-cut one, I think, is reducing risks of premature human extinction. … Even if we can do anything that reduces the probability of premature human extinction by a tiny amount, in expected value terms, that is, when you average across your uncertainty, the contribution of that kind of intervention could be massive — much greater, even, than the best things we can do in the area of global poverty.
What does Greaves mean by "premature extinction"? This refers to any extinction event that occurs before we've created Utopia and flooded the universe with "value." On the longtermist view, then, we shouldn't spend money on global poverty: Those resources should instead go to ensuring that we realize our "longterm potential." After all, when one takes a truly cosmic perspective on our place in a universe that could remain habitable for literally trillions of years to come, all the suffering caused by global poverty becomes virtually imperceptible.
This is why Bostrom writes the following about the worst horrors in human history, including the two world wars: "Tragic as such events are to the people immediately affected, in the big picture of things — from the perspective of humankind as a whole — even the worst of these catastrophes are mere ripples on the surface of the great sea of life." Why? Because "they haven't significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species." Elsewhere, he describes "an all-out nuclear war between Russia and the United States" — which studies show could literally kill more than 5 billion people — as "a giant massacre for man, a small misstep for mankind," so long as it doesn't cause our complete extinction.
It's this perspective — the cosmic vantage point — that yields a profoundly callous view of current-day suffering. Sure, global poverty is bad, but it's not going to prevent our descendants from becoming radically enhanced posthumans, colonizing space and simulating enormous numbers of digital people, which is what really matters. Or consider climate change, which the longtermist Jaan Tallinn, who co-founded the Future of Life Institute, says "is not going to be an existential risk unless there's a runaway scenario." A runaway scenario, which would cause our extinction, is very unlikely.
The clear implication of Tallinn's statement is that we shouldn't be too concerned about non-runaway climate change. Of course it will cause profound harm, especially to people in the global South, but in the grand scheme of things the suffering of such people will amount to nothing more than "mere ripples." John Halstead, another longtermist, echoes this idea in an unpublished document that's been highly influential among longtermists. "It's hard to come up with ways in which climate change could be a direct ex risk," he concludes ("ex risk" is short for "existential risk"). So let's not get too worked up about it: There are bigger fish to fry. As I have noted elsewhere, it is impossible to read the longtermist literature and not come away with a rosy picture of the climate crisis.
The longtermist perspective on humanity's vast future is also what leads Nick Beckstead to the astonishing claim that we should prioritize saving the lives of people in rich countries over saving the lives of those in poor countries. Because this conclusion is so shocking, I will quote the passage in full:
[S]aving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards — at least by ordinary enlightened humanitarian standards — saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
As this shows, one of the dangers of radical longtermism is that it inclines advocates to ignore the cries of the most disadvantaged people. Their suffering is bad, for sure, but the failure to create a much, much bigger amount of value in the very far future would be orders of magnitude worse. To make this more concrete, consider that there are 1.3 billion people today in multidimensional poverty. Now compare this with the 10⁵⁸ digital people who could exist in the future. For longtermists, could exist implies should exist, assuming that such lives would be better than miserable. If you crunch the numbers, the better thing to do would be to focus on all these future people, not those struggling to survive today. What matters most, Beckstead argues, is that we focus on the trajectory of civilization over "the coming millions, billions, and trillions of years."
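To make that number-crunching explicit, here is a minimal sketch of the kind of expected-value comparison the longtermist literature invites. The 1.3 billion and 10⁵⁸ figures are the ones cited above; the tiny probability increment is a purely illustrative assumption of mine, not a number any longtermist has published.

```python
# Illustrative only: the population figures come from the sources quoted above,
# while the probability increment is a hypothetical assumption for this sketch.

people_in_poverty_today = 1.3e9   # people in multidimensional poverty right now
possible_future_people = 1e58     # Bostrom's estimate of future digital people
probability_increment = 1e-30     # hypothetical tiny boost to the odds that future is realized

expected_future_lives = probability_increment * possible_future_people
print(f"Expected future lives 'gained': {expected_future_lives:.1e}")   # 1.0e+28
print(f"People in poverty today:        {people_in_poverty_today:.1e}")  # 1.3e+09

# Even with an absurdly small probability increment, the far-future term exceeds
# the present-day term by roughly 19 orders of magnitude, which is why the
# calculus directs resources away from the global poor.
```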
The second danger is that true believers in radical longtermism could be inclined to commit harms for the sake of creating a techno-utopian paradise among the stars full of astronomical value. What exactly is off the table given longtermism's vision of the future? What means cannot be justified by such a "vast and glorious" end? Bostrom himself has written that we should keep preemptive violence as a "last-resort" option to neutralize threats that could prevent a posthuman civilization from existing. He has also toyed with the idea of implementing a mass global surveillance system to monitor the actions of everyone on the planet, in order to prevent "civilizational destruction," which could pose an existential risk. This was published in a journal specifically for policymakers. Or consider a scenario outlined by Olle Häggström, who is generally sympathetic to longtermism. Referencing Bostrom's claim that minuscule reductions in existential risk are morally equivalent to literally billions of actual human lives, Häggström writes:
I feel extremely uneasy about the prospect that [Bostrom's claim] might become recognised among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying "If you want to make an omelette, you must be willing to break a few eggs," which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia. Imagine a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom's argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
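The arithmetic Häggström alludes to can be spelled out. This is a sketch under the thought experiment's own assumptions (a one-in-a-million chance of human extinction, Bostrom's 10⁵⁸ figure for future lives, and Germany's population of roughly 83 million); it is meant to show why he finds a literal reading of the calculus so alarming, not to endorse it.

```python
# The naive expected-value calculation behind the thought experiment above.
# All numbers are the scenario's own assumptions, not real policy inputs.

future_lives_at_stake = 1e58    # Bostrom's estimate of possible future (digital) lives
chance_of_doomsday = 1e-6       # the lunatic's one-in-a-million chance of success
german_population = 8.3e7       # roughly 83 million people killed by the strike

expected_future_lives_saved = chance_of_doomsday * future_lives_at_stake  # 1e52

print(f"Expected future lives saved: {expected_future_lives_saved:.0e}")  # 1e+52
print(f"Lives lost in the strike:    {german_population:.1e}")            # 8.3e+07
print(expected_future_lives_saved > german_population)  # True: on this logic, the strike "pays for itself"
```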
To my knowledge, there are no radical longtermists out there today actively calling for something like this. But as Singer explains in his critique of longtermism,
Marx's vision of communism as the goal of all human history provided Lenin and Stalin with a justification for their crimes, and the goal of a "Thousand-Year Reich" was, in the eyes of the Nazis, sufficient reason for exterminating or enslaving those deemed racially inferior. … I am not suggesting that any present exponents of the hinge of history idea would countenance atrocities. But then, Marx, too, never contemplated that a regime governing in his name would terrorize its people.
The crucial point here is that all the ingredients needed to "justify" atrocities are present in the longtermist ideology — indeed, they lie at its very core. These ideas and assertions, arguments and conclusions, are right there in the canonical longtermist literature. All that's missing is a situation in which extreme actions appear necessary to safeguard our posthuman future of astronomical value, and someone who, finding themselves in this situation, takes the core claims of the literature seriously. It is entirely possible that such a situation will arise in the future, and that such a person will find themselves in it, spellbound and driven by fantastical visions of Utopia among the stars.
This is why I've become increasingly alarmed by the clout and influence that longtermism has acquired over the past five years. Elon Musk calls it "a close match for my philosophy." A UN Dispatch article reports that "the foreign policy community in general and the United Nations in particular are beginning to embrace longtermism." The ideology is pervasive in the tech industry, motivating much of the research on how to create superintelligent computers that might someday replace us. It's the worldview that Bankman-Fried was passionate about, and which may have led him to believe that a little fraud — assuming he committed fraud, which, again, seems probable — might be OK, since it's for the greater cosmic good. In fact, Bankman-Fried told the New Yorker earlier this year that he was never interested in helping the global poor; his "all-in commitment" was to longtermism. He thus established the FTX Future Fund, which included MacAskill and Beckstead on its team, to support longtermist research projects.
These are the two primary reasons that radical longtermism is so worrisome: It minimizes all sub-existential problems facing humanity, and it could inspire acts of terror and violence in the name of the greater cosmic good. Singer nicely summarizes these dual concerns when he writes that
the dangers of treating extinction risk as humanity's overriding concern should be obvious. Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.
It is of paramount importance that journalists, politicians, policymakers, businesspeople and the voting public understand this worldview. Much of my article here has outlined the core ideas of longtermism using the movement's own words — this was, of course, on purpose, because critics are often dismissed as exaggerating what radical longtermists really think. We aren't exaggerating: The radical longtermist worldview really is this bizarre, philosophically dubious and potentially dangerous.
The collapse of FTX has caused immense harm to many people — some, apparently, have lost more than half their wealth — and seriously damaged the "brand" of longtermism. What surprises me most isn't that a cryptocurrency Ponzi scheme run by a utilitarian longtermist imploded, but that the first major blunder involving longtermism wasn't even worse. If this ignominious debacle doesn't take the longtermist ideology down entirely, it should at least provoke an extended reflection by the movement's leaders on whether they should have listened to its critics long ago.