
Long-termism: the other name for techno-fanaticism

By S.C, 08 October 2023

Long-termism is an ideology born from the encounter between transhumanism and effective altruism (“a school of thought that consists in trying to do good in the most rational and efficient way possible,” according to an investigation by the newspaper Le Monde[1]). Promoted by influential figures such as the philosopher Nick Bostrom, the astrophysicist Martin Rees and the billionaire entrepreneurs Peter Thiel and Elon Musk, long-termist ideology is beginning to permeate international institutions. We offer you this translation of a long article by the philosopher Emile P. Torres published in the magazine Aeon in 2022[2]. Torres for a time espoused long-termist ideas before breaking with the movement.

The points to remember:

  • Long-termists aim to emancipate humanity from its biological matrix and dream of seeing it “uploaded” into computer simulations sent into space.
  • Long-termists believe that population density is driving technological progress, so they want the human population to continue to grow on Earth.
  • For the long-termists, humanity — or the robots that will replace it — must absolutely leave Earth before the Sun's heat makes it uninhabitable (which is expected in 1 to 3 billion years...)
  • Long-termists believe that everything must be done at the political level to prevent a decrease in the technological power of industrial civilization
  • Long-termism serves to justify the continuation and acceleration of technological progress without any consideration for the ecological and human disasters that it is already causing and that it will certainly cause in the future.
  • For proponents of long-termism, genocides, mass wars, colonialism, the devastation of the biosphere, industrial accidents, the eradication of species, and climate change count for nothing morally when set against their moral imperative: the quest for power and glory.
  • For his part, the philosopher Emile P. Torres (author of the article) believes that only slowing or halting technological progress can prevent a global disaster in the coming decades.

Illustrative image: in the center, the billionaire Elon Musk; on the left, Toby Ord (top) and Nick Beckstead (bottom); on the right, Nick Bostrom (top) and William MacAskill (bottom). All play a central role in spreading the delusional ideas of long-termist philosophy.

Against long-termism (by Emile P. Torres)

It is increasingly recognized that the “end of time” is approaching. Worrisome predictions of future disasters are everywhere. Our social media feeds are full of videos showing giant forest fires, devastating floods, and hospitals overwhelmed by the number of COVID-19 patients. Extinction Rebellion activists are blocking roads in a desperate attempt to save the world. A survey even revealed that more than half of those questioned about the future of humanity “assessed the risk of our way of life ending in the next 100 years at 50% or more”[3].

The “myth of the Apocalypse”, i.e. the belief in the impending end of time [now presented as a science under the name of collapsology, TN], is of course not new. For thousands of years, people have been warning that the end is near. And many New Testament scholars think that Jesus himself expected the world to end during his lifetime. But the current situation is fundamentally different from that of the past. The “eschatological” scenarios currently being debated are not based on the revelations of religious prophets or on secular meta-narratives of human history (as in the case of Marxism), but on solid scientific conclusions defended by leading experts in fields such as climatology, ecology, and epidemiology.

For example, we know that climate change is a serious threat to civilization. We know that biodiversity loss and the sixth mass extinction could precipitate sudden, irreversible, and catastrophic changes in the global ecosystem. A thermonuclear conflict could spread a cloud hiding the Sun for years or even decades, causing the collapse of agriculture all over the world. And whether SARS-CoV-2 came out of a laboratory in Wuhan or was concocted by nature (the latter hypothesis seems more likely right now[4]), synthetic biology will soon allow malicious actors [states, groups or individuals, TN] to design pathogens far more deadly and contagious than anything Darwinian evolution could invent[5]. Some philosophers and scientists have also begun to sound the alarm about the “emerging threats” associated with artificial intelligence, nanotechnology, and geoengineering[6], threats that are just as daunting.

According to an article by Stephen Hawking in the Guardian in 2016, these considerations have led many researchers to recognize that “we have arrived at the most dangerous moment in the development of mankind.” For example, Lord Martin Rees estimates that civilization has only a one in two chance of surviving until 2100[7]. Noam Chomsky says that the risk of annihilation is currently “unprecedented in the history of Homo sapiens”[8]. And Max Tegmark says that “it is probably during our lives [...] that we will either self-destruct or get our act together.” In keeping with these grim statements, the Bulletin of the Atomic Scientists in 2020 set its iconic Doomsday Clock to 100 seconds before midnight (midnight meaning Apocalypse), the closest it has been since the clock was created in 1947[9]. In 2020, more than 11,000 scientists from around the world signed an article stating “clearly and unequivocally that planet Earth is facing a climate emergency”, and that without “a considerable increase in efforts to conserve our biosphere [we risk] enduring untold suffering as a result of the climate crisis”[10]. As the young climate activist Xiye Bastida summed it up in an interview with Teen Vogue in 2019, the aim is to “make sure we are not the last generation”, a scenario that now seems entirely plausible.

Given the unprecedented dangers that humanity faces today, one might expect philosophers to have spilled a great deal of ink over the ethical implications of our extinction, or of related scenarios such as the final collapse of civilization. How morally bad (or good) would our demise be, and for what reasons? Would it be wrong to prevent future generations from existing? Does the value of past sacrifices, struggles, and efforts depend on whether humanity continues to exist for as long as Earth — or more generally the Universe — remains habitable?

Despite this reality, the theme of our extinction has only recently caught the attention of philosophers, and it still remains on the fringes of philosophical discussion and debate today. Overall, thinkers have been preoccupied with other issues. There is, however, one notable exception to this rule. Over the past two decades, a small group of theorists mostly based in Oxford have worked to develop a new moral worldview called “long-termism.” This ideology focuses on how our actions affect the future of the universe in the very long term — thousands, millions, billions, even trillions of years from now. This vision originated in the work of Nick Bostrom, founder in 2005 of an institute with a grandiose name, the Future of Humanity Institute (FHI). The other father of long-termist thinking is Nick Beckstead, a research associate at the FHI and program manager at the Open Philanthropy foundation[11]. But it is the philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020), who has taken the most public positions in favor of this vision of the future. Long-termism is the main focus of research at the Global Priorities Institute (GPI), an organization linked to the FHI and led by Hilary Greaves. It is also a central subject of study for the Forethought Foundation led by William MacAskill, who also holds positions at the FHI and the GPI. To add to the jumble of titles, names, institutes, and acronyms, long-termism is one of the main “causes” of the so-called Effective Altruism (EA) movement. This movement was launched by Toby Ord around 2011 and now boasts an astonishing $46 billion in pledged funding[12].

It is hard to overstate the influence of long-termist ideology. In 1845, Karl Marx declared that the aim of philosophy was not simply to interpret the world but to change it. This is exactly what long-termism's advocates are working toward, and they are achieving extraordinary success. Elon Musk, who cites and endorses Bostrom's work, donated $1.5 million to the FHI through its sister organization with an even grander name, the Future of Life Institute (FLI). This organization was co-founded by the multi-millionaire tech entrepreneur Jaan Tallinn who, as I recently noted[13], does not believe that climate change represents an “existential risk” for humanity, because of his adherence to long-termist ideology.

Meanwhile, the libertarian billionaire and Donald Trump supporter Peter Thiel, who once delivered the keynote address at a conference on effective altruism, has donated large amounts of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply linked to long-termist values. Other organizations, such as the GPI and the Forethought Foundation, fund essay competitions and scholarships in order to attract young people to the community. And it is an open secret that the Center for Security and Emerging Technology (CSET) in Washington aims to place adherents of long-termism in senior positions in the American government in order to shape national policy. In fact, the CSET was created by Jason Matheny, a former research assistant at the FHI who is now a deputy assistant to US President Joe Biden on technology and national security issues. For his part, Ord has “advised the World Health Organization, the World Bank, the World Economic Forum, the American National Intelligence Council, the British Prime Minister's Office, the Cabinet Office and the Government Office for Science”, and he recently contributed to a report by the United Nations Secretary-General that specifically mentions “long-termism”[14]. This is surprising for a philosopher.

Outside the most elite universities and Silicon Valley, long-termism is probably one of the most influential and at the same time least-known ideologies. I think that needs to change. As a former adherent of long-termism who published an entire book in its defense[15], I have come to consider this worldview to be probably the most dangerous secular belief system in existence. But to understand the nature of the thing, we must first dissect it by examining its anatomical and physiological characteristics.

The first thing to note is that long-term thinking, as proposed by Bostrom and Beckstead, is not the same as “caring about the long term” or “valuing the well-being of future generations.” Long-termists go far beyond that. Their ideology is based on a simple — although imperfect in my opinion — analogy between individuals and humanity as a whole. To illustrate this idea, take the case of Frank Ramsey, an academic at the University of Cambridge, widely regarded by his peers as one of the greatest minds of his generation. “He shared similarities with Newton,” the writer Lytton Strachey once said. G E Moore talked about Ramsey's “exceptional abilities.” And John Maynard Keynes described a Ramsey paper as “one of the most remarkable contributions to mathematical economics ever made.”

But Ramsey met a sad fate. On January 19, 1930, he died in a London hospital following surgery. The probable cause was a liver infection contracted while swimming in the River Cam, which winds through the city of Cambridge. Ramsey was only 26.

One could argue that this outcome is tragic for two distinct reasons. The first is the most obvious: it cut Ramsey's life short, depriving him of everything he could have experienced had he survived — joy and happiness, love and friendship: everything that makes life worth living. In that sense, Ramsey's untimely demise was a tragedy. But, second, his death also deprived the world of an intellectual superstar apparently destined to make even more extraordinary contributions to human knowledge. “The number of leads Ramsey explored is remarkable,” writes Sir Partha Dasgupta. But how many other paths could he have opened up? “It's nerve-wracking to think about the loss your generation has suffered,” laments Strachey, “a big light has gone out.” This makes us wonder how the intellectual history of the West might have been different had Ramsey lived longer. From this perspective, while the first tragedy, Ramsey's death itself, is truly terrible, the loss of his immense potential to change the world makes the second tragedy even worse. In other words, the severity of his death derives mainly — or even essentially — from his squandered potential rather than from his personal suffering.

Long-termism's advocates transpose these claims and conclusions to humanity itself. It is a bit as if humanity were an individual with its own “potential” that it could choose to squander or exploit during its “lifetime.” So, on the one hand, a disaster that reduced the human population to zero would be tragic because of all the suffering it would inflict on the people alive at the time. Imagine the horror of starvation under a freezing climate and dark skies, a few years or decades after a thermonuclear war. That is the first tragedy, a tragedy that directly affects people. But according to the followers of long-termism, there is a second tragedy that is infinitely more serious than the first: our extinction would mean the final end of what could have been an extremely long and prosperous story stretching over the next ~10^100 years (at which point the “heat death” of the Universe will make life impossible). In doing so, extinction would irreversibly destroy humanity's “vast and glorious” long-term potential. In the almost religious language of Toby Ord, wasting such an enormous “potential”, considering the size of the Universe and the time left before reaching thermodynamic equilibrium, would make the first tragedy completely paltry compared to the second.

This immediately suggests another parallel between individuals and humanity: death is not the only way to misexploit a person's potential. Let's say Ramsey didn't die young but instead of studying, writing, and publishing scientific articles, he preferred to spend his days in the local bar playing pool and drinking. Same result, different failure mode. Applying this to humanity, proponents of long-termism would say that there are modes of failure that could leave our potential untapped without us dying out, which I will come back to later.

To summarize these ideas, humanity has a “potential” of its own, which transcends the potentials of each individual. Not realizing this potential would be a very bad thing — in fact, as we will see, it would amount to a moral disaster of literally cosmic proportions. This is the central dogma of long-termism: nothing is more important, from an ethical point of view, than the realization of our potential as a species belonging to “intelligent terrestrial life.” This dogma is so important that adherents of long-termism have even invented the frightening terms “existential risk” for any possibility of destroying that potential, and “existential disaster” for any event that would actually destroy that potential[16].

Why is this ideology so dangerous? The short answer is that elevating the realization of humanity's supposed potential above all else is likely to greatly increase the risk that real people — those living now and in the near future — will suffer extreme harm or even die. As I have noted elsewhere, the ideology of long-termism encourages its followers to adopt a carefree attitude towards climate change. Why? Because even if climate change led to the disappearance of island nations, triggered mass migrations, and killed millions of people, it would probably not compromise our long-term potential over the trillions of years to come. If we take a cosmic view of the situation, even a climate disaster that reduced the human population by 75% for the next two millennia would amount, in the grand scheme of things, to a small incident — the equivalent of a 90-year-old man having had a heart attack at the age of two.

Bostrom's argument is that “a non-existential disaster causing the collapse of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback”[17]. According to him, it would probably be a “gigantic massacre for humans”, but as long as humanity bounces back to realize its potential, it will ultimately be just a “small misstep for humanity.” Elsewhere, he writes that the most devastating natural disasters and the worst atrocities in history become almost imperceptible trivialities when set against the whole sweep of history. Referring to the two world wars, AIDS and the Chernobyl nuclear accident, he states:

“as tragic as these events are for the people directly affected, in the grand scheme of things [...] even the worst of these disasters are just ripples on the surface of the ocean of life.”

This way of looking at the world, of assessing the severity of AIDS and the Holocaust, implies that future (non-existential) disasters of the same magnitude and intensity should also be classified as “simple ripples.” If they do not pose a direct existential risk, we should not be overly concerned about them, even if they are tragic for individuals. As Bostrom wrote in 2003, “priorities number one, two, three, and four should... be to reduce existential risk”[18]. He reiterated this several years later, saying that we should not “waste” our limited resources on “philanthropic projects with suboptimal effectiveness”[19]. For him, since global poverty and animal suffering do not threaten our long-term potential, reducing them falls into that category. And our long-term potential must take priority over everything else.

Toby Ord echoes this view by saying that, of all the problems facing humanity, our “first big task... is to reach a safe place — a place where the existential risk is low and remains low.” He calls it “existential security.” The most important thing is to do whatever is necessary to “preserve” and “protect” our potential by “getting rid of immediate danger” and by designing strong “safeguards that will protect humanity from long-term dangers, so that failure becomes impossible.” Although Toby Ord is referring to climate change, he also claims — based on dubious methodology — that the probability of an existential disaster as a result of climate instability is only ~1 in 1,000. This, according to Ord, is two orders of magnitude less than the probability that superintelligent machines will destroy humanity this century.

The really remarkable thing here is that the central concern is not the effect of a climate disaster on people in the real world (remember: in the grand scheme of things, in Bostrom's words, this would be a “small misstep for humanity”). As Toby Ord puts it in The Precipice, the problem is that climate change “poses a risk of the irrecoverable collapse of civilization or even the complete extinction of humanity” (a risk that remains low, according to long-termists). Again, the harm caused to real populations (especially those in the South) can be significant in absolute terms, but compared to our “vast” and “glorious” long-term potential in the cosmos, these harms do not count for much.

But the implications of long-termist thinking are even more worrisome. If our top four priorities are to avoid an existential disaster — that is, to realize “our potential” — what might we be prepared to do to achieve this goal? Consider the philosopher Thomas Nagel's remarks on how the notion of “maximum well-being” has been used to “justify” certain atrocities (for example during war). If the end “justifies” the means, he says, and if the end is considered important enough (national security, for example), then this notion “can be used to ease the conscience of those responsible for a number of charred babies.” Now imagine what could be “justified” if the “maximum well-being” at stake were no longer national security, but the cosmic potential of intelligent life of terrestrial origin over the trillions of years to come. During the Second World War, 40 million civilians died. But compare that number to the 10^54 people or more (according to Bostrom's estimates[20]) who could come into existence if we manage to avoid an existential disaster. What should we be willing to do to “protect” and “preserve” this potential? To make sure that these unborn people can exist? What means could not be “justified” by such a moral end of cosmic importance?

Bostrom himself contended that we should seriously consider setting up a global and invasive surveillance system that would monitor every person on the planet in real time in order to amplify the “capabilities of preventive policing” (for example, to prevent terrorist attacks capable of devastating civilization). Elsewhere, he wrote that states should be prepared to resort to preventive violence or war to avoid existential catastrophes. Bostrom also argued that saving billions of actually existing people is morally equivalent to an utterly minuscule reduction in existential risk. According to him, even if there is “only a 1% chance” that 10^54 people will exist in the future, “the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.” Such fanaticism — a label claimed by some long-termists[21] — has led a growing number of critics to worry about what could happen if real-world political leaders took Bostrom seriously. To quote the statistician and mathematician Olle Häggström, who — perplexingly — tends to speak favorably of long-termist thinking:

“I am extremely worried that [the calculations above] could be recognized by politicians and decision makers as a political program to be taken at face value. This is too reminiscent of the old adage “you don't make an omelet without breaking eggs”, which is generally used to explain that a small genocide is a good thing if it is part of the creation of a future utopia. Let's imagine a situation where the head of the CIA explains to the American president that he has tangible evidence that somewhere in Germany, a madman is working on a weapon of mass destruction, that he intends to use it to annihilate humanity, and that this madman has a one in a million chance of succeeding. They have no further information about the identity of this madman or where he is. If the president follows Bostrom's recommendation to the letter, and if he understands how to do the math, he may conclude that it is worth carrying out a large-scale nuclear attack on Germany to kill the entire population of the country.”

So here are a few reasons why I find long-termist thinking deeply dangerous. But this worldview poses other fundamental problems that no one, as far as I know, has yet pointed out in writing. For example, there are good reasons to think that the underlying presuppositions of long-termism are themselves a source of the dangers humanity faces. In other words, long-termist thinking would be incompatible with the achievement of “existential security.” This means that the only way to really reduce the likelihood of extinction or collapse in the future is to abandon the ideology of long-termism altogether.

To understand this argument, we must first analyze what long-termism's advocates mean by our “long-term potential,” an expression I have used so far without defining it. The concept can be broken down into three main components: transhumanism, spatial expansionism, and a moral vision closely associated with what philosophers call “total utilitarianism.”

The first refers to the idea that we should use advanced technology to reshape our bodies and brains in order to create a “superior” race of radically enhanced posthumans (long-termism's proponents place this superior race in the “humanity” category, which can be confusing[22]). Bostrom may be the most prominent transhumanist today, but adherents of long-termism avoid using the term “transhumanism,” probably because of its negative associations. Susan Levin points out, for example, that contemporary transhumanism is the continuation of the Anglo-American eugenics movement[23]. Transhumanists such as Julian Savulescu, who co-edited the book Human Enhancement (2009) with Bostrom, explicitly advocate the consumption of “morality-stimulating” chemicals such as oxytocin in order to avoid existential disaster (which they call “ultimate harm”). As Savulescu writes with a colleague, “there is such an urgent need to improve humanity morally [...] that we should seek every possible way to achieve it.” Such claims are controversial and quite disturbing to a lot of people. This is why the adherents of long-termism distance themselves from these ideas while continuing to defend their ideology.

Transhumanism claims that there are various “modes of being posthuman” that are much better than our current mode of being human. For example, we could modify ourselves genetically to perfectly control our emotions, or access the Internet through neural implants, or maybe even transfer our minds to computers to achieve “digital immortality.” Toby Ord explores other possibilities in The Precipice. Imagine how impressive it would be to perceive the world by echolocation, like bats and dolphins, or by magnetoreception, like red foxes and homing pigeons. According to Toby Ord, such as-yet-unknown experiences exist in minds that are much less sophisticated than ours. He then asks: “What other experiences of possible immense value would become accessible to much greater minds?” Bostrom's most fantastical exploration of these possibilities is his evocative “Letter from Utopia” (2008)[24], which depicts a techno-utopian world populated by superintelligent posthumans inundated with so much “pleasure” that, as the fictional posthuman author of the letter writes, “we sprinkle our tea with it.”

What is the link with long-termist thinking? According to Bostrom and Ord, failing to become posthuman would prevent us from realizing our vast and glorious potential, which would be existentially catastrophic. Bostrom wrote in 2012 that “a definitive failure to transform human biological nature can in itself constitute an existential disaster”. Likewise, Ord says that “maintaining humanity as it exists today forever could also squander our heritage, giving up most of our potential.”

The second component of our potential — spatial expansionism — refers to the idea that we should colonize as much as possible of our future light cone, that is, the region of spacetime that is theoretically accessible to us. According to long-termism's advocates, our future light cone contains an enormous amount of exploitable resources, which they call our “cosmic endowment” of negentropy (or negative entropy). The Milky Way alone, writes Ord, is “150,000 light-years in diameter and includes over 100 billion stars, most of which have their own planets.” To realize the long-term potential of humanity, he continues, “it is enough for [us] to travel one day to a nearby star and gain a sufficient foothold there to create a thriving new society from which we can venture further afield.” By spreading “only six light years at a time”, our posthuman descendants could make “almost every star in our galaxy... accessible”, since “every star system, including our own, would only need to colonize the few nearest stars [for] the entire galaxy [to] eventually fill with life.” The process could become exponential, resulting in ever more “thriving” societies as our descendants jump from one star to the next.

But why would we want that? What is so important about flooding the universe with new posthuman civilizations? This brings us to the third component: total utilitarianism, which I will call “utilitarianism” for short. Although some proponents of long-termism insist that they are not utilitarians, it should be noted at the outset that this is essentially a diversion, a way of deflecting criticism away from their ideology and, more broadly, from effective altruism (EA); in reality their philosophy is nothing but utilitarianism repackaged in another form[25]. The fact is that the EA movement is deeply utilitarian, at least in practice. Before choosing a name, early members of the movement (including Ord) seriously considered calling it the “community of effective utilitarianism.”

That said, utilitarianism is an ethical theory which specifies that our only moral obligation is to maximize the total amount of “intrinsic value” in the world, calculated from a disembodied, impartial and cosmic standpoint called “the point of view of the Universe”. From this perspective, it does not matter how value — which hedonistic utilitarians equate with pleasure — is distributed among individuals in space and time. All that matters is the total net sum. For example, let's say there are 1 trillion people whose lives each have a value of “1”, meaning that their lives are barely worth living. This gives a total value of 1 trillion. Now consider an alternative world in which 1 billion people each have a life worth “999”, which means that their lives are extremely good. This gives a total value of 999 billion. Since 999 billion is less than 1 trillion, the first world, full of lives that are barely worth living, would be morally better than the second. Therefore, if a utilitarian were forced to choose between these two worlds, he would opt for the former. (This is known as the “repugnant conclusion”, which long-termists such as Ord, MacAskill, and Greaves have recently played down[26]. For them, the first world might really be better!)
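To make the arithmetic of this example explicit, here is the same comparison written out as the simple sum that the total view relies on (a sketch only: the symbols $W_A$, $W_B$ and $v_i$ are introduced here for illustration and do not appear in the texts discussed):

$$W \;=\; \sum_{i=1}^{N} v_i \qquad \text{(the total value of a world is the sum of each inhabitant's value)}$$

$$W_A = 10^{12} \times 1 = 1000 \text{ billion}, \qquad W_B = 10^{9} \times 999 = 999 \text{ billion}$$

$$W_A > W_B \;\Longrightarrow\; \text{the total view ranks the overcrowded world } A \text{ above world } B.$$

The ranking depends only on the totals, which is precisely why the distribution of value across individual lives drops out of the calculation.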

The underlying rationale is based on the idea that people — you and me — are nothing but means to an end. We do not matter in ourselves; we have no intrinsic value. Rather, people are regarded as “containers” of value. This means that we only matter insofar as we “contain” value, insofar as we contribute to the overall net amount of value in the universe between the Big Bang and the heat death. Since utilitarianism tells us to maximize value, it follows that the more people (value containers) exist with net positive quantities of value (pleasure), the better the Universe will be, morally speaking. In a nutshell: people exist in order to maximize value; value does not exist in order to benefit people.

That is why long-termists are obsessed with calculating how many people might exist in the future. If we colonize space and create vast computer simulations around the stars, an unfathomable number of people will be able to live positive lives in virtual-reality environments. I have already mentioned Bostrom's estimate of 10^54 future people, a figure that includes a great many of these “digital people.” But in his best-seller Superintelligence (2014), he put forward an even higher figure of 10^58 people, almost all of whom “would live rich and happy lives by interacting with each other in virtual environments.” Greaves and MacAskill are also excited about this prospect, estimating that some 10^45 conscious beings in computer simulations could exist in the Milky Way alone.

This is what our “vast and glorious” potential consists of: a massive number of technologically enhanced digital posthumans inside huge computer simulations spread across our future light cone. It is for this purpose that, in Häggström's scenario, a long-termist politician would annihilate Germany. It is in order to achieve this goal that we must not “waste” our resources on solving problems such as global poverty; that we should consider implementing a global surveillance system and preventive warfare; and that we should worry more about superintelligent machines than about protecting populations in the South from the devastating effects of climate change (caused mainly by the North). In fact, Beckstead has even argued that, in order to achieve this goal, we should prioritize the lives of people in rich countries over the lives of people in poor countries, since influencing the long-term future is of “paramount importance” and the former are more likely to influence the distant future than the latter. To quote a passage from Beckstead's 2013 doctoral thesis, which Ord praises as one of the most important contributions to the long-termist literature:

“Saving lives in poor countries can have far fewer ripple effects than saving and improving lives in rich countries. Why? Rich countries are much more innovative and their workers are much more economically productive. [Therefore] it now seems more plausible to me that saving a life in a rich country is much more important than saving a life in a poor country, all other things being equal.”

That is just the tip of the iceberg. Consider the implications of this conception of “our potential” for technological development and the creation of new risks. Since realizing our potential is humanity's ultimate moral goal, and since our descendants cannot become posthumans, colonize space, and create ~10^58 people in computer simulations without technologies far more advanced than those that exist today, failing to develop more technology would in itself be an existential disaster — a mode of failure (comparable to Ramsey neglecting his talents by spending his days playing pool and drinking), a trajectory that Bostrom calls “plateauing.” Indeed, Bostrom places this idea at the center of his canonical definition of “existential risk”: an existential risk is any future event that would prevent humanity from reaching and/or maintaining a state of “technological maturity”, that is, “the obtaining of capacities that allow a level of economic productivity and control over nature that is close to the maximum achievable”. Technological maturity is the keystone here, since controlling nature and increasing economic productivity to the absolute physical limits are ostensibly necessary to create the maximum amount of “value” in our future light cone.

But think about it for a moment: how did humanity[27] get bogged down in the current climate and ecological crisis? Behind the extraction and burning of fossil fuels, the decimation of ecosystems, and the extermination of species lies the idea that nature should be controlled, subjugated, exploited, defeated, plundered, transformed, reconfigured, and manipulated. As the technology theorist Langdon Winner writes in Autonomous Technology (1977), since the time of Francis Bacon our vision of technology has been “inextricably linked to a unique conception of how power is used — the style of absolute mastery, the despotic and one-sided control of the master over the slave.” He adds:

“We rarely express reservations about the legitimate role of man in the conquest, victory and subjugation of all that is natural. It is his power and his glory. What, in other contexts, would pass for rather rude and despicable intentions, is here the most honorable of virtues. Nature is the universal prey, man can manipulate it as he pleases[28].”

This is precisely what we find in Bostrom's account of existential risks and in the normative futurology associated with it: nature, the entire Universe, our “cosmic endowment”, is there to be plundered, manipulated, transformed, and converted into “value structures, such as sentient beings living valuable lives” in vast computer simulations[29]. Yet this Baconian and capitalist vision is one of the most fundamental causes of the unprecedented environmental crisis that now threatens to destroy vast regions of the biosphere, indigenous communities around the world, and perhaps even Western technological civilization itself. Although other proponents of long-termism are not as explicit as Bostrom, their conception of the natural world mirrors the utilitarian conception of humans: both are means to an abstract and impersonal end, nothing more. MacAskill and one of his colleagues write, for example, that the effective altruism movement — and therefore long-termism — is “tentatively welfarist[30]”, in the sense that its provisional conception of doing good concerns only the promotion of well-being and not, for example, the protection of biodiversity or the conservation of natural beauty for their own sake[31].

Equally worrisome is the long-termist demand that we create ever more powerful technologies, despite the well-known fact that these same technologies are the source of serious threats to the future of the human species. According to Ord, “without a serious effort to protect humanity, there is strong reason to believe that the risk will be greater in this century, and that it will increase with each century that technological progress continues.” Likewise, in 2012, Bostrom recognized that

“most existential risks in the foreseeable future consist of existential risks that are anthropogenic, that is, arising from human activity. In particular, many of the most important existential risks seem to result from technological innovations that may be developed in the future. These could radically increase our ability to manipulate the outside world or our own biology. As our powers expand, the magnitude of their potential consequences — intended or not, positive or negative — increases.”

According to this point of view, there is only one path forward: technological development, even if it is also the most dangerous one. But what sense does such reasoning make? If we want to maximize our chances of survival, we should oppose the development of dangerous new dual-use technologies. If more technology means more risk — as history makes clear and as technological projections confirm — then perhaps the only way to reach a state of “existential security” is to slow or halt technological development altogether.

But the proponents of long-termism offer another answer to this problem: the “value-neutrality thesis”. According to this thesis, technology is a morally neutral object, “a mere tool.” The NRA slogan[32] illustrates this idea perfectly: “Guns don't kill people, people kill people.” The consequences of technology, whether good or bad, beneficial or harmful, are entirely determined by its users and not by the artifacts themselves. Bostrom stated in 2002 that “we should not blame civilization or technology for imposing great existential risks.” According to him, “because of the way we have defined existential risks, a halt in the development of technological civilization would mean that we are victims of an existential disaster.”

Ord likewise states that “the problem is not so much an excess of technology as a lack of wisdom,” before quoting Carl Sagan's book Pale Blue Dot (1994): “Many of the dangers we face indeed come from science and technology, but more fundamentally from the fact that we have gained power without acquiring equivalent wisdom.” In other words, it is our fault for not being smarter, wiser, and more ethical. Long-termism's advocates think they can rectify this set of deficiencies by re-engineering our cognitive systems and our moral dispositions using technology. In their twisted philosophy, everything comes down to an engineering problem, and as a result every problem stems from a shortage rather than an excess of technology.

We are now beginning to understand how long-termist thinking could defeat its own purpose. Its “fanaticism” about realizing our long-term potential could lead people to overlook climate change (a non-existential risk) and to prioritize the rich over the poor. Perhaps it could even “justify” preventive massacres and other atrocities in the name of “the greatest cosmic good.” But that is not all. It also carries within it the very tendencies — Baconianism, capitalism, and value neutrality — that have brought humanity to the brink of self-destruction. Long-termism tells us to maximize economic productivity, our control over nature, our presence in the universe, the number of (technologically simulated) people who will exist in the future, the total amount of impersonal “value”, and so on. But to maximize, we need to develop increasingly powerful — and dangerous — technologies; not doing so would in itself be an existential disaster. Not to worry: technology is not responsible for making our situation worse, and the fact that most risks stem directly from technology is no reason to stop creating more. The problem, rather, lies with us, which simply means that we need to create even more technology to transform ourselves into cognitively and morally enhanced posthumans.

This is leading us straight to disaster. The creation of a new breed of “wise and responsible” posthumans is implausible; and if advanced technologies continue to be developed at the current pace, the onset of a global disaster is only a matter of time. Yes, we will need advanced technology if we want to flee Earth before it is sterilized by the Sun in about a billion years[33]. But one crucial fact escapes the followers of long-termism: technology is far more likely to cause our extinction before that distant event than to save us from it. If, like me, you value the survival and flourishing of humanity, you should care about the long term but reject the ideology of long-termism. It is dangerous and deceptive; it may well increase and reinforce the dangers that threaten every individual on Earth today.

Emile P. Torres


Footnote [1] — See this article from the newspaper Le Monde published in early 2023: https://www.lemonde.fr/idees/article/2023/03/17/gagner-plus-pour-donner-plus-l-altruisme-efficace-philanthropie-de-l-extreme_6165839_3232.html

Footnote [2] — https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

Footnote [3] — https://ro.uow.edu.au/cgi/viewcontent.cgi?article=1742&context=buspapers

Footnote [4] — In reality, the origin of the virus is still unknown, owing in particular to “the lack of transparency at the Institute of Virology in Wuhan, China, where the pandemic started”. See this article from the newspaper Les Echos published in the summer of 2023: https://www.lesechos.fr/monde/enjeux-internationaux/origine-du-covid-les-etats-unis-suspendent-les-subventions-de-linstitut-de-virologie-de-wuhan-1963241

Footnote [5] — https://docs.wixstatic.com/ugd/d9aaad_4d3e08f426904b8c8be516230722087a.pdf

Footnote [6] — https://docs.wixstatic.com/ugd/d9aaad_b2e7f0f56bec40a195e551dd3e8c878e.pdf

Footnote [7] — In fact, in his book Our Final Century?, Martin Rees believes that it is the human species, and not civilization, that could disappear. Civilization — the presence of a state, infrastructure, strong inequalities of power, cities — is a form of organization found in only a small minority of societies, if we consider the past and present diversity of human groups (TN).

Footnote [8] — https://global.ilmanifesto.it/chomsky-republicans-are-a-danger-to-the-human-species/

Footnote [9] — https://thebulletin.org/doomsday-clock/2020-doomsday-clock-statement/

Footnote [10] — https://hal.science/hal-02397151/document

Footnote [11] — Foundation whose main financial backer is none other than Dustin Moskovitz, one of the co-founders of Facebook.

Footnote [12] — https://80000hours.org/2021/07/effective-altruism-growing/

Footnote [13] — https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

Footnote [14] — https://forum.effectivealtruism.org/posts/Fwu2SLKeM5h5v95ww/major-un-report-discusses-existential-risk-and-future%23Context

Footnote [15] — Phil Torres, Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks, 2017.

Footnote [16] — https://nickbostrom.com/existential/risks

Footnote [17] — https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.304.7392&rep=rep1&type=pdf

Footnote [18] — https://nickbostrom.com/astronomical/waste

Footnote [19] — https://existential-risk.com/concept

Footnote [20] — https://existential-risk.com/faq.pdf

Footnote [21] — https://globalprioritiesinstitute.org/wp-content/uploads/Hayden-Wilkinson_In-defence-of-fanaticism.pdf

Footnote [22] — Some of transhumanism's enthusiasts openly assume its eugenic dimension by presenting humans who refuse enhancement — or who will not have access to it — as the “chimpanzees of the future.” See Hélène Tordjman, Green growth versus nature.

Footnote [23] — https://blog.oup.com/2021/01/playing-to-lose-transhumanism-autonomy-and-liberal-democracy-long-read/

Footnote [24] — https://nickbostrom.com/utopia

Footnote [25] — https://blog.apaonline.org/2021/03/29/is-effective-altruism-inherently-utilitarian/

Footnote [26] — https://www.cambridge.org/core/journals/utilitas/article/what-should-we-agree-on-about-the-repugnant-conclusion/EB52C686BAFEF490CE37043A0A3DD075

Footnote [27] — It should be noted here that it is not “humanity” but a certain type of society — the techno-industrial civilization created by the West — that is at the origin of the climate and ecological crisis. This civilization eradicated most of the other human cultures that still existed a few centuries ago in order to carry out its deadly work (TN).

Footnote [28] — https://mitpress.mit.edu/books/autonomous-technology

Footnote [29] — see Bostrom's essay Astronomical Waste, 2003

Footnote [30] — According to Wikipedia (in translation): “In ethics, welfarism is a theory that well-being, what is good for someone or what makes a life worth living, is the only thing that has intrinsic value. In its most general sense, it can be defined as a descriptive theory of what has value, but some philosophers also understand welfarism as a moral theory, according to which what one should do is ultimately determined by considerations of welfare. The right action, policy, or rule is the one that leads to maximum well-being. In this sense, welfarism is often considered to be a form of consequentialism and can take the form of utilitarianism.” (TN)

Footnote [31] — https://philarchive.org/archive/PUMEAv3

Footnote [32] — National Rifle Association, one of the main lobbies promoting firearms in the United States (TN).

Footnote [33] — It is a bit hard to see what this sentence is doing in the conclusion (TN).
