
Transhumanist Nick Bostrom wants a global “high-tech panopticon”

By S.C, 13 June 2023

Experts in "existential risk" such as the transhumanist Nick Bostrom propose to transform the entire planet into a "high-tech panopticon" in order to increase the resilience of the techno-industrial system. Crushing freedom in the name of Progress: we know that tune. A great classic of the industrial age.

Transhumanism, Long-Termism, and Effective Altruism

To find out more about this project and about Nick Bostrom, we have translated below an article published in February 2021 in the magazine Aeon by the Swedish philosopher Nick Bostrom[1]. This text is a summary of a longer study published in the journal Global Policy[2]. Bostrom is the co-founder of the World Transhumanist Association (now Humanity+), an NGO created to promote the ethical use of new technologies to improve human existence; he is also director of the Future of Humanity Institute at the University of Oxford, which is a member of the Partnership on AI initiative launched by Google, Amazon, Facebook, IBM and Microsoft. This alliance also includes UNICEF, the United Nations, the very influential think tank Chatham House, other major companies (Apple, Samsung, Intel), media outlets (BBC, CBC/Radio-Canada, The New York Times) and OpenAI, the artificial intelligence research laboratory co-founded by Elon Musk[3].

In 2003, Bostrom published in Philosophical Quarterly a paper entitled "Are You Living in a Computer Simulation?", in which he suggests that the members of an advanced civilization with enormous computing power might decide to simulate their ancestors, mainly for entertainment[4]. In an interview with the newspaper Les Échos, he states that "the ultimate goal [of artificial intelligence] must be the total disappearance of [human] work"[5] (which amounts to the obsolescence of the human being from the point of view of the technological system). Bostrom is also the author of the best-selling book Superintelligence (2014), in which he describes how a so-called "general" or "strong" artificial intelligence could eradicate the human race.

Nick Bostrom, one of the most influential transhumanists[6], does not, however, consider that technological progress should be halted now, before such a disaster occurs. On the contrary, Bostrom supports an ideology called "long-termism," itself a product of the "effective altruism" movement, an extreme form of utilitarianism. Bostrom considers, for example, that the worst events of the modern era, the two world wars, the AIDS pandemic and the Chernobyl accident, are mere accidents in humanity's progression towards its mechanical destiny.

"As tragic as these events are for those immediately affected, by and large... even the worst of these disasters are just waves on the surface of the great sea of life."[7]

Indeed, if in the distant future there could be trillions of humans living in vast computer simulations powered by nanotechnological systems designed to capture all or most of the energy output of stars, then it is a profitable trade to massacre millions of people today and/or let them die[8].

We will discuss this long-term ideology in more detail in a future article.

The following text helps to understand why algorithmic video surveillance, the remote activation of phone cameras and microphones[9], and Elon Musk's brain implant[10] constitute essential steps for ensuring the durability of the techno-industrial system.

Technology hasn't destroyed humanity—yet (by Nick Bostrom)

We can think of human creativity as a process of drawing balls from a giant urn. The balls represent ideas, discoveries and inventions. Throughout history, we have drawn a great many balls. Most have been beneficial to humanity [to the system, not to the human primate, translator's note]. The others have been various shades of gray: a mixture of good and bad things whose net benefit is difficult to estimate.

What we have not yet drawn from the urn is a black ball: a technology that systematically destroys the civilization that invented it. We have not been particularly careful or wise when it comes to innovation. We have just been lucky. But what if there is a black ball somewhere in the urn? If scientific and technological research continues, we will end up drawing it, and we will not be able to put it back. We can invent, but we cannot uninvent. So our current strategy amounts to hoping that the urn contains no black ball.
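To make the stakes of the metaphor concrete, here is a minimal toy simulation (our illustration, not Bostrom's; the probability value is invented): if each draw carries even a tiny independent chance of being the black ball, survival over many draws becomes vanishingly unlikely.

```python
# Toy model of the urn metaphor (our illustration; p is an invented value).
# If each technological "draw" has an independent probability p of being
# the black ball, the probability of surviving n draws is (1 - p)**n.

def survival_probability(p: float, n: int) -> float:
    """Probability that none of the first n draws is the black ball."""
    return (1.0 - p) ** n

for n in (10, 100, 1_000, 10_000):
    print(f"p=0.001, {n:>6} draws: survival = {survival_probability(0.001, n):.4f}")
# The survival probability tends toward zero: under this toy model,
# continued drawing makes hitting a black ball a near-certainty.
```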

Luckily for us, the most destructive human technology to date, atomic weapons, is extremely difficult to master [building a nuclear bomb is actually not that difficult, according to astrophysicist Martin Rees[11], TN]. But by thinking about the possible consequences of a black ball, one can imagine what would happen if nuclear technology were more accessible. In 1933, the physicist Leo Szilard conceived of the nuclear chain reaction. Subsequent research showed that manufacturing an atomic weapon would require several kilos of plutonium or highly enriched uranium, both very difficult and expensive to produce. But imagine another scenario, in which Szilard realizes that a nuclear bomb can be made by some fairly simple method, for example over the kitchen sink, using a piece of glass, a metal object and a battery.

Szilard would have faced a dilemma. Even if he said nothing about his discovery, he could not prevent other scientists from stumbling on the same result. And if he revealed his discovery, he would spread the dangerous knowledge. Imagine that Szilard confided in his friend Albert Einstein and that together they decided to write a letter to the President of the United States, Franklin D. Roosevelt, whose administration would then ban all research in nuclear physics outside of high-security government facilities. The reason for these drastic measures would become the subject of intense speculation. The scientific community would begin to wonder about this secret threat; some scientists would eventually unravel the mystery. Careless or disgruntled employees of government laboratories would leak information, and spies would carry the secret to foreign capitals. Even if, by some miracle, the secret was never disclosed, scientists in other countries working in the same field would eventually discover it [the technocritical thinker Jacques Ellul recalled in his 1954 book La Technique ou l'Enjeu du siècle (published in English as The Technological Society) that nuclear physics research stood at roughly the same level in 1939 in the United States, Nazi Germany, the USSR, Norway and France, TN].

Perhaps the U.S. government would eliminate all glass, metal and sources of electric current outside of a few highly guarded military depots. Such extreme measures would meet strong opposition. But once several major cities had been razed by atomic explosions, public opinion would resign itself to accepting the constraints. Glass, batteries and magnets could be seized and their production banned, but stray pieces would remain scattered across the globe and would end up in the hands of nihilists, extortionists, or curious people who simply wanted to "see what it feels like" to set off a nuclear weapon. In the end, many places on Earth would be destroyed or abandoned. Possession of the prohibited materials would have to be severely punished. Human communities would be subjected to strict surveillance: informant networks, security raids, indefinite detention. Civilization would have to be reconstituted, as best it could be, without electricity and without the other essentials deemed too risky.

That is the optimistic scenario. In a more pessimistic scenario, law and order would collapse entirely, and societies would split into factions waging nuclear wars. The mutual destruction would end only when the world had been ruined to the point where no new bombs could be made. Even then, the dangerous nuclear expertise would survive in memory and be passed down from generation to generation. If civilization rose from the ashes, the knowledge would lie in wait, ready to materialize as soon as people began producing glass, electric current and metal again. And even if the knowledge were forgotten, it would be rediscovered as soon as nuclear physics research resumed.

In short, we are fortunate that producing nuclear weapons is complex. This time we drew a gray ball from the urn. But with each new invention, humanity draws from the urn again [a more faithful image of scientific and technological development would be a game of Russian roulette in which an extra bullet is added to the cylinder on each turn, until it is full..., TN].

Suppose the urn of creativity contains at least one black ball. This is what we call the "vulnerable world hypothesis." The hypothesis is that there is some level of technology at which civilization will almost certainly be destroyed, unless extraordinary and historically unprecedented degrees of preventive policing and/or global governance are implemented. We do not claim that the hypothesis is true; we consider it an open question, although it would seem unreasonable, given the available evidence, to be confident that it is false. Rather, our aim is to show that the hypothesis is useful in bringing out important considerations about humanity's macrostrategic situation [incidentally, collapse brought on by a society's excessive technological complexity is a historical fact, not a hypothesis or an intuition, as researcher Luke Kemp recalled on BBC Future[12], TN].

The scenario described above, which we may call "easy nuclear bomb," illustrates one potential type of black ball: one that makes it easy for individuals or small groups to cause mass destruction. Given the diversity of human characters and conditions, there will always be some fraction of humans (the "apocalyptic residue") who would choose a reckless, immoral or self-destructive course of action, whether out of ideological hatred, destructive nihilism or revenge for perceived injustices, as part of an extortion plot, or out of sheer delusion. The existence of this apocalyptic residue means that any tool of mass destruction that is sufficiently easy to use is virtually certain to lead to the devastation of civilization. This is one of several possible types of black ball. A second type would be a technology that strongly incites powerful actors to cause mass destruction. Here again we can turn to the history of nuclear weapons: after the invention of the atomic bomb, an arms race began between the United States and the Soviet Union. The two countries amassed gigantic arsenals; by 1986, they together possessed more than 60,000 nuclear warheads, more than enough to devastate civilization.
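The force of the "apocalyptic residue" argument is easiest to see with back-of-the-envelope arithmetic (our illustration; the residue rates are invented): even a minuscule destructive fraction of a large population leaves many potential actors in absolute terms.

```python
# Back-of-the-envelope arithmetic for the "apocalyptic residue" (the
# residue fractions below are invented for illustration).

world_population = 8_000_000_000

for fraction in (1e-6, 1e-7, 1e-8):  # hypothetical residue rates
    actors = world_population * fraction
    print(f"residue fraction {fraction:.0e}: ~{actors:,.0f} potential actors")

# Even at 1 in 100 million, that leaves ~80 people worldwide; with an
# "easy nuclear bomb", a single one would suffice to devastate civilization.
```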

Fortunately, during the Cold War, the world's nuclear superpowers were never under strong pressure to trigger the nuclear apocalypse. However, the USSR and the United States were drawn into brinkmanship [a strategy that consists in pushing a dangerous situation to the brink in order to force the adversary to back down and obtain the most advantageous outcome for oneself[13], TN]. In a crisis, one may be tempted to launch the offensive first so as to avoid a potentially disarming strike by the opponent. Many political scientists believe that the development of more secure "second strike" capabilities by the two superpowers in the mid-1960s largely explains why nuclear holocaust was avoided during the Cold War. The ability of both countries' arsenals to survive a nuclear attack and retaliate reduced the incentive to strike first.

But now consider a counterfactual scenario, a "safe first strike," in which some technology makes it possible to destroy an adversary completely before they can respond. If such a "safe first strike" option existed, mutual fear alone could easily trigger all-out war. Even if neither power wanted the other destroyed, each might feel obliged to strike first in order to prevent the enemy, itself driven by the same fear, from striking first. We can make this scenario even worse by supposing that the weapons in question are easy to hide; it would then be impossible for the two parties to devise a reliable mutual verification system to reduce their arsenals, and their security dilemma would become insoluble.
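The logic of the "safe first strike" scenario is that of a one-shot game in which striking dominates waiting. A minimal sketch with hypothetical payoffs (ours, purely for illustration):

```python
# Toy payoff matrix for the "safe first strike" dilemma (the payoff
# values are hypothetical). Each state chooses WAIT or STRIKE; a
# successful first strike leaves the opponent unable to respond.

payoffs = {  # (row_payoff, col_payoff) for (row_choice, col_choice)
    ("WAIT",   "WAIT"):   (0, 0),      # uneasy peace
    ("WAIT",   "STRIKE"): (-10, 1),    # destroyed without reply
    ("STRIKE", "WAIT"):   (1, -10),    # disarm the enemy first
    ("STRIKE", "STRIKE"): (-8, -8),    # simultaneous ruin
}

def best_response(opponent_choice: str) -> str:
    """The row player's best reply to a fixed choice by the opponent."""
    return max(("WAIT", "STRIKE"),
               key=lambda c: payoffs[(c, opponent_choice)][0])

print(best_response("WAIT"))    # STRIKE (1 > 0)
print(best_response("STRIKE"))  # STRIKE (-8 > -10)
# STRIKE dominates: mutual fear pushes both sides toward war even
# though both prefer (WAIT, WAIT) to (STRIKE, STRIKE).
```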

Climate change may illustrate a third type of black ball; let us call this scenario "maximum global warming." In the real world, human-caused greenhouse gas emissions are likely to raise the average temperature by 3.0 to 4.5 degrees Celsius by 2100. But imagine that the Earth's climate sensitivity parameters were different, such that the same carbon emissions caused far greater warming than scientists currently expect, say a rise of 20 degrees. To make the scenario worse, imagine that fossil fuels were even more abundant, and clean energies more expensive and technologically harder to deploy, than they currently are [let us point out here that "clean" energy does not exist and that, more generally, cleanliness is yet another absurd concept tied to the artificial world produced by technological progress (nothing is "dirty" or "clean" in nature), TN].

Unlike the "safe first strike" scenario, in which a powerful actor faces strong incentives to take decisions with extremely destructive consequences, the "maximum global warming" scenario requires no such actor. All it takes is a very large number of individually insignificant actors: electricity consumers, drivers. Incentives push these agents to do things that each contribute only very slightly to a global phenomenon; cumulatively, these innumerable agents produce a problem that devastates civilization. In both types of scenario, incentives push a wide range of actors to pursue normal actions that are devastating to civilization.
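A rough calculation makes the point (our illustration; the figures are approximate, commonly cited orders of magnitude): each driver's contribution is imperceptible, while the sum is enormous.

```python
# Rough aggregation arithmetic (approximate, assumed figures): each
# driver's share is imperceptible, but the total is civilization-scale.

per_vehicle_tonnes_co2 = 4.6        # assumed annual emissions per vehicle
n_vehicles = 1_400_000_000          # assumed vehicles worldwide

total_tonnes = per_vehicle_tonnes_co2 * n_vehicles
share = 1 / n_vehicles

print(f"total: ~{total_tonnes / 1e9:.1f} billion tonnes of CO2 per year")
print(f"one driver's share of that total: {share:.1e}")
# One driver accounts for less than a ten-millionth of a percent of the
# aggregate, yet no individual incentive pushes anyone to stop.
```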

It would be bad news if the vulnerable world hypothesis were true. In principle, however, there are several responses that could preserve civilization from a technological black ball. One would be to stop drawing balls from the urn altogether, by ceasing all technological development. But this is hardly realistic; and even if it could be done, it would be extremely costly, to the point of constituting a catastrophe in its own right.

Another theoretically possible response would be to fundamentally reengineer human nature so as to eliminate the apocalyptic residue; in powerful actors, we might likewise eliminate any tendency to risk the annihilation of civilization, even when vital national security interests are at stake; and in the masses, the preference for self-interest when it imperceptibly damages the global common good. Such a global reengineering of human preferences seems very difficult to achieve, and it would come with risks of its own. Note also that partial success in such human reengineering would not necessarily yield a proportional reduction in civilization's vulnerability. Reducing the apocalyptic residue by 50%, for example, would not halve the risk of "easy nuclear bomb" scenarios, because in many cases a single isolated individual could devastate civilization on their own. The risk could therefore only be reduced significantly if the apocalyptic residue were almost entirely eliminated everywhere on the globe.
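One line of probability shows why halving the residue does not halve the risk (our illustration; the numbers are invented): with N independent potential attackers each succeeding with probability p, the chance of at least one devastation is 1 - (1-p)^N, which stays saturated near 1 even when N is halved.

```python
# Why halving the apocalyptic residue barely helps (invented numbers).
# With n independent actors each succeeding with probability p, the
# risk of at least one devastating act is 1 - (1 - p)**n.

def risk(n_actors: int, p_success: float) -> float:
    """P(at least one devastating act) among n independent actors."""
    return 1.0 - (1.0 - p_success) ** n_actors

p = 0.01  # hypothetical per-actor success probability
for n in (1_000, 500):
    print(f"{n:>5} actors: risk = {risk(n, p):.4f}")

# 1000 actors: risk ~ 1.0000; 500 actors: risk ~ 0.9934.
# Halving the residue leaves near-certain devastation near-certain.
```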

Two options therefore remain for securing the world against the possibility that the urn contains a black ball: on the one hand, extremely reliable policing, capable of preventing any individual or small group from carrying out highly dangerous illegal actions; on the other, robust global governance, capable of solving the gravest problems of interstate conflict and of ensuring solid cooperation between states, even when they have strong incentives to break agreements or to refuse to sign them in the first place. The governance gaps these measures address are the two Achilles heels of the contemporary world order. As long as they persist, civilization will remain vulnerable to a technological black ball. Moreover, it is easy to underestimate how exposed we are until the harmful scientific discovery has actually been made.

Let us look at what would be needed to protect ourselves against these vulnerabilities.

Imagine that the world is in a scenario similar to the "easy nuclear bomb." Suppose someone discovers a very simple way to cause mass destruction, that information about the discovery spreads, that the materials are available everywhere, and that it is impossible to remove them from circulation quickly. To avoid devastation, states would be forced to monitor their citizens closely enough to intercept anyone involved in preparations for a large-scale terrorist attack. If the black-ball technology is destructive and easy enough to use, letting even one person slip through the surveillance net would be a completely unacceptable risk.

To get an idea of what a truly extreme level of surveillance could look like, consider the image of a "high-tech panopticon." Every citizen would be fitted with a "freedom badge" (the Orwellian connotations are of course intentional, to keep us mindful of the full range of ways such a system could be used). The freedom badge would be worn around the neck and fitted with multidirectional cameras and microphones. It would continuously send encrypted video and audio to computers that interpret the stream in real time. If signs of suspicious activity were detected, the stream would be relayed to one of several "patriot surveillance stations." There, a "freedom officer" would review the stream and decide on an appropriate action, such as contacting the wearer through a speaker built into the badge to demand an explanation or request a better camera angle. The freedom officer could dispatch a rapid-response unit, or perhaps a police drone, to investigate. If the wearer refused to abandon the prohibited activity after several warnings, the authorities could order their arrest. Citizens would not be allowed to remove the badge, except in places equipped with adequate external sensors.

In principle, such a system would include sophisticated privacy protections and would purge identity-revealing data, such as faces and names, except where needed for an investigation. Artificial intelligence and human oversight would closely monitor the freedom officers to prevent them from abusing their authority. Building a panopticon of this kind would require substantial investment. But given the falling price of the technologies involved, it could soon become technically feasible.
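Purely to make the described data flow concrete, here is a minimal sketch (ours, not part of Bostrom's text; every name and threshold is hypothetical, and the screening and redaction stages are stubs) of the badge-to-station escalation pipeline:

```python
# Minimal sketch of the badge -> AI screening -> human review pipeline
# described above. All names are hypothetical; the classifier and the
# redaction step are stubs standing in for real ML and scrubbing systems.

from dataclasses import dataclass

@dataclass
class BadgeFrame:
    badge_id: str
    audio: bytes
    video: bytes

def classify_suspicion(frame: BadgeFrame) -> float:
    """Stub for real-time automated screening; returns a score in [0, 1]."""
    return 0.0  # a real system would run audio/video models here

def redact_identity(frame: BadgeFrame) -> BadgeFrame:
    """Stub for purging faces and names before any human sees the stream."""
    return frame  # placeholder for face-blurring, name scrubbing, etc.

def process(frame: BadgeFrame, threshold: float = 0.9) -> str:
    score = classify_suspicion(frame)
    if score < threshold:
        return "discard"                # no human ever sees the stream
    escalated = redact_identity(frame)  # privacy-protection stage
    return f"relay {escalated.badge_id} to a patriot surveillance station"

print(process(BadgeFrame("badge-0001", b"", b"")))  # -> "discard"
```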

Politically, it might be harder to get this level of surveillance accepted. But resistance to such measures could well erode once several major cities had been annihilated by a fearsome technology. Such events would likely produce strong support for a policy that, to prevent another attack, accepted massive intrusions into privacy and violations of civil rights, such as the incarceration of 100 innocent people for every genuine plotter. If, however, civilization's vulnerability is not preceded or accompanied by catastrophic events, the political will for such robust preventive action may never materialize.

Let us return to the "safe first strike" scenario. Here state actors face a collective action problem, and failure to solve it means civilization is devastated by default. With a new black ball, this collective action problem will almost certainly pose extreme and unprecedented challenges. States have frequently failed to neutralize the threat of conflict, as the innumerable wars that dot human history attest. By default, then, civilization is devastated. With effective global governance, however, the solution is almost trivial: simply prohibit all states from using black-ball technology destructively. (By effective global governance, we mean a world order with a single decision-making entity. This is an abstract condition that various arrangements could satisfy: a world government; a sufficiently powerful leader; a very solid system of interstate cooperation. Each arrangement comes with its own difficulties, and we take no position here on which is best.)

Some technological black balls could be dealt with by preventive policing alone, others by global governance alone; still others would require both. Take the example of a biotechnological black ball powerful enough that a single malicious use could cause a pandemic killing billions of people: an "easy nuclear bomb" type of situation. In this scenario it would be unacceptable for even a single state to fail to put in place the machinery needed to keep its citizens under continuous surveillance and prevent malicious use with near-perfect reliability. A state that refused to implement the required protective measures would be considered a delinquent by the international community, a "failed state." A similar situation would arise in scenarios like "maximum global warming," where some states might be tempted to free-ride on the costly efforts of others. An effective institution of global governance would then be needed to compel every state to do its part.

None of this sounds very appealing. A system of total surveillance, or an institution of global governance capable of imposing its will on every nation, could have very serious consequences. Improved means of social control would help protect despotic regimes from rebellion, and surveillance would allow a hegemonic ideology or an intolerant majority opinion to impose itself on every aspect of life. Global governance, for its part, could reduce beneficial forms of interstate competition and diversity, creating a world order that succeeds or fails as a whole; being so remote from individuals, such an institution would also be perceived as lacking legitimacy, and it would be more susceptible to bureaucratic sclerosis or to political drift away from the common interest.

However unpalatable many of us may find it, stronger global surveillance and governance would, in addition to stabilizing civilization's vulnerabilities, have various positive consequences. More effective methods of social control would reduce crime and lessen the need for harsh criminal sanctions. They would foster a climate of trust allowing beneficial new forms of social interaction to flourish. Global governance would prevent all kinds of interstate wars and solve many environmental and other commons problems. Over time, it might perhaps foster a broader sense of cosmopolitan solidarity [and they all lived happily ever after in Brave New World, under the benevolent eye of Big Brother, TN]. Clearly there are strong arguments both for and against moving in either of these directions, and it is not our role here to settle the question.

What about the question of timing? Even taking seriously the hypothesis that the urn contains a technological black ball, we may not need to establish total surveillance or global governance right now. Perhaps these steps could be taken later, if and when the hypothetical threat clearly materializes.

However, we should question the feasibility of such a wait-and-see strategy. As we have seen, throughout the Cold War the two superpowers lived in constant fear of a mutual nuclear annihilation that accident or the escalation of repeated crises could have triggered at any moment. That risk could have been greatly reduced simply by getting rid of all, or most, nuclear weapons. Yet after more than half a century, disarmament remains limited. So far, the world has shown itself incapable of resolving even the most obvious interstate conflicts. This does not inspire confidence in the idea that humanity would rapidly develop an effective mechanism of global governance, even if the need became pressing.

Even if one is optimistic about the possibility of reaching an agreement, problems of international cooperation can resist solution for a long time. It would take time to explain why such an arrangement is necessary, to negotiate an agreement and to work out the details, and then to put it in place. Yet the interval between the moment a risk becomes clearly visible and the moment stabilization measures must be operational would probably be short. It may therefore be unwise to count on spontaneous international cooperation to save the situation once a serious vulnerability appears.

On the preventive policing side, the situation is similar in some respects. A highly sophisticated global panopticon cannot be created overnight. It would take many years to set up such a system, not counting the time needed to win political support. Yet the vulnerabilities to which we are exposed may give few early warning signs. Next week, a group of academic researchers might publish an article in the journal Science detailing an innovative new technique in synthetic biology. Two days later, a popular blogger might post a piece explaining how the new tool could be used by anyone to cause mass destruction. In such a scenario, massive social control would have to be imposed almost immediately. It is too late to start building a surveillance architecture once the threat has already materialized.

Perhaps we could develop intrusive surveillance and real-time interception capabilities in advance, but not use these capabilities to their fullest right now. By giving civilization the ability to exercise highly effective preventive policing, we would at least have come a step closer to stability. But developing a system that offers the possibility of “turnkey totalitarianism” means taking a risk, even if no one turns the key. This risk can be mitigated by aiming for a system of “structured transparency” that incorporates protections against abuse. The system would only work with the authorization of several independent stakeholders and would only provide the specific information that a decision maker legitimately needs. There may be no fundamental impediment to building a surveillance system that is both highly effective and resistant to subversion. The probability of doing so in practice is, of course, another question.
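The "structured transparency" idea, several independent stakeholders jointly authorizing narrowly scoped access, can be sketched as a simple quorum gate (our illustration; a real design would rely on threshold cryptography rather than a software check):

```python
# Toy k-of-n authorization gate illustrating "structured transparency"
# (our sketch; stakeholder names are invented). A real system would use
# threshold cryptography so no single party could unlock the data alone.

def authorize_query(approvals: set, stakeholders: set, quorum: int) -> bool:
    """Grant access only if at least `quorum` distinct stakeholders approve."""
    return len(approvals & stakeholders) >= quorum

stakeholders = {"court", "legislature", "oversight_board", "ombudsman"}

request = {"court", "oversight_board"}                      # 2 of 4 approve
print(authorize_query(request, stakeholders, quorum=3))     # False: denied
print(authorize_query(request | {"ombudsman"},
                      stakeholders, quorum=3))              # True: granted
```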

Given the complexity of these potential global solutions to the risk of a technological black ball, it would be wise for leaders and policymakers to focus initially on partial solutions that are easy to implement, making corrections in the particular fields where major risks seem most likely to emerge, such as biotechnological research. Governments could strengthen the Biological Weapons Convention by increasing its funding and granting it verification powers. Authorities could step up their oversight of biotechnological activities by developing better ways of monitoring scientists and of tracking potentially dangerous materials and equipment. To prevent DIY genetic engineering, for example, governments could impose licensing and restrict access to certain advanced instruments and information. Rather than allowing anyone to buy their own DNA synthesis machine, such equipment could be limited to a small number of closely monitored suppliers. Authorities could also improve whistleblowing systems to encourage the reporting of potential abuses, and recommend that organizations funding biological research take a broader view of the potential consequences of such work.
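To illustrate the kind of supplier-side control described above (our sketch; the sequences and the exact-match rule are invented, whereas real screening relies on curated pathogen databases and homology search), an order-screening check might look like this:

```python
# Hypothetical sketch of a DNA synthesis supplier screening incoming
# orders against a restricted-sequence list. The sequences and the
# exact-substring rule are invented for illustration; real screening
# uses curated pathogen databases and homology search.

RESTRICTED_SUBSEQUENCES = {
    "ATGCCGTAAGGC",   # invented stand-in for a controlled sequence
    "GGTACCTTAGCA",
}

def screen_order(sequence: str) -> bool:
    """Return True if the order is clear, False if it must be flagged."""
    seq = sequence.upper()
    return not any(marker in seq for marker in RESTRICTED_SUBSEQUENCES)

order = "ttaggATGCCGTAAGGCtttac"
print("clear" if screen_order(order) else "flag for human review")  # flag
```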

In pursuing such limited goals, however, it should be kept in mind that the protection they offer would be only temporary and would cover only a fraction of the scenarios described above. Anyone in a position to influence the macroparameters of preventive policing or global governance should consider that fundamental change in these areas may be the only way to stabilize our civilization against emerging technological vulnerabilities.

Nick Bostrom

Commentary and translation: ATR

