
AI could eradicate life. It must be stopped.

By S.W, 21 March 2024

Translation of an article by Eliezer Yudkowsky, artificial intelligence specialist, published in Time magazine on March 29, 2023.

Image: Agent Smith in the movie The Matrix (1999). Reality is catching up with fiction.

Eliezer Yudkowsky is an American decision theorist who leads research at the Machine Intelligence Research Institute. He has been working on the alignment of artificial general intelligence (AGI) since 2001 and is widely recognized as a pioneer in the field.

An open letter published today calls on “all AI laboratories to pause the training of AI systems more powerful than GPT-4 for at least six months.”

A six-month moratorium is better than no moratorium at all. I have respect for everyone who stepped up and signed it. It is a step forward.

However, I chose not to sign this letter, because I think it understates the seriousness of the problem and asks for too little to solve it.

The central problem is not the creation of intelligence “comparable to human intelligence” (as the open letter puts it); it is what happens once AI becomes smarter than human beings. The decisive thresholds may not be obvious: we obviously cannot determine in advance exactly what will happen or when, but at that stage a laboratory could well cross critical lines without realizing it.

Many researchers immersed in these questions, including myself, believe that, under current circumstances, the most likely consequence of the advent of a superhuman AI is quite simply the death of everyone on Earth. Not as in “maybe there is some remote chance”, but as in “that is the obvious thing that would happen.” In principle, it would be possible to survive the creation of something much smarter than ourselves, but it would demand rigorous preparation and new scientific insights, bearing in mind that, as things stand, AI systems are made of arrays of fractional numbers as gigantic as they are impenetrable.

Without that rigorous preparatory work, the most likely outcome is an AI that does not do what we want and has no regard for us or for sentient life in general. An AI could in principle be imbued with such regard, but we are not ready and do not yet know how to do it. And without that ability, what you get is an AI that neither hates nor loves us, but sees us as a collection of atoms it could use for something else.

If humanity must one day confront a superhuman intelligence, it will suffer a crushing defeat. It would be like a ten-year-old trying to beat the Stockfish 15 chess engine, like the armies of the 11th century fighting those of the 21st, or like a fight between Australopithecus and Homo sapiens.

To picture a hostile superhuman AI, do not imagine a well-read, powerful brain sending malicious emails from inside the Internet. Imagine instead an entire alien civilization, able to think millions of times faster than humans and initially confined to computers, in a world where living beings would, from its point of view, be very stupid and very slow. A sufficiently intelligent AI, however, will not stay confined to computers for long. Today it is already possible to email DNA sequences to laboratories that produce proteins on demand, which would allow an AI initially limited to the Internet to create artificial life forms or to move directly to post-biological molecular manufacturing.

What I believe is that creating an overwhelmingly powerful AI under current circumstances will quickly and inevitably lead to the end of all life on Earth.

There is no roadmap that would allow us both to do it and to survive it. OpenAI has openly declared its intention to build an AI that will do the work of aligning AI for us. If that is their plan, that alone should be enough to make any sensible person panic. As for the other leading AI lab, DeepMind, it has no plan at all.

Moreover, this risk does not depend on whether the AI is conscious: it is intrinsic to any powerful cognitive system tasked with optimizing resources and anticipating outcomes according to complex criteria. That said, I would be failing in my moral duty as a human if I did not mention that we do not know how to determine whether AI systems are conscious, since we have not the faintest idea how to decode what goes on inside these enormous black boxes. We could therefore be inadvertently creating genuinely conscious digital minds, which ought to have rights and not be exploited.

The rule that most people aware of these issues would have endorsed 50 years ago is that if an AI system speaks fluently, claims to be self-aware, and demands human rights, that should be enough to make those who own such AIs stop using them immediately. We have already blown far past that line. It was nonetheless probably a sound one: I agree that, for the moment, AIs are most likely just reproducing the talk about consciousness found in their training data. But I insist: given how little we know about the inner workings of these systems, in reality we know nothing about it.

If this is our state of ignorance about GPT-4, and GPT-5 is as large a capability step as the one from GPT-3 to GPT-4, then I think we will no longer be able to justifiably say “probably not self-aware” if we let people build GPT-5. It will simply be “I don't know; nobody knows.” If you cannot be sure whether you are creating a self-aware AI, that is alarming, not only because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing, that it is dangerous, and that you should stop.

On February 7, Microsoft CEO Satya Nadella publicly boasted that the new Bing search engine would make Google dance. “I want people to know that we made them dance,” he said.

If we lived in a sane world, the CEO of Microsoft would not say such things. It shows the enormous gap between how seriously we are taking the problem and how seriously we should have been taking it for the past 30 years.

We will not close this gap in six months.

From the first formulation of the concept and the first research, it took more than 60 years to reach the capabilities of today's artificial intelligence. Solving the safety of superhuman intelligence (not perfect safety, but safety in the sense that it does not literally kill everyone) could reasonably take at least thirty years. And the problem with attempting this with superhuman intelligence is that failing on the first try will not let us learn from our mistakes, because we will all be dead. Humanity will not rise from the ashes as it has after other challenges in its history, because we will all be gone.

In science or engineering, succeeding at anything on the first attempt is a feat in itself, and we are a long way from having the right method to achieve it. If we approached the emerging discipline of artificial general intelligence (AGI) with even a fraction of the rigor required to design a bridge capable of carrying one or two thousand cars, we would stop all research in this area immediately.

We are not ready. We will not be ready within any reasonable timeframe. There is no plan. AI capabilities are advancing much, much faster than our ability to control these systems or even to understand what is going on inside them. If we continue like this, we will all die.

Many researchers working on these systems believe that we are hurtling toward disaster, and most of them dare to say so only in private rather than in public. But they do not think they can stop the fall on their own, reasoning that if they resigned, others would simply take their place. So they might as well continue. It is an absurd state of affairs and a very sad way for Earth to end. It is time for the rest of humanity to step in and help this industry solve the problem it created itself.

Some of my friends recently told me that people who do not work in this field, hearing about the risk of extinction from artificial general intelligence for the first time, say: “Then maybe we shouldn't build AGI.”

Hearing this gave me a glimmer of hope, because it is a more instinctive, more sensible, and frankly saner reaction than anything I have heard over the past 20 years whenever I have tried to get anyone in this industry to take the issue seriously. People this sane deserve to know how serious the situation is, not to be told that a six-month moratorium will solve the problem.

On March 16, my partner sent me this email (and then gave me permission to share it here):

“Nina lost a tooth! Like all children her age, and not by accident! Seeing GPT-4 pulverize those standardized tests on the same day that Nina reached an important childhood milestone triggered a surge of emotion that knocked me off my feet for a minute. It's all going too fast. I'm afraid that sharing this will add to your sorrow, but I would rather you know than that we each suffer alone.”

When a private conversation turns on the grief of watching your daughter lose her first tooth while thinking she will never get the chance to grow up, I believe we can no longer afford to play petty politics over whether or not to adopt a six-month moratorium.

If we had a plan to save the planet once such a moratorium was adopted, I would support that plan. But we have none.

Here's what should be done:

The moratorium on new large AI training runs must be adopted for an indefinite period and observed by every country in the world. There can be no exceptions, including for governments and armies. If the policy begins in the United States, China will need to see that the United States is not seeking to gain an advantage, but to prevent the advent of a terrifyingly dangerous technology that no one can control and that would kill the entire population, in the United States, in China, and on Earth. If I had complete freedom to write laws, I might carve out an exception for AIs trained solely to solve problems in biology and biotechnology, not trained on text from the Internet, and not to a level where they begin to speak or plan. But if that were to make the situation any worse, I would immediately abandon the exception and call for everything to be shut down for good.

Shut down all the large GPU clusters (the big datacenters where the most powerful AIs are refined). Shut down all the large AI training runs. Put a ceiling on the computing power allowed for training an AI system, and lower it over the years to compensate for more efficient training algorithms. No exceptions for governments and armies. Conclude international agreements immediately to prevent prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence services discover that a country outside the agreement is building a GPU cluster, be less afraid of an armed conflict between nations than of a violation of the moratorium: do not hesitate to destroy the rogue datacenter by airstrike.

Make it clear that this is not a conflict between national interests and that calling for an arms race would be madness. We will all live or die together: that is a fact, and it has nothing to do with politics. In diplomacy, make explicit that preventing AI-induced extinction scenarios takes priority over avoiding nuclear conflict: all countries bound by nuclear agreements must be prepared to run the risk of nuclear conflict if that is what it takes to prevent large AI training campaigns.

This is the kind of public policy change that would let my partner and me hold each other and say that a miracle has happened and that there is now a chance Nina will get to live. The sensible people who discover the problem and rationally say to themselves “we should avoid doing this” deserve to be told honestly what it would take for their wish to come true. And when the demand made of legislators is this large, the only way it will be met is if they realize that, if they conduct business as usual and take the easy road, their children will die too.

Stop everything.

We are not ready. We are not on track to be significantly more ready in the foreseeable future. If we continue down this path, everyone will die, including children who did not choose this path and did nothing wrong.

Stop everything.

Eliezer Yudkowsky
