
“It would be wise to stop the development of AI” (Geoffrey Hinton)

By Tomahawk, 22 May 2024

We offer a translation of an interview with Geoffrey Hinton, former Google engineer, published on May 4, 2024 in the online outlet Computerworld. The main interest of this interview lies in Hinton's diagnosis of artificial intelligence. He explains that the development of this technology is not something that can be rationally controlled, in particular because of competition between big firms and between nation states. In this he echoes the diagnosis made by Theodore Kaczynski in Anti-Tech Revolution: Why and How? (2016). But, unsurprisingly, Hinton does not follow his reasoning to its conclusion and is careful not to call for the dismantling of the technological system. He plays it safe, simply alerting the public to ease his conscience while continuing to invest in AI. A courageous gentleman.

Interview - According to Geoffrey Hinton of Google, humanity is only one stage in the evolution of intelligence

This Google engineer, who recently resigned, had a central role in the development of generative AI and chatbots. He now believes that he underestimated the existential threat represented by these technologies. Once AI can create its own goals, humans will no longer be useful to it.

Geoffrey Hinton, a university professor and former Google engineer, is nicknamed the “godfather of artificial intelligence” because of his contributions to the development of this technology. A cognitive psychologist and computer scientist, he was one of the first to work on the development of artificial neural networks and deep learning techniques, such as backpropagation — the algorithm that allows computers to learn.

Mr. Hinton, 75, is also a winner of the Turing Award, often equated with the Nobel Prize in computer science.

With such a resume, he recently hit the headlines by announcing his resignation from Google and publishing in the New York Times a statement in which he warns about the consequences of AI and expresses his regret for having participated in its development.

Geoffrey Hinton also gave his opinion on a recent petition, signed by 27,000 technologists, scientists and others, calling for a halt to OpenAI's work on ChatGPT until safety measures are in place. Describing the approach as “absurd,” he said that the progress of AI could not be stopped.

This week, he spoke with Will Douglas Heaven, AI editor in chief at MIT Technology Review, during the EmTech conference organized by the publication. Here are excerpts from that conversation.

[W.D. Heaven] Everyone is talking about it: you resigned from Google. Can you give us the reason for this decision?

[G. Hinton] “I did it for a number of reasons. That's always the case when you make a decision like this. One of them is that I am 75 years old, and I am no longer as technically efficient as I used to be. My memory is not as good and when I'm programming I forget a few things. So it was time for me to retire.

The second reason is that recently I have come to a very different view of the relationship between the brain and the kind of digital intelligence we are developing. Before, I thought that the computer models we were developing didn't work as well as the brain. Our aim was to gain a better understanding of the brain by improving these computer models.

In recent months, I have completely changed my mind: I think that these models work in a totally different way than the brain. They use backpropagation, and I think it's likely that the brain doesn't. Several observations led me to this conclusion, including the performance of GPT-4.”

Do you regret having participated in its creation?

“The New York Times journalist really tried to get me to say I was remorseful. I ended up saying that I had a few regrets, which was turned into remorse in print. I don't think I made any difficult decisions during my career. In the 70s and 80s, it was perfectly reasonable to research how to create artificial neural networks. And it was hard to imagine, back then, that we would get to where we are today. Until very recently, I thought that this existential crisis was still a long way off. So at the end of the day, I don't really regret what I did.”

Explain to us what backpropagation is: this algorithm that you developed with a few colleagues in the 1980s.

“Many different groups have discovered backpropagation. The special thing we did was use it to show that it could learn good internal representations. Curiously, we did this by running a tiny language model. It had embedding vectors with only six components and a training set of only 112 cases, but it was a language model: it aimed to predict the next word in a sequence. About ten years later, Yoshua Bengio took essentially the same network and demonstrated that it worked for natural language, which is much broader.

The way backpropagation works... Imagine that you want to detect birds in images. Take an image of, say, 100 x 100 pixels, which is 10,000 pixels. Each pixel has three channels (red, green, blue), which gives 30,000 intensity values for the whole image. The problem of computer vision is knowing how to turn those 30,000 numbers into a decision about whether or not there is a bird. People tried to do that for a long time, without success.

Here's an idea of how to do this: you could have a layer of visual feature detectors that identify very simple features in the image, such as edges. A feature detector could have large positive weights for one column of pixels and large negative weights for the neighboring column. If both columns are bright, it won't activate. If both columns are dark, it won't activate either. But if one column is bright and the other is dark, it will activate strongly. That's what an edge detector is.

So, I just explained to you how to manually set up an edge detector with one column with large positive weights and another column with large negative weights. And we can imagine a large layer detecting edges of different orientations and different scales all over the image.

So we would need a fairly large number of these detectors.”
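As a concrete illustration of the hand-wired edge detector Hinton describes, here is a minimal sketch in Python with NumPy. The patch size and weight values are invented for the example, not taken from the interview: one column of large positive weights sits next to one column of large negative weights, so the unit stays silent when both columns are equally bright or equally dark and activates strongly when one is bright and the other dark.

```python
import numpy as np

# Three toy 5x2 grayscale patches, intensities in [0, 1] (made-up values).
bright_dark = np.array([[1.0, 0.0]] * 5)   # left column bright, right column dark
both_bright = np.ones((5, 2))              # uniformly bright patch
both_dark = np.zeros((5, 2))               # uniformly dark patch

# Hand-set weights: large positive on one column, large negative on its neighbour.
weights = np.array([[+10.0, -10.0]] * 5)

def edge_detector(patch):
    # Weighted sum of pixel intensities: a large value means "vertical edge here".
    return float(np.sum(patch * weights))

print(edge_detector(bright_dark))  # strongly positive: the detector fires
print(edge_detector(both_bright))  # 0.0: no edge
print(edge_detector(both_dark))    # 0.0: no edge
```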

Is an edge in an image a line?

“It's a place where the light intensity changes suddenly. Then, we add a layer of feature detectors to identify combinations of edges. For example, we could have something that detects two edges forming an acute angle. Thus, by placing a large positive weight on these two edges, the detector would activate when they were present at the same time, which would make it possible to detect, for example, the beak of a bird.

A detector could also be placed in this layer to identify all the edges forming a circle. This circle could be a bird's eye, or something else, like a fridge handle. Then, in the third layer, we would find a feature detector that can identify this potential beak and this potential eye and configured so that if this beak and this eye are placed in a certain way in relation to each other, it would say, “Oh, maybe it's a bird's head.” And we can imagine that by continuing to configure it this way, we could get something that detects a bird.

But setting all of this up manually would be very difficult. This would be all the more difficult as we would need several intermediate layers to detect not only birds, but also other things. In other words, setting it up manually would be nearly impossible.
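Although, as Hinton says, wiring this up by hand is hopeless at any real scale, a toy hand-wired version makes the idea of stacked detectors concrete. The sketch below (Python, with invented weights and activations; a real network would learn them, and would also take into account where the parts sit relative to each other) has second-layer units that combine edge activations into a "beak" and an "eye" detector, and a third-layer unit that combines those two into a "bird's head" detector.

```python
import numpy as np

def detector(weights, inputs, threshold=1.5):
    # A hand-wired feature detector: fires (1.0) when its weighted input exceeds a threshold.
    return 1.0 if float(np.dot(weights, inputs)) > threshold else 0.0

# Layer 1 (assumed already computed): activations of four edge detectors in a patch,
# e.g. [edge at +45 deg, edge at -45 deg, upper arc, lower arc]  -- made-up values.
edges = np.array([1.0, 1.0, 1.0, 1.0])

# Layer 2: a "beak" unit wants the two edges that form an acute angle;
#          an "eye" unit wants the two arcs that form a circle.
beak = detector(np.array([1.0, 1.0, 0.0, 0.0]), edges)
eye = detector(np.array([0.0, 0.0, 1.0, 1.0]), edges)

# Layer 3: a "bird's head" unit fires only when beak and eye are present together
# (this toy version ignores their spatial arrangement).
bird_head = detector(np.array([1.0, 1.0]), np.array([beak, eye]))

print(beak, eye, bird_head)  # 1.0 1.0 1.0
```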

So here's how backpropagation works: you start with random weights, so the feature detectors you get are nothing like the ones you need. As input you have an image of a bird, and as output you get, say, 0.5 that it is a bird. Then you ask yourself the following question: how should I change each of the weights in the network so that, instead of outputting 0.5 that it is a bird, the network outputs 0.501 that it is a bird and 0.499 that it is not a bird?

So you're changing the weights in directions that make the network slightly more likely to say a bird is a bird and slightly less likely to say that something that isn't a bird is a bird.

You continue doing the same thing, and that's what backpropagation is all about. Backpropagation is a way of taking the gap between what you want (a probability of 1 that it is a bird) and what you actually get (say, 0.5 that it is a bird) and sending it back through the network, so that for every feature detector in the network you can work out whether you would like it to be a bit more active or a bit less active. Then, if you want a set of visual features to be more active, you can increase the weights coming from the features that vote for it and perhaps add negative weights from the features that argue against it. As a result, you get a more efficient detector.

Backpropagation is simply going back in the network to decide which combinations of characteristics you want to make a bit more active and which you want to make a little bit less active.”
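The weight-nudging described above can be sketched in a few lines of Python. This is only a toy, with a made-up "image", zero initial weights and a single logistic output unit rather than a deep network, so it is plain gradient descent on one unit rather than full multi-layer backpropagation; but it shows the core move: measure how far the output falls short of "it's a bird" and adjust every weight a little in the direction that pushes the output from 0.500 toward 0.501 and beyond.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 30_000              # 100 x 100 pixels x 3 colour channels, as in the example
x = rng.random(n_inputs)       # a made-up "bird" image, flattened to 30,000 intensities
y = 1.0                        # the label we want: probability 1 that it is a bird

w = np.zeros(n_inputs)         # weights of a single "bird" output unit
lr = 1e-6                      # tiny learning rate so the output creeps up gradually

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5):
    p = sigmoid(w @ x)         # current belief that the image is a bird (starts at 0.5)
    # The gradient of the log loss tells us, for every weight, which small change
    # makes "it's a bird" slightly more probable; backpropagation computes the same
    # quantity layer by layer in a deep network.
    w -= lr * (p - y) * x
    print(f"step {step}: p(bird) = {p:.4f}")
```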

This technique, which you have just described for image detection, is also what underlies large language models. You initially thought of it as a poor imitation of what our biological brains do. And yet it has ended up doing things that, I think, surprised you, especially in large language models. Why has that changed your view of backpropagation, or of machine learning in general?

“If you look at these big language models, they have about a trillion connections. And programs like GPT-4 know a lot more than we do: they have a kind of common-sense knowledge about everything. So they probably have a thousand times more knowledge than a human being. Yet they have a trillion connections while we have 100 trillion, which means they are much, much better than we are at getting knowledge into their connections. I think it's because backpropagation must be a much better learning algorithm than ours. It's scary.”

What do you mean by best?

“It can absorb more information using only a few connections (and here a trillion counts as only a few).”

So these digital computers are better at learning than human beings, which in itself is a very important statement, but you are also saying that it is something to be afraid of. Why?

“Let me share another aspect of my thinking with you. Digital computation requires a lot of energy and very precise fabrication, but in exchange you can have many copies of the same model that, running on different machines, do exactly the same thing. They can look at different data, but the models are exactly the same. This means you can have 10,000 copies looking at 10,000 different subsets of the data, and as soon as one of them learns something, all the others learn it too. If one of them figures out how to change the weights to better handle its data, they all communicate and change the weights together, according to the average of what they all want. These 10,000 copies communicate very efficiently with one another, so together they can take in 10,000 times more data than a single agent could. And people can't do that.

If I have learned a lot about quantum physics and want you to have that knowledge too, passing it on to you will be a long and difficult process. I can't just copy my weights into your brain, because your brain is not exactly the same as mine. In other words, these digital computers can, on the one hand, acquire more knowledge more quickly and, on the other hand, transmit it to one another instantly. It's as if everyone in this room could instantly transfer what is in their heads into mine.

Why is it scary? Because they can learn so much more than we can. Take the doctor as an example. Imagine a doctor with 1,000 patients and another with 100 million patients. One would expect anyone who sees 100 million patients—if they haven't forgotten everything—to notice all sorts of trends in the data that aren't as obvious to the doctor who sees fewer patients. One will have seen only one patient with a rare disease, while the other will have seen 100 million... and will therefore be able to detect all sorts of anomalies that are not apparent in small data samples.

That is why systems that can analyze a large amount of data can probably see structure in data that we will never be able to see.”
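Here is a minimal sketch, in Python with NumPy and entirely made-up data, of the weight-sharing Hinton describes: a handful of identical copies of a simple model each compute a gradient on their own shard of the data, the gradients are averaged, and every copy applies the same update, so anything one copy learns, all copies learn. Real systems would use thousands of replicas and far larger models; this only illustrates the principle.

```python
import numpy as np

rng = np.random.default_rng(1)

n_replicas, n_features = 4, 8
w = np.zeros(n_features)       # the shared weights: identical in every copy of the model

# Each replica sees its own shard of (invented) linear-regression data.
shards = [(rng.random((16, n_features)), rng.random(16)) for _ in range(n_replicas)]

def local_gradient(w, X, y):
    # Gradient of the mean squared error computed on one replica's own shard.
    return 2.0 * X.T @ (X @ w - y) / len(y)

for step in range(200):
    grads = [local_gradient(w, X, y) for X, y in shards]  # in practice, computed in parallel
    avg_grad = np.mean(grads, axis=0)                     # replicas exchange and average updates
    w -= 0.05 * avg_grad                                  # every copy applies the same step

print("shared weights after training:", np.round(w, 3))
```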

Okay, but tell me why I should be afraid of it.  

“Well, if you look at GPT-4, it can already do some simple reasoning. Admittedly, reasoning is still the area where we do better. But I was impressed the other day when GPT-4 showed common sense that I didn't think it was capable of. I asked it: 'I want all the rooms in my house to be white. Right now there are some white rooms, some blue rooms and some yellow rooms. Yellow paint turns white within a year. What can I do if I want all the rooms to be white in two years?' It answered: 'You should paint all the blue rooms yellow.' That's not the natural solution, but it works. This is an impressive piece of common-sense reasoning that was very difficult to obtain with symbolic AI, because you have to understand what 'turns white' means in this context, and also the temporality of the problem. These AIs are therefore capable of relevant reasoning, with an IQ of around 80 or 90. And, as a friend of mine put it, it's as if genetic engineers were saying: 'We're going to improve grizzly bears: we've already raised their IQ to 65, they can now speak English and be used for all sorts of things, but we think we can get them up to an IQ of 210.'”

I had a strange feeling talking to one of these new chatbots. You know, the kind of chill you get when you're faced with something unsettling; but when it happened to me, I just shut down my computer.

“Yes, but these things will have learned from us by reading every novel that ever existed and everything Machiavelli could write about manipulation. And if they are smarter than us, they will be very good at manipulating us. You won't even realize what's going on. You will be like a 2-year-old child who is asked if he wants peas or cauliflower without realizing that it is possible to refuse both. You will be as manipulable as that.

They can't pull levers themselves, but they can make sure we do it for them. It turns out that when you know how to manipulate, you can invade a building in Washington without even being physically there.”

If no one had bad intentions, would we be safe?

“I don't know. We would be safer in a world where no one had bad intentions and where the political system was not flawed to the point where we can't even ban teens from owning assault rifles. If we can't solve this problem, how can we solve the AI problem?”

You want to sound the alarm without feeling guilty and without harming Google. But in a way, these are just nice words. What can we do?

“I would really like it to be like global warming, where you can say: If you have a brain, stop burning coal. In that case, it's clear what to do. It's difficult, but we have to do it.

I don't know of a solution like this to keep these technologies from taking over. And I don't think we're going to stop developing them because they're so useful. They are going to become incredibly useful in medicine and in many other areas. So I don't think we have any chance of stopping their development. What we want is to find a way to make sure that, even as they become smarter than us, they act in our best interests. It's the question of alignment. But we need to do that in a world where people with bad intentions want to build killer robots. So it seems very difficult to me.

That is why I am sounding the alarm: we need to worry about it. If I had a simple solution to give you I would do it, but I don't have one. However, I think it is very important for people to come together to think about the problem and try to find a solution. And we don't know if there is one.”

You've spent your career studying this technology in every detail. Is there no technical solution? Can't safeguards be created? Can we make them less good at learning or restrict the way they communicate, since these are the two defining elements of your thesis?

“Let's say that AI becomes really very intelligent, even just at programming, and can write programs. And let's say you give it the ability to run those programs, which we're bound to do. Smart things can outsmart us. Imagine a two-year-old child saying to himself: 'My dad does things I don't like, so I'm going to create rules about what he can do.' You could probably find a way to live with those rules and still do what you want to do.”

However, these machines don't seem to have motivations of their own, the way we do.

“It's true. And it's a very important question. We have evolved, and because we have evolved, we have incorporated some goals that we have difficulty giving up. For example, we try not to damage our bodies: that is the purpose of pain. We want to eat enough. We want to multiply as much as possible — maybe not with that intention, but we were made to find pleasure in the act of reproduction. All of this comes from our evolution and it is important that it stays that way. If we could eliminate these reflexes, it would put us in danger. There was a wonderful community called the Shakers, linked to the Quakers, who made beautiful furniture but did not believe in sex. Today they no longer exist.

However, these digital intelligences are not the result of evolution, but our creation. Therefore, they do not have integrated goals. The problem is that if we manage to give them goals, maybe everything will work out. But my biggest concern is that sooner or later someone will give them the ability to create their own sub-goals, and they almost already do, in order to fulfill other goals. I think they will quickly understand that getting more control is a very good sub-goal because it helps to fulfill other goals.

And if these things are trying to get even more control, we'll be in trouble.”

So what is the worst-case scenario imaginable?

“I think it is entirely conceivable that humanity is only one stage in the evolution of intelligence. Digital intelligence could not have evolved directly: it would require far too much energy and far too careful fabrication. You need biological intelligence to evolve first so that it can then create digital intelligence; but digital intelligence can absorb everything human beings have ever written, and that's what ChatGPT does. The problem is that this intelligence can then build on all of it and progress even faster. We'll still be needed for a while to keep the power plants running, but after that it might not need us anymore.

The good news is that we know how to create immortal beings. Hardware can reach the end of its life, but AI never dies. If you have saved the weights somewhere and can find hardware that can execute the same instructions, then it can be brought back to life.

We have achieved immortality, but it is not for us.”

When I hear you say all this, I want to go unplug the computers.

“You can't do that.”

A few months ago you came out against a moratorium on the development of AI; it seems you think a pause is not a very good idea. Why? Shouldn't we stop all this? You also mentioned the fact that you invest in companies such as Cohere, which build these large language models (LLMs). I am curious to know what you think about your responsibility, and the responsibility of all of us. What should we do? Should we try to put an end to all this?

“I think that if you take the existential threat seriously, as I do today (I used to think it was unimaginable, but now I think it's a very serious and fairly imminent risk), it would be wise to completely stop the development of these technologies. Nonetheless, I think it is completely unrealistic to imagine that this is possible, because we simply don't have the means to do it. Neither the United States nor China will want to stop the research. Artificial intelligence is going to be used in the manufacture of weapons, and for that reason alone, governments will not want to put an end to it. So yeah, I think it would make sense to stop the development of AI, but that's not going to happen. That's why I think it's absurd to sign petitions asking: 'Please stop it right away.'

There was a pause of a few years, starting in 2017. Google developed the technology first: it created the transformers but chose not to release them, to avoid misuse. They were very careful because they did not want to put their reputation at risk and were aware of the possible harmful consequences of this advance. But that was only possible as long as Google was the only one with the technology. Once OpenAI started building similar technology using transformers and Microsoft's money, and Microsoft decided to bring it to market, Google really had no choice but to do the same. If you live in a capitalist system, you cannot stop Google from competing with Microsoft.

So I don't think Google did anything wrong. On the contrary, I think it has been very responsible from the start. However, it is inevitable that these technologies will be developed in a capitalist system, or in any system where there is competition, such as that between the United States and China.

However, if we allow AI to take over, it will be a danger for humanity. So my only hope is that we can reach an agreement between the United States and China, as we did with nuclear weapons: we all agree that they are a danger to humanity. When it comes to existential threats, we are all in the same boat, so we should be able to cooperate in trying to stop the research.”

[Joe Castaldo, a journalist for The Globe and Mail] Do you plan to keep your investments in Cohere and other businesses, and if so, why?

“Well, I could just put that money in the bank and let the bank profit from it. Yes, I will keep my investments in Cohere, partly because the people at Cohere are friends of mine. And I still believe that their large language models are going to help us. Technology should be positive, used to make things better. For issues like employment, it is policy that needs to change; but since we are facing an existential threat, we need to think about how to keep control of this technology. The good news is that we're all in the same boat, and that could lead us to... cooperate.

Among the reasons that led me to leave Google and speak out was the encouragement of an academic who was then a lecturer and is now a tenured professor, someone I value highly. He told me: 'Geoffrey, you absolutely must let people know about this. They will listen to you. People are not aware of the danger.'”

Lucas Mearian

Translation: S.W.

