
Why the future doesn't need us

By S.C, 07 November 2023

We have translated a fairly well-known text by Bill Joy (b. 1954), computer engineer and co-founder of Sun Microsystems (since acquired by the software giant Oracle Corporation). In this article, published in Wired magazine in 2000, he worried about advances in nanotechnology and biotechnology that could prove cataclysmic for life on Earth and for humanity.


My involvement in the development of new technologies has been accompanied by ethical concerns from day one. But it was not until the fall of 1998 that I became aware of how grave the dangers facing us in the 21st century are. The first symptoms of this profound unease appeared the day I met Ray Kurzweil, the deservedly famous inventor of many extraordinary things, including the first reading machine for the blind.

He and I were both speakers at George Gilder's "Telecosm" conference. After our respective talks, I ran into him by chance in the hotel bar, where I was chatting with John Searle, a Berkeley philosopher who studies consciousness. Ray joined us, and a conversation began around a theme that has haunted me ever since.

I had missed Ray's talk and the panel discussion that followed, on which he and John both appeared, and they now picked up the debate where they had left off. Ray said that the pace of technological progress was going to keep accelerating and that we were destined to become robots, merge with our machines, or something of the sort. John rejected this idea, arguing that an automaton has no consciousness.

While this kind of talk was relatively familiar to me, sentient robots had always belonged, in my mind, to the realm of science fiction. Now, before my eyes, a person of unquestionable seriousness was affirming with great conviction that such a prospect was imminent. I was taken aback, especially given Ray's proven ability not only to imagine the future but to invent it concretely. That it was becoming possible to remake the world by means of new technologies such as genetic engineering and nanotechnology did not surprise me. But a realistic scenario of "intelligent" robots in the near future left me perplexed.

These kinds of major breakthroughs quickly lose their spice; almost every day, some newsletter reports a further advance in one area of technology or science or another. But this prediction stood out from the crowd. There in the hotel bar, Ray gave me a partial set of proofs of his book The Age of Spiritual Machines, then about to be published, in which he outlined a visionary utopia: one in which, by joining forces with robotic technology, human beings would become nearly immortal creatures. As I read on, my sense of unease grew: not only was Ray surely minimizing the dangers of such a path, he was also understating the magnitude of its potentially devastating effects.

It was then that I came upon a passage detailing a dystopian scenario:

The new Luddite challenge

"First, let us assume that computer scientists succeed in creating intelligent machines that can do everything better than human beings. In that case, it is likely that all work will be done by vast, highly organized networks of machines and that no human effort will be necessary. Two cases might then occur: either the machines will be permitted to make their decisions on their own, or human control over them will be retained.

If the machines are permitted to make all their own decisions, we can make no conjecture about the outcome, for it is impossible to guess how they might behave. We note only that humanity would be at the mercy of the machines. Some will object that the human species would never be foolish enough to hand all power over to the machines; but we are suggesting neither that it would do so voluntarily nor that the machines would seize it deliberately. What we are suggesting is that the human species could easily drift into a state of such dependence on the machines that it would have no practical choice but to accept all their decisions. As society and its problems grow more complex and machines grow more intelligent, people will let machines make more and more of their decisions for them, simply because machine-made decisions will bring better results than human-made ones. Eventually, the decisions needed to keep the system running may become so complex that human beings will be incapable of making them sensibly. At that stage the machines will be in effective control, and they will no longer be able to be turned off, because humans will have become so dependent on them that turning them off would amount to suicide.

On the other hand, human control over the machines may be retained. In that case the average man will keep control of his private machines, such as his car or his computer, but control of the large networks of machines will pass to a small elite, just as today, but with two differences. As techniques are perfected, the elite will have greater control over the masses, and because human work will no longer be necessary to the system, the masses will become a superfluous, useless burden. If the elite proved ruthless, they could simply decide to exterminate the mass of humanity. If they were more charitable, they could, in order to keep the world to themselves, use propaganda or biological and psychological techniques to reduce the birth rate until the majority of humanity died out. Or an elite of soft-hearted liberals could take on the role of good shepherd to the rest of the human species: seeing to everyone's physical needs, raising children according to principles of mental hygiene, keeping everyone occupied with wholesome activities, and subjecting the dissatisfied to "treatment" to cure their "problem." Of course, life would then be so empty of meaning that people would have to be biologically or psychologically engineered to remove their need for the power process, or to "sublimate" their drive for power into some harmless hobby. These engineered humans might be happy in such a society, but they would certainly not be free. They would have been reduced to the status of domestic animals."

The author of this passage is none other than Theodore Kaczynski, alias the Unabomber [§172–174 of Industrial Society and Its Future], though one discovers this only on the next page. Far be it from me to extol his merits. In a seventeen-year terrorist campaign, his bombs killed three people and injured many others. One of them seriously injured my friend David Gelernter, one of the most brilliant computer scientists of our time and a true visionary. Like many of my colleagues, I felt that I could easily be the Unabomber's next target.

In my view, Kaczynski's criminal acts bear the mark of murderous madness. He is clearly a Luddite. But this simple observation does not dismiss his argument; hard as it is for me to admit, in this particular passage his reasoning deserves attention. I felt compelled to confront it.

Kaczynski's dystopian vision describes the phenomenon of unintended consequences, a well-known problem that accompanies the design and use of any technology, and one directly related to Murphy's law: "Anything that can go wrong, will" (actually, this is Finagle's law, which in itself shows that Finagle was right). The overuse of antibiotics has produced what may be the gravest problem of this kind so far: the emergence of antibiotic-resistant and far more dangerous bacteria. Similar things happened when DDT was used against malaria mosquitoes: the mosquitoes acquired resistance to the product meant to destroy them, and the malaria parasites themselves developed multi-drug-resistant genes. The cause of so many such surprises seems clear: the systems involved are complex, involving interaction among, and feedback between, many parts. Any change to such a system sends out a shock wave whose repercussions are impossible to predict; this is all the more true when human actions are involved.

I began showing friends the Kaczynski passage quoted in The Age of Spiritual Machines; I would hand them Kurzweil's book, let them read the excerpt, and then watch their reaction as they discovered its author. Around the same time, I came across Hans Moravec's book Robot: Mere Machine to Transcendent Mind. Moravec, an eminent figure in robotics research, helped found one of the world's largest robotics programs, at Carnegie Mellon University. Robot gave me more material to try out on my friends, material surprisingly consistent with Kaczynski's argument. For example:

The short run (early 2000s)

"Only very rarely does a biological species survive an encounter with a rival species of a higher degree of evolution. Ten million years ago, North America and South America were separated by a sunken Isthmus of Panama. Like Australia today, South America was populated by marsupial mammals, including the marsupial equivalents of rats, deer, and tigers. When the isthmus connecting the two Americas rose, it took only a few thousand years for the placental species from the North, with their slightly more efficient metabolisms and reproductive and nervous systems, to displace and eliminate almost all of the southern marsupials.

In a completely unfettered free market, robots of a higher degree of evolution would surely affect humans, just as the North American placentals affected the South American marsupials (and as man himself has affected a great many species). The robotics industries would compete fiercely in a race for matter, energy, and space, incidentally driving prices beyond human reach. Unable to afford the necessities of life, biological humans would be squeezed out of existence.

We probably have some breathing room, because we do not live in a completely free market. Government coerces certain collective behaviors, primarily through taxation. Applied judiciously, such regulation could allow human populations to benefit greatly from the labor of robots, and for a long time."

A textbook dystopia, and yet Moravec is only warming up. Further on, he explains how, in the 21st century, our main task will be to "ensure the continued cooperation of the robotics industries" by passing laws requiring them to remain "nice," and he recalls how extremely dangerous a human can be "once transformed into an unbounded superintelligent robot."

Moravec's thesis is that robots will eventually succeed us: for him, humanity is clearly doomed to disappear.

Clearly it was time to talk this over with my friend Danny Hillis. Danny became famous as the co-founder of Thinking Machines Corporation, which built an extremely powerful parallel supercomputer. Despite my current title of Chief Scientist at Sun Microsystems, I am more a computer architect than a scientist in the strict sense, and my respect for Danny's knowledge of the information and physical sciences is unparalleled. Danny is also a respected futurist, someone who thinks long-term: four years ago he created the Long Now Foundation, which is building a clock designed to last ten thousand years, in an attempt to draw attention to our society's pathetically short attention span.

So I flew to Los Angeles for the express purpose of having dinner with Danny and his wife, Pati. Following what had become a routine, I laid out the ideas and passages I found so disturbing. Danny's answer, aimed directly at Kurzweil's scenario of humans merging with machines, came swiftly and rather surprised me: the changes would come gradually, he said, and we would get used to them.

On reflection, though, this did not surprise me all that much. In Kurzweil's book I had noted a quote from Danny saying, "I'm as fond of my body as anyone, but if a silicon body lets me live to 200, I'll take it." Neither the process itself nor the risks it entailed seemed to worry him in the least. Unlike me.

While talking about and turning over the ideas of Kurzweil, Kaczynski, and Moravec, I suddenly remembered a novel I had read nearly twenty years ago, The White Plague by Frank Herbert, in which a molecular biologist is driven insane by the senseless murder of his family. To take revenge, he constructs and spreads the bacilli of a new, highly contagious plague that kills widely but selectively (luckily, Kaczynski was a mathematician, not a molecular biologist). Another memory came back to me as well: the Borg of "Star Trek," a hive of partly biological, partly robotic creatures with a strongly destructive bent. Since Borg-like disasters are a staple of science fiction, why hadn't I worried about such robotic dystopias earlier? And why weren't other people more worried about these nightmarish scenarios?

The answer surely lies in our attitude toward the new: in our tendency toward instant familiarity and unquestioning acceptance. Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling technologies of the 21st century (robotics, genetic engineering, and nanotechnology) pose a threat different in kind from earlier technologies. Concretely, robots, genetically modified organisms, and "nanobots" share a dangerous amplifying factor: they can self-replicate. A bomb is blown up only once; a robot, by contrast, can proliferate and quickly get out of control.

For twenty-five years, my work has centered on computer networks, where the sending and receiving of messages creates the opportunity for uncontrolled replication. But while replication in a computer or a computer network can cause damage, the worst outcome is that a machine or network is put out of service or that access to it is blocked. Uncontrolled self-replication in these newer technologies places us in far graver danger: that of substantial damage to the physical world.
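To make this asymmetry concrete, here is a toy illustration (my own sketch, with an arbitrary, hypothetical doubling rule; it models no real organism, worm, or nanodevice): a one-shot failure remains a single failure, while anything that copies itself compounds geometrically.

```python
# Toy comparison of one-shot damage vs. self-replicating damage.
# Hypothetical numbers, for illustration only.

ONE_SHOT_DAMAGE = 1  # a bomb "is blown up only once"

def replicator_population(generations: int, growth_factor: int = 2) -> int:
    """Population after `generations` generations, multiplying by `growth_factor` each time."""
    return growth_factor ** generations

for gen in (10, 20, 40):
    print(f"after {gen} generations: {replicator_population(gen):,} copies "
          f"(vs. {ONE_SHOT_DAMAGE} unit of one-shot damage)")
# after 10 generations: 1,024 copies (vs. 1 unit of one-shot damage)
# after 20 generations: 1,048,576 copies (vs. 1 unit of one-shot damage)
# after 40 generations: 1,099,511,627,776 copies (vs. 1 unit of one-shot damage)
```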

Moreover, each of these technologies offers its own secret promise, and what drives us forward is none other than the vision of near-immortality that Kurzweil sees in his robot dreams. Genetic engineering may soon make it possible to treat, or even eradicate, most diseases; nanotechnology and nanomedicine could address still others. Together they could significantly extend our life expectancy and improve its quality. Yet with each of these technologies, a sequence of small steps, each sensible in isolation, leads to a massive accumulation of power and, with it, to formidable danger.

What is different from the 20th century? Certainly, the technologies behind weapons of mass destruction (WMD), nuclear, biological, and chemical (NBC), were powerful, and the arsenal posed an extreme threat. But building nuclear weapons required, at least for a time, access to rare (indeed, effectively unavailable) raw materials, as well as to highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies, genetics, nanotechnology, and robotics (GNR), are so powerful that they can spawn entirely new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are largely within the reach of individuals or small groups. These technologies do not require large-scale facilities or rare raw materials; knowledge alone is enough to use them.

Thus the threat facing us today is no longer limited to weapons of mass destruction; it now includes knowledge which, by itself, enables destruction on a very large scale. And this destructive potential is hugely amplified by the power of self-replication.

It does not seem unreasonable to me to say that, having reached the peak of extreme evil, we are preparing to push its limits further still. Surprising and terrible, this evil extends far beyond the devastating arsenals that were the preserve of nation-states, and falls today within the reach of isolated extremists.

Nothing about the way I came to be involved with computers suggested that such challenges would one day present themselves to me.

I have always been driven by a strong need to ask questions and find answers. At age three, already reading, I was taken by my father to the elementary school, where I sat on the principal's lap and read him a story. I started school early, later skipped a grade, and escaped into books. I had an incredible thirst for learning, and I asked lots of questions, often driving adults to distraction.

As a teenager I was very interested in science and technology. I wanted to be a ham radio operator but did not have the money for the equipment. Ham radio was the Internet of its time: very addictive, and quite solitary. Money aside, my mother put her foot down: there was no question of my taking it up; I was antisocial enough already.

I may not have had many close friends, but I was awash in ideas. By high school I had discovered the great science fiction writers. I remember especially Heinlein's Have Space Suit—Will Travel and Asimov's I, Robot, with its Three Laws of Robotics. I was enchanted by the descriptions of space travel. I dreamed of having a telescope to look at the stars, but since I had no money to buy or build one, I consoled myself with library books on telescope-making. I soared, but only in my imagination.

Thursday nights my parents went bowling, and we kids stayed home alone. Thursday was also the night of Gene Roddenberry's original "Star Trek," whose episodes were first airing at the time. The series left a deep impression on me. I came to accept its notion that humans had a future in space, Western style, with invincible heroes and extraordinary adventures. Roddenberry's vision of the centuries to come rested on strong moral values, embodied in codes of conduct such as the Prime Directive: not to interfere in the development of less technologically advanced civilizations. This held an incredible appeal for me: at the helm of that future were not robots but ethical human beings, and I took Roddenberry's dream as part of my own.

I excelled in mathematics in high school, and when I went to the University of Michigan to study engineering I enrolled immediately in the advanced mathematics sequence. Solving math problems was an exciting challenge, but with computers I discovered something much more interesting: a machine into which you could put a program that attempted to solve a problem, after which the machine quickly checked the solution. The computer had a clear notion of correct and incorrect, true and false. Were my ideas correct? The machine could tell me. This was very seductive.

I was lucky enough to get a job programming early supercomputers and to discover the amazing power of large machines to numerically simulate advanced designs. When I arrived at UC Berkeley in the mid-1970s for graduate school, I started staying up late, often all night, inventing new worlds inside the machines. Solving problems. Writing the code that begged so insistently to be written.

In The Agony and the Ecstasy, his fictionalized biography of Michelangelo, Irving Stone describes with striking vividness how the sculptor, "discovering the secret" of the stone, let his visions guide his chisel to free the statues from their mineral matrix. In the same way, in my most euphoric moments, the software seemed to emerge from the depths of the computer. Once it was complete in my mind, I felt that it was already sitting there in the machine, just waiting for the moment of its release. Seen this way, staying up all night seemed a small price to pay to free it, to give my ideas form.

After a few years at Berkeley I started sending out some of the software I had written (an instructional Pascal system, Unix utilities, and a text editor called vi, which, to my surprise, is still in use twenty years later) to people who had small PDP-11 and VAX minicomputers like ours. These software adventures eventually gave rise to the Berkeley version of the Unix operating system, which, for me personally, turned into a "success disaster": so many people wanted it that I never managed to finish my PhD. Instead, I was recruited by DARPA to put Berkeley Unix on the Internet and to fix it so that it was reliable and could run large research applications well. All of this was great fun and very rewarding.

And, frankly, there wasn't a robot anywhere in sight. Not here, not nearby.

Still, by the early 1980s I was drowning. The Unix releases were very successful, and soon what had begun as a small personal project had funding and staff, but, as always at Berkeley, the problem was less money than space. Given the size of the project and the staff it required, there was no room for us. So when the other founders of Sun Microsystems came calling, I jumped at the chance to join them. At Sun, the long days continued through the first generations of workstations and personal computers, and I have had the pleasure of helping to create advanced microprocessor technologies and Internet technologies such as Java and Jini.


All of this, I hope, makes it clear that I am not a Luddite. Quite the opposite. I have always held the quest for scientific truth in the highest regard, and I have always believed deeply in the ability of great engineering to bring material progress. The Industrial Revolution has immeasurably improved everyone's life over the last two centuries, and in my own career my desire has always been to build useful solutions to real problems, one problem at a time.

I have not been disappointed. My work has had repercussions well beyond anything I expected, and its large-scale use has exceeded my wildest dreams. For twenty years now I have been racking my brains over how to make computers as reliable as I want them to be (we are still far from that goal) and how to make them simple to use (a goal even further from being reached). Despite some progress, the problems that remain seem even more daunting.

But while I was aware of the moral dilemmas surrounding the consequences of technology in fields such as weapons research, I never expected to confront such issues in my own field. Or at least not so soon.

Caught up in the vortex of a transformation, it is no doubt always difficult to see its real impact. Whether in science or in technology, failing to grasp the consequences of our inventions seems a common failing among researchers, caught up as we are in the thrill of discovery and innovation. The natural desire to know, inherent in the scientific quest, has burned in us for so long that we forget to pause and notice this: the progress that yields ever newer and more powerful technologies can escape us and take on a life of its own.

I realized long ago that the important advances in information technology come not from the work of computer scientists, computer architects, or engineers, but from that of physical scientists. In the early 1980s, the physicists Stephen Wolfram and Brosl Hasslacher introduced me to chaos theory and nonlinear systems. In the 1990s, conversations with Danny Hillis, the biologist Stuart Kauffman, the Nobel laureate in physics Murray Gell-Mann, and others introduced me to complex systems. More recently, Hasslacher and Mark Reed, an engineer and chip-device physicist, opened my eyes to the extraordinary possibilities of molecular electronics.

In my own work, as co-designer of three microprocessor architectures (SPARC, picoJava, and MAJC) and of several of their implementations, I have been in a prime position to observe, personally and repeatedly, the validity of Moore's law. For decades, this law has let us accurately estimate the exponential rate of improvement of semiconductor technology. Until last year I was convinced that around 2010 certain physical limits would finally be reached and that, as a result, the rate of growth would slow; I saw no clear sign that a new technology would arrive in time to keep up the pace.

But because of recent, rapid, and striking progress in molecular electronics, where individual atoms and molecules replace lithographed transistors, and in related nanoscale technologies, everything suggests that we should be able to meet or exceed the Moore's law rate of progress for another thirty years. By 2030, then, we are likely to be able to build, in quantity, machines a million times as powerful as today's personal computers: powerful enough, clearly, to make the dreams of Kurzweil and Moravec come true.
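The arithmetic behind that "million times" figure is simple compounding. Here is a back-of-the-envelope check (my illustration; the eighteen-month doubling period is the commonly cited Moore's-law rule of thumb, not a figure given in the text):

```python
# Rough check: how much improvement do 30 years of Moore's-law doublings give?
# Assumed doubling period: ~18 months (historical rule of thumb; estimates vary).

doubling_period_years = 1.5
horizon_years = 30        # roughly 2000 -> 2030, the window discussed above

doublings = horizon_years / doubling_period_years   # 20 doublings
improvement = 2 ** doublings                        # 2**20
print(f"{doublings:.0f} doublings -> about {improvement:,.0f}x today's power")
# 20 doublings -> about 1,048,576x today's power, i.e. roughly a million-fold
```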

Combine this tremendous computing power with the manipulative advances of the physical sciences and with the recent, crucial discoveries in genetics, and you unleash a wave of phenomenal transformative power. These combinations make it possible to contemplate a complete reshuffling of the deck, for better or for worse: the processes of replication and evolution, hitherto confined to the natural world, are about to become realms of human endeavor.

In designing software and microprocessors, I never had the feeling that I was building an "intelligent" machine. Given how fragile software and hardware are, and how clearly machines show no capacity to "think," I had always regarded such a thing as lying in the very distant future, even as a mere possibility.

But now, with the prospect of computing power approaching human capacities by 2030, a new idea forces itself upon me: that I may be working on tools that will enable the construction of a technology that could replace our species. How do I feel about this? Profoundly uncomfortable. Having struggled my entire career to build reliable software, I find a future far less rosy than some like to imagine to be more than likely. My personal experience suggests that we tend to overestimate our abilities as designers.

Given the tremendous power of these new technologies, shouldn't we be asking how best to coexist with them? And if our development of them could, in the long run, sound the death knell of our species, shouldn't we proceed with the greatest caution?

The dream of robotics is, first, that intelligent machines will do our work for us, so that, regaining a lost Eden, we may live lives of leisure. Yet in his history of such ideas, Darwin Among the Machines, George Dyson warns us: "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." As we have seen, Moravec agrees with him on this point, declaring himself convinced that our chances of surviving an encounter with the superior robot species are slim.

How soon might such an "intelligent" robot appear? The coming leap in computing power suggests that it could be by 2030. And once a first intelligent robot exists, it is only a small step to a robot species: to an intelligent robot that can make evolved copies of itself.

A second dream of robotics is that, little by little, our robotic technology will replace us, and that, through the transfer of consciousness, we will achieve near-immortality. It is this very process that Danny Hillis thinks we will get used to, and that Ray Kurzweil elegantly details in The Age of Spiritual Machines.

But if we become extensions of our technologies, what are our chances of remaining ourselves, or even of remaining human? It seems far more likely to me that a robotic existence would not resemble human existence in any sense that we understand, that robots would in no sense be our children, and that on this path our humanity may well be lost.

Genetic engineering promises to revolutionize agriculture by increasing yields while reducing pesticide use; to create tens of thousands of new species of bacteria, plants, viruses, and animals; to replace, or at least supplement, reproduction with cloning; to produce cures for countless diseases; to increase our life expectancy and our quality of life; and much, much more. We now know with certainty that these profound changes in the biological sciences are imminent, and that they will challenge all our notions of what life is.

Technologies such as human cloning have, in particular, made us aware of the profound ethical and moral issues at stake. If, for example, we were to use genetic engineering to restructure the human race into several separate and unequal species, we would imperil the notion of equality, itself an essential pillar of our democracy.

Given the tremendous power of genetic engineering, it is no surprise that fundamental questions of safety surround its use. My friend Amory Lovins recently co-authored, with Hunter Lovins, an editorial that offers an ecological perspective on some of these dangers. Among their concerns: that "the new botany aligns the development of plants with their economic, not evolutionary, success" (see "A Tale of Two Botanies," p. 247). Amory has long worked on the relationship between energy and resources, studying human-made systems through a "whole-system view," an approach that often finds simple, intelligent solutions to problems that look intractable from other angles. Here, too, the method proves illuminating.

Shortly after reading the Lovinses' editorial, I saw in The New York Times of November 19, 1999 an op-ed by Gregg Easterbrook about genetically modified crops, under the headline "Food for the Future: Someday, rice will have built-in vitamin A. Unless the Luddites win."

Are Amory and Hunter Lovins Luddites? Obviously not. No one doubts, I imagine, the likely benefits of golden rice, with its built-in vitamin A, provided it is developed with due care for the potential dangers of moving genes across species boundaries.

As the Lovinses' editorial attests, awareness of the dangers inherent in genetic engineering is beginning to grow. The public at large is aware of genetically modified foods, is uneasy about them, and seems to reject the idea that they should circulate without proper labeling.

But genetic engineering is already far along. As the Lovinses note, the USDA has already approved some fifty genetically engineered crops for unlimited release; more than half of the world's soybeans and a third of its maize now contain genes spliced in from other forms of life.

While there is no shortage of major issues here, my own principal fear about genetic engineering is narrower: that it could put within reach the power to unleash a "white plague," whether military, accidental, or deliberately terrorist.

The many wonders of nanotechnology were first imagined by the Nobel laureate in physics Richard Feynman, in a speech he gave in 1959, later published under the title "There's Plenty of Room at the Bottom." The book that made a big impression on me, in the mid-1980s, was Eric Drexler's Engines of Creation, in which he described, in vibrant terms, how manipulating matter at the atomic level could build a utopian future of abundance, where almost everything could be made at negligible cost and where, thanks to nanotechnology and artificial intelligence, almost any disease or physical problem could be solved.

A subsequent book co-authored by Drexler, Unbounding the Future: The Nanotechnology Revolution, imagined some of the changes that molecular-level "assemblers" might bring. Assemblers could make possible, at incredibly low cost, solar power, cures for diseases from cancer to the common cold through augmentation of the immune system, a top-to-bottom cleanup of the environment, and pocket supercomputers at ridiculous prices. They could mass-produce products of any complexity at a cost no greater than that of wood, make space travel more affordable than transoceanic cruises are today, and even restore extinct species.

I remember feeling good about nanotechnology after reading Engines of Creation. As a technologist, the book gave me a sense of calm: it heralded phenomenal progress, progress that was not only possible but perhaps inevitable. If nanotechnology was our future, then solving the multitude of problems in front of me no longer felt so urgent. I would get to Drexler's utopian future in due time; in the meantime, I might as well enjoy life a bit more in the here and now. Given his vision, it made no sense to work tirelessly day and night.

Drexler's vision was also a source of genuine amusement for me. More than once I caught myself extolling the extraordinary wonders of nanotechnology to people who had never heard of it. After titillating them with Drexler's descriptions, I would give them a little assignment of my own: "Using nanotechnology, create a vampire. For extra credit, create the antidote."


These wonders carried real dangers, and I was acutely aware of them. As I said at a conference in 1989, "we cannot simply do our science and not worry about these ethical issues."

But my subsequent conversations with physicists convinced me that nanotechnology would, in all likelihood, remain a dream, or at least would not work anytime soon. Shortly afterward I moved to Colorado, to a design group I had set up there, and my attention shifted to software for the Internet, chiefly to the ideas that would become Java and Jini.

And then, last summer, Brosl Hasslacher told me that nanoscale molecular electronics had become practical. This was genuinely dramatic news, at least to me, and I think to many others as well, and it radically changed my view of nanotechnology. It sent me back to Engines of Creation. Rereading Drexler's work more than ten years later, I was dismayed at how little attention I had paid to its very long section on "Dangers and Hopes," including a discussion of how nanotechnologies could become "engines of destruction."

Rereading these warnings today, I am struck by how naive some of Drexler's preventive proposals appear, and I now judge the risks to be far greater than he did at the time. (Having anticipated and described many of the technical and political problems of nanotechnology, Drexler founded the Foresight Institute in the late 1980s to help society prepare for advanced technologies, nanotechnology most of all.)

In all likelihood, the breakthrough that enables assemblers will come within the next twenty years. Molecular electronics, the newest subfield of nanotechnology, in which individual molecules serve as circuit elements, should mature quickly and become enormously lucrative within this decade, triggering a large and growing investment in all of nanotechnology.

Unfortunately, as with nuclear technology, it is far easier to use nanotechnology destructively than constructively. Nanotechnology has clear military and terrorist applications, and one need not be driven by suicidal impulses to release a "nanodevice" of mass destruction: such a device can be built to destroy selectively, affecting, for example, only a certain geographical area or a group of genetically distinct individuals.

An immediate consequence of this Faustian bargain, the price of access to the immense power nanotechnology confers, is the formidable threat it poses to us: that of the possible destruction of the biosphere, on which all life depends.

Drexler explains it in the following terms:

"Plants" with "leaves" no more efficient than today's solar collectors could out-compete real plants, crowding the biosphere with inedible foliage. Tough omnivorous "bacteria" could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, too small, and too rapidly spreading to stop, at least if we make no preparation. We have trouble enough controlling viruses and fruit flies. Among nanotechnology experts, this threat has become known as the "gray goo problem." Though masses of uncontrolled replicators need not be gray or gooey, the term emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but that need not make them valuable. The gray goo threat makes one thing perfectly clear: we cannot afford certain kinds of accidents with replicating assemblers."

To end up engulfed in gray goo would certainly be a depressing end to our adventure on Earth, far worse than mere fire or ice. And it could come about from a simple laboratory accident. Oops.
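How fast is "a matter of days"? A rough sanity check of the quoted claim (my arithmetic, not Drexler's text: the thousand-second replication cycle is his oft-cited illustrative figure, and the two masses are assumed orders of magnitude):

```python
import math

# Assumed, order-of-magnitude inputs (illustration only):
DOUBLING_TIME_S = 1_000      # Drexler's illustrative assembler replication cycle
REPLICATOR_MASS_KG = 1e-15   # assumed: roughly the mass of a bacterium
BIOSPHERE_MASS_KG = 1e15     # assumed: order of magnitude of Earth's biomass

# Doublings needed for one replicator's descendants to match the biosphere's mass.
doublings = math.log2(BIOSPHERE_MASS_KG / REPLICATOR_MASS_KG)   # ~100
elapsed_days = doublings * DOUBLING_TIME_S / 86_400             # seconds -> days
print(f"~{doublings:.0f} doublings, ~{elapsed_days:.1f} days")
# ~100 doublings, ~1.2 days: unchecked exponential replication really would be
# a matter of days, which is the force of Drexler's warning.
```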

Above all, it is the destructive power of self-replication in genetics, nanotechnology, and robotics (GNR) that should give us pause. Self-replication is the modus operandi of genetic engineering, which uses the machinery of the cell to duplicate its designs, and it is the prime danger underlying the gray goo scenario in nanotechnology. Stories of robots run amok, in the vein of the Borg, replicating or mutating to escape the ethical constraints imposed by their designers, are by now classics of science fiction books and films. It is even possible that self-replication will prove more fundamental than we thought, and hence harder, or even impossible, to control. A recent article by Stuart Kauffman in Nature, entitled "Self-replication: Even peptides do it," discusses the discovery that a 32-amino-acid peptide can "autocatalyse its own synthesis." Though the exact scope of this capability is not yet well understood, Kauffman notes that it could open "a route to self-reproducing molecular systems on a basis far wider than Watson-Crick base-pairing."

In truth, we have been warned for years of the dangers inherent in the spread of GNR knowledge, knowledge that by itself enables mass destruction. But the warnings have not been widely relayed, and the public debate has not been equal to the stakes. There is no profit in publicizing the dangers.

The nuclear, biological, and chemical (NBC) technologies behind the 20th century's weapons of mass destruction were, and remain, primarily military, developed in government laboratories. In sharp contrast, the GNR technologies of the 21st century have clear commercial uses and are being developed almost exclusively by private companies. In this age of triumphant commercialism, technology, with science as its handmaiden, is delivering a series of almost magical inventions that are more phenomenally lucrative than anything we have seen before. We are aggressively pursuing the promises of these new technologies within the now-unchallenged system of global capitalism, with its manifold financial incentives and competitive pressures.

"This is the first moment in the history of our planet when any species, by its own voluntary actions, has become a danger to itself, as well as to vast numbers of others. It might be a familiar progression, transpiring on many worlds: a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, both on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish."

These lines are Carl Sagan's, from his 1994 book Pale Blue Dot, describing his vision of the future of the human species in space. Only today do I realize how penetrating his insight was, and how sorely I miss, and will continue to miss, his voice. For all its eloquence, his contribution was not least one of simple common sense, a quality that, along with humility, many of the leading advocates of 21st-century technologies seem to lack.

I remember from my childhood that my grandmother was strongly opposed to the systematic use of antibiotics. She had worked as a nurse since before World War I, and she had a commonsense attitude that taking antibiotics, unless they were absolutely necessary, was bad for you.

It is not that she was an enemy of progress. In almost seventy years of nursing she witnessed many advances; my grandfather, a diabetic, benefited greatly from the improved treatments that became available in his lifetime. But, like many level-headed people, she would probably see rare arrogance in our attempts to design a robotic "replacement species" when we obviously have so much trouble making relatively simple things work, and so much trouble managing, or even understanding, ourselves.

I realize now that she had an awareness of the order of life, and of the necessity of living within and respecting that order. With that respect comes a necessary humility that we, with our early-21st-century hubris, lack at our peril. The commonsense view, grounded in this respect, is often right before it is scientifically proven. The evident fragility and shortcomings of human-made systems should give us pause; the fragility of the systems I have personally worked on certainly reminds me of that duty of humility.

We should have learned a lesson from the making of the first atomic bomb and the arms race that followed. We did not do well then, and the parallels with our current situation are troubling.

The effort to build the first atomic bomb was led by the brilliant physicist J. Robert Oppenheimer. Though not naturally drawn to politics, he became painfully aware of what he saw as a grave threat to Western civilization from the Third Reich, a fear made all the more acute by the possibility that Hitler might acquire nuclear weapons. Galvanized by this concern, he brought his powerful intellect, his passion for physics, and his charismatic leadership to Los Alamos, where he led a rapid and successful effort by an incredible collection of great minds to invent the bomb in short order.

What is striking is how naturally the effort continued once its original impetus was gone. At a meeting held shortly after the Allied victory in Europe, attended by physicists who felt that perhaps the effort should stop, Oppenheimer argued for continuing. His stated reason was somewhat curious: not fear of the terrible casualties the conquest of Japan would entail, but the view that the United Nations, then about to be formed, should have advance knowledge of atomic weapons. A more likely reason the project continued is the momentum it had built up: the first atomic test, Trinity, was within reach.

We know that, in preparing for this first test, the physicists pressed ahead despite a large number of possible dangers. They were initially worried, on the basis of a calculation by Edward Teller, that an atomic explosion might set the atmosphere on fire. A revised calculation then reduced this threat of planetary annihilation to a three-in-a-million chance. (Teller says he was later able to rule out atmospheric ignition entirely.) Oppenheimer, though, was concerned enough about the outcome of Trinity to arrange for a possible evacuation of the southwest part of New Mexico. And, of course, there loomed the threat of a nuclear arms race.

Less than a month after that first successful test, two atomic bombs destroyed Hiroshima and Nagasaki. Some scientists had suggested that the bomb simply be demonstrated rather than actually dropped on Japanese cities, arguing that this would greatly improve the chances of arms control after the war, but to no avail. With the tragedy of Pearl Harbor still fresh in American minds, it would have been very hard for President Truman to order a mere demonstration of the weapon rather than its actual use, as he did; the desire to end the war quickly and to spare the lives an invasion of Japan would inevitably have cost was too strong. Yet the overriding factor was probably simpler: as the physicist Freeman Dyson later put it, "the reason that it was dropped was just that nobody had the courage or the foresight to say no."

It is important to grasp the state of shock the physicists were in the day after the bombing of Hiroshima on August 6, 1945. They describe successive waves of emotion: first, a sense of accomplishment that the bomb had worked; then horror at all the people who had been killed; and finally the acute conviction that on no account should another bomb be dropped. Yet a second bomb was dropped, on Nagasaki, only three days after Hiroshima.

In November 1945, three months after the atomic bombings, Oppenheimer stood firmly behind the attitude of the scientific community, declaring: "It is not possible to be a scientist unless you believe that the knowledge of the world, and the power which this gives, is a thing which is of intrinsic value to humanity, and that you are using it to help in the spread of knowledge, and are willing to take the consequences."

Oppenheimer went on to work, with others, on the Acheson-Lilienthal report, which, in the words of Richard Rhodes in his recent book Visions of Technology, "found a way to prevent a clandestine nuclear arms race without resorting to armed world government"; their suggestion was a form of relinquishment of nuclear weapons work by nation-states to an international agency.

This proposal led to the Baruch Plan, submitted to the United Nations in June 1946 but never adopted (perhaps because, as Rhodes suggests, Bernard Baruch had "insisted on burdening the plan with conventional sanctions," thereby dooming it, even though it "would almost certainly have been rejected by Stalinist Russia anyway"). Other efforts to promote sensible steps toward internationalizing atomic power to prevent an arms race foundered on the combined distrust of American politicians and citizens on one side and of the equally suspicious Soviets on the other. Very quickly, the window for avoiding the arms race closed.

Two years later, Oppenheimer seemed to have reached another stage in his thinking. In 1948 he said: "In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose."

In 1949, the Soviets exploded an atomic bomb. By 1955, the United States had conducted atmospheric tests of hydrogen bombs that could be dropped from an airplane; the Soviet Union did the same. The arms race had begun.

Almost twenty years ago, in the documentary The Day After Trinity, Freeman Dyson summarized the scientific attitude that brought us to the edge of the nuclear abyss:

"I have felt it myself. The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it's there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles; this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds."

Now, as then, we are the creators of new technologies and the stars of an imagined future, driven, this time in a context of global competition, by the prospect of enormous financial rewards, and this despite the clear dangers, hardly pausing to evaluate what it might be like to try to live in a world that is the realistic outcome of what we are creating and imagining.

Since 1947, the Bulletin of the Atomic Scientists has displayed a Doomsday Clock on its cover. Reflecting shifts in the international situation, it has offered for more than fifty years an estimate of the relative nuclear danger we face. The hands of the clock have moved fifteen times; standing now at nine minutes to midnight, they indicate a continuing and real threat from nuclear weapons. The recent addition of India and Pakistan to the club of nuclear powers has dealt a severe blow to the goal of non-proliferation, a danger underscored in 1998 when the hands moved closer to midnight.

How much danger do we face today, not only from nuclear weapons, but from all of these technologies? How high, concretely, are the risks of our extinction?

The philosopher John Leslie, who has studied the question, puts the risk of human extinction at a minimum of 30 percent. Ray Kurzweil, for his part, believes that we have "a better than even chance of making it through," adding in passing that he has "always been accused of being an optimist." Not only are such estimates unencouraging, but they do not include the many horrific outcomes that fall short of extinction.

Faced with such assessments, some serious people are already suggesting that we simply move beyond Earth as quickly as possible. We would colonize the galaxy using von Neumann probes, which hop from star system to star system, replicating as they go. This step will become an unavoidable imperative within the next five billion years (or sooner, if our solar system is disastrously affected by the impending collision of our galaxy with Andromeda, expected within about three billion years); but if we take Kurzweil and Moravec at their word, the migration could prove necessary by the middle of this century.


What are the moral implications here? If we must leave Earth this soon for the species to survive, who will take responsibility for the fate of all those (most of us, after all) who are left behind? And even if we do scatter among the stars, isn't it likely that we will take our problems with us, or discover later that they have followed us? The fate of our species on Earth and our fate in the galaxy seem inextricably linked.

Another idea is to erect a series of shields against each of the dangerous technologies. The Strategic Defense Initiative proposed by the Reagan administration was an attempt at such a shield against the threat of a nuclear attack by the Soviet Union. But as Arthur C. Clarke, who was privy to the discussions surrounding the project, observed: "Though it might be possible, at vast expense, to construct local defense systems that would 'only' let through a few percent of ballistic missiles, the much-touted idea of an umbrella covering the entire United States was essentially nonsense. Luis Alvarez, perhaps the greatest experimental physicist of this century, remarked to me that the advocates of such schemes were 'very bright guys with no common sense.'"

"Looking into my often cloudy crystal ball," Clarke continued, "I do not rule out the possibility that a total defense might be developed within a century or two. But the technology required would produce, as a by-product, weapons so terrible that no one would bother with anything as primitive as ballistic missiles."

In Engines of Creation, Eric Drexler proposed building an active nanotechnological shield, a sort of immune system for the biosphere, to defend us against dangerous replicators of every kind, whether escaped from laboratories or maliciously created. But the shield he proposed would itself be extremely dangerous: nothing could prevent it from developing "autoimmune" problems and attacking the biosphere itself.

Similar difficulties attend the construction of shields against robotics and genetic engineering. These technologies are too powerful to be shielded against in the time available; and even if defensive shields could be deployed, their side effects would be at least as terrible as the technologies they were meant to protect us from.

As a result, all of these possibilities are either undesirable or unachievable, or both. The only realistic alternative, in my opinion, is relinquishment: limiting research into technologies that are too dangerous, by setting limits on our pursuit of certain kinds of knowledge.

Yes, I know, knowledge is good, and so is the search for new truths. Aristotle opens the Metaphysics with the simple observation: “All men by nature desire to know.” We have long recognized open access to information as a fundamental value of our society, and agreed that problems arise when we try to restrict access to knowledge or hinder its development. Lately, we have gone so far as to place scientific knowledge on a pedestal.

But if, now, despite these established precedents, open access to knowledge and its unlimited development clearly place us all in danger of extinction, then common sense demands that these beliefs, however fundamental and long-held, be re-examined.

Nietzsche, at the end of the 19th century, warned us not only that “God is dead,” but also that:

“[...] faith in science, which after all undeniably exists, cannot owe its origin to such a calculus of utility; it must have originated in spite of the fact that the uselessness and dangerousness of the ‘will to truth,’ of ‘truth at any price,’ are constantly being demonstrated to it.”

(The Gay Science, 344)

It is precisely this danger — the consequences of our search for truth — that threatens us today with all its weight. The truth that science seeks can unquestionably be considered a dangerous substitute for God if it is likely to lead to our extinction.

If, as a species, we could agree on our aspirations, on where we are headed, and on why, then we would make our future far less dangerous; we might then understand what we can and should relinquish. Otherwise, it is easy to imagine an arms race developing around GNR technologies, as happened in the 20th century around NBC technologies. Perhaps the greatest danger lies here, for once such a race begins, it is very hard to stop. This time, unlike in the era of the Manhattan Project, we are not at war, facing an implacable enemy that threatens our civilization; we are driven instead by our habits, our desires, our economic system, and the race for knowledge.

I believe that we would all like our course to be guided by collective ethical and moral values. If we had gained deeper collective wisdom over the past few millennia, a dialogue to this end would be easier to undertake, and the tremendous power about to be unleashed would be far less worrisome.

One might think that the instinct for self-preservation would lead us to such a dialogue. As individuals we clearly have this desire, yet as a species our collective behavior seems to work against us. In dealing with the nuclear threat, we often behaved dishonestly, both toward ourselves and toward one another, thereby greatly increasing the risks. Whether this was politically motivated, or a deliberate choice not to look ahead, or irrational fear in the face of the grave threats then confronting us, I do not know, but it does not bode well.

The new Pandora's boxes of genetics, nanotechnology, and robotics stand ajar, yet hardly anyone seems to have noticed. Ideas cannot be shut back in a box; unlike uranium or plutonium, an idea does not need to be mined or enriched, and it can be freely copied. Once released, it cannot be recalled. Churchill, in a famous backhanded compliment, observed that Americans and their leaders “always end up doing the right thing, once they have examined every other alternative.” In this case, however, we must act earlier, for doing the right thing only as a last resort could well seal our fate.

Thoreau said, “We do not ride on the railroad; it rides upon us.” And that is what is at stake now. The real question is, indeed, which of the two will dominate the other, and whether we will survive our technologies.

We are being propelled into this new century with no plan, no control, no brakes. Have we already gone too far down this path to change course? I do not believe so; but we have not yet tried, and our last chances to regain control, our point of no return, are fast approaching. We already have our first robotic pets, and some genetic engineering techniques are now commercially available; our nanoscale techniques, meanwhile, are advancing rapidly.

While their development proceeds through a number of steps, the final demonstration step need not be anything as large and difficult as the Manhattan Project or the Trinity test. The breakthrough to uncontrolled self-replication in robotics, genetic engineering, or nanotechnology could come suddenly, reprising the surprise we felt the day the news broke of the cloning of a mammal.

Nevertheless, I believe we still have strong and solid grounds for hope. The efforts to deal with weapons of mass destruction over the past century offer a shining example of relinquishment worth considering: the unilateral and unconditional abandonment by the United States of the development of biological weapons. This renunciation stemmed from a twofold realization: on the one hand, it takes an enormous effort to create these terrible weapons; on the other, once created, they can easily be duplicated and fall into the hands of rogue nations or terrorist groups.

All of this made it clear that developing these weapons would only add new threats, and that renouncing them would increase our security.

This solemn commitment to forgo bacteriological and chemical weapons was enshrined in the Biological Weapons Convention (BWC) of 1972 and the Chemical Weapons Convention (CWC) of 1993.

As for the persistent and quite considerable threat of nuclear weapons, under which we have now lived for more than fifty years, the recent rejection by the United States Senate of the Comprehensive Test Ban Treaty makes clear that relinquishing nuclear weapons will not be politically easy. But the end of the Cold War gives us an exceptional opportunity to avert a multipolar arms race. Building on the BWC and the CWC, the successful abolition of nuclear weapons could encourage us to abandon other dangerous technologies. (In fact, getting rid of all but about one hundred nuclear weapons worldwide, roughly the total destructive power of the Second World War and a considerably easier task, would suffice to eliminate this threat of extinction.)

Verifying compliance with such relinquishment will be a difficult problem, but not an insoluble one. Fortunately, a great deal of relevant groundwork has already been done in the context of the BWC and other treaties. Our main task will be to apply it to technologies that are by nature far more commercial than military. There is a substantial need for transparency here, since the difficulty of verification is directly proportional to the difficulty of distinguishing prohibited activities from legitimate ones.

I honestly think that in 1945, the situation was simpler than the one we are facing today: it was relatively simple to draw the line between commercial and military nuclear technologies. In addition, control was facilitated by the very nature of atomic tests and the ease with which the degree of radioactivity could be measured. Research for military applications could be conducted in government laboratories such as the one in Los Alamos, and the results kept secret for as long as possible.

GNR technologies do not clearly fall into two distinct families, the military and the commercial; given their market potential, it is difficult to imagine that their development could remain confined to state laboratories. In the context of their large-scale commercial development, monitoring the effectiveness of the disengagement will require the establishment of a verification regime similar to that of biological weapons, but on a scale unprecedented to date.

Inevitably, the tension will grow between two demands: on one side, the desire to protect privacy and proprietary information; on the other, the need for that same information to remain accessible in the interest of all. We will undoubtedly meet strong resistance to this loss of privacy and freedom of action.

Verifying the actual relinquishment of certain GNR technologies will have to take place in cyberspace as well as at physical facilities. In a world of proprietary information, the crucial challenge will be to make the necessary transparency acceptable, presumably by providing new forms of protection for intellectual property.

Verifying such compliance will also require that scientists and engineers adopt a strong code of ethics, resembling the Hippocratic Oath, together with the courage to sound the alarm when necessary, even at a high personal price. This would answer, fifty years after Hiroshima, the call of the Nobel laureate Hans Bethe, one of the most senior surviving members of the Manhattan Project, for all scientists to “cease and desist all activities of design, development, improvement, and manufacture of nuclear weapons and other weapons with the potential for mass destruction.” In the 21st century, this will demand vigilance and personal responsibility from those who could work on both NBC and GNR technologies, to prevent the deployment of weapons, and of engines of mass destruction made accessible through knowledge alone.

Thoreau also said that we are “rich in proportion to the number of things we can afford to let alone.” Each of us aspires to happiness, but is it reasonable to run such a grave risk of total destruction in order to accumulate still more knowledge and still more possessions? Common sense says there is a limit to our material needs, and that certain knowledge is decidedly too dangerous: better to give it up.

Nor should we dream of near immortality without first estimating the costs, and without taking into account the growing risk of extinction. Immortality may be the original utopia; however, it is certainly not the only one.

I recently had the privilege of meeting the distinguished author and scholar Jacques Attali, whose book Lignes d'horizon (Millennium, in its English translation) helped inspire the Java and Jini approach to the coming age of pervasive computing. In his latest book, Fraternités, Attali describes how our utopias have changed over time:

“At the dawn of societies, men, knowing that perfection belonged only to their gods, saw their passage on Earth only as a labyrinth of pain at the end of which was a door opening, via death, to the company of the gods and to Eternity. With the Hebrews and then with the Greeks, men dared to free themselves from theological requirements and dream of an ideal City where Freedom would flourish. Others, watching the evolution of market society, understood that the freedom of some would lead to the alienation of others, and they sought Equality.”

Jacques Attali helped me understand how these three utopian goals coexist in tension in our society today. He goes on to describe a fourth utopia, Fraternity, whose foundation is altruism. Fraternity alone associates individual happiness with the happiness of others, offering the promise of being self-sustaining.

This crystallized for me the problem I have with Kurzweil's dream. A technological approach to Eternity, the near-immortality promised by robotics, is not necessarily the most desirable utopia, and pursuing this kind of dream carries obvious dangers. Perhaps we should reconsider our utopian choices.

Where should we look for a new ethical basis to guide us? I have found the ideas of the Dalai Lama in Ancient Wisdom, Modern World very helpful in this regard. As is perhaps widely known but too little practiced, the Dalai Lama argues that the most important thing for us is to live our lives with love and compassion for others, and that our societies need to develop a stronger notion of universal responsibility and interdependence; he proposes a practical principle of ethical conduct for individuals and societies alike, one consonant with Attali's utopia of Fraternity.

Moreover, the Dalai Lama stresses, we must understand what it is that makes people happy, and face the facts: the key is neither material progress nor the pursuit of the power that knowledge confers. Clearly, there are limits to what science and the scientific pursuit alone can do.

Our Western notion of happiness seems to come from the Greeks, who defined it as “the exercise of vital powers, along lines of excellence, in a life affording them scope.”

Clearly, we need to find meaningful challenges and continue to explore new paths if we are to be happy in whatever lies ahead. But I believe we must find other outlets for our creative forces, beyond the culture of perpetual growth. While this growth has brought great benefits for centuries, it has not brought us perfect happiness. The time has come to recognize that unchecked and undirected growth through science and technology comes with considerable dangers.


It has now been over a year since I first met Ray Kurzweil and John Searle. Around me, I find reason for hope in the voices raised for caution and relinquishment, and in the people who, like me, are worried about our present predicament. I too feel a deepened sense of personal responsibility, not for the work I have already done, but for the work I might yet do, at the confluence of the sciences.

However, many of those who are aware of the dangers remain strangely silent. When pressed, they reply that “this is nothing new,” as if mere awareness of the latent danger were enough. “Universities are full of bioethicists who study this stuff all day long,” they tell me. Or: “All of this has already been said and written, and by experts.” And finally: “These fears and these arguments are déjà vu,” they complain.

I don't know where these people hide their fear. As an architect of complex systems, I enter this arena as a generalist. But should that make me any less alarmed? I am aware of how much has been written, said, and taught on this subject, and with what authority. But has it reached people? Does it mean we can ignore the dangers now knocking at our door?

It is not enough to know; it is also necessary to act. Knowledge has become a weapon that we are turning against ourselves. Can we still doubt it?

The experiences of nuclear researchers clearly show that it is time to take full responsibility for our actions, that things can get out of hand, and that a process can escape our control and become autonomous. It is possible that, like them, without even having time to notice it, we may trigger insurmountable problems. The time to act is now if we do not want to be surprised and shocked, like them, by the consequences of our inventions.

I have always worked tirelessly to make my software more reliable. Software is a tool, and as a toolmaker I must wrestle with the uses to which the tools I make are put. I have always believed that, given their many uses, making software more reliable would make the world a better and safer place. If I came to believe the opposite, I would be morally obliged to stop this work. Today, I can no longer rule out such a possibility.

All of this leaves me not angry, but a little melancholic. Henceforth, for me, progress will be somewhat bittersweet.

Do you remember the wonderful penultimate scene of Manhattan, where Woody Allen lies on his couch, speaking into the microphone of his tape recorder? He is dictating a short story about people who invent useless, neurotic problems for themselves, because it keeps them from facing the more insoluble and terrifying problems of the universe.

He comes to the question “What makes life worth living?”, and runs through the things that, in his case, make it so: Groucho Marx, Willie Mays, the second movement of the Jupiter Symphony, Louis Armstrong's “Potato Head Blues,” Swedish cinema, Flaubert's Sentimental Education, Marlon Brando, Frank Sinatra, Cézanne's apples and pears, and finally, the crowning item: the face of his girlfriend Tracy.

Each of us holds certain things dear above all else, and this capacity to care for them is the very substrate of our humanity. In the final analysis, it is because of this undeniable capacity that I remain confident we will meet the daunting challenges the future holds.

My immediate hope is to take part in a much larger discussion of the issues raised here, with people from many different backgrounds, and in a spirit predisposed neither to fear technology nor to idolize it in the name of vested interests.

As a first step, I have twice raised many of these issues at events sponsored by the Aspen Institute, and have separately proposed that the American Academy of Arts and Sciences take them up as an extension of its work with the Pugwash Conferences, which since 1957 have been devoted to arms control, especially nuclear arms, and have formulated realistic recommendations.

What we may regret is that the Pugwash meetings began only well after the nuclear genie was out of the bottle, roughly fifteen years too late. In the same way, we are already well behind in beginning a serious examination of the challenges posed by 21st-century technologies, above all the prevention of mass destruction made accessible through knowledge alone. Delaying the start any further would be unacceptable.

So I am continuing to explore; there is still a lot to learn. Are we called to succeed or to fail, to survive or to fall under the blows of these technologies? It is not written yet.

Here I am again, up late; it is almost six in the morning. I am trying to imagine some better answers, to break the spell and free them from the stone.

Bill Joy
