WHEN MACHINES THINK
Essays on the Millennium
Move over, HAL and C3PO. Real versions of moviedom's famed thinking machines are on their way—very soon, says Ray Kurzweil, a leading U.S. technologist and author of the new book The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking). Kurzweil, based in Wellesley, Mass., pioneered the development of machines that can read text, turn print into speech and recognize voices. He was awarded Carnegie-Mellon University's top science honour, the Dickson Prize, in 1994, and named Inventor of the Year by the Massachusetts Institute of Technology in 1988. He has also been honoured by the Canadian National Institute for the Blind. In this essay, he sets out some of the startling developments—from plugging brains directly into the World Wide Web to replicating individuals, Max Headroom-style, in robotic form—that he expects in the 21st century.
A threshold event will take place early in the 21st century: the emergence of machines more intelligent than their creators. By 2019, a $1,000 computer will match the processing power of the human brain—about 20 million billion calculations per second. Organizing these resources—the "software" of intelligence—will take us to 2029, by which time your average personal computer will be equivalent to 1,000 human brains.
Once a computer achieves a level of intelligence comparable to human intelligence, it will necessarily soar past it. For one thing, computers can easily share their knowledge. If I learn French, or read War and Peace, I can't readily download that learning to you. You have to acquire that scholarship the same painstaking way that I did. But if one computer learns a skill or gains an insight, it can immediately share that wisdom with billions of other computers. So every computer can be a master of all human and machine-acquired knowledge.
Keep in mind that this is not an alien invasion of intelligent machines. It is emerging from within our civilization. There will not be a clear distinction between human and machine as we go through the 21st century. First of all, we will be putting computers— neural implants—directly into our brains. We have already started down this path. We have neural implants to counteract Parkinson’s disease and tremors from multiple sclerosis. We have cochlear implants that restore hearing to deaf individuals. Under development is a retina implant that will perform a similar function for blind people, basically replacing the visual processing circuits of the brain. Recently, scientists from Emory University in Atlanta placed a chip in the brain of a paralyzed stroke victim, who can now begin to communicate and control his environment directly from his brain.
In the 2020s, neural implants will not be just for disabled people. There will be ubiquitous use of neural implants to improve our sensory experiences, perception, memory and logical thinking. These implants will also plug us in directly to the World Wide Web. This technology will enable us to have virtual reality experiences with other people—or simulated people—without requiring any equipment not already in our heads. And virtual reality will not be the crude experience that you may have experienced today in arcade games. Virtual reality will be as realistic, detailed and subtle as real reality. So instead of just phoning a friend, you can meet in a virtual French café in Paris, or stroll down a virtual Champs Elysées, and it will seem very real. People will be able to have any type of experience with anyone—business, social, romantic, sexual—regardless of physical proximity.
One approach to designing intelligent computers will be to copy the human brain, so these machines will seem very human. And through nanotechnology, which is the ability to create physical objects atom by atom, they will have human-like—albeit greatly enhanced—bodies as well. Having human origins, they will claim to be human, and to have human feelings. And being immensely intelligent, they’ll be very convincing when they tell us these things.
The next 20 years will see far more change than the previous hundred. Most comments I hear about the future are not very well thought out. One common failure is to focus on only one aspect of science and technology, while ignoring other developments that are likely to intersect. Another is to see only one or two iterations of advancement in a technology, and then assume that progress will come to a halt. Another is failing to understand the accelerating nature of technological progress.
One very important trend is referred to as “Moore’s Law.” Gordon Moore, one of the inventors of integrated circuits, and then chairman of Intel, noted in the mid-1970s that we could squeeze twice as many transistors on an integrated circuit every 24 months. The implication is that computers, which are built from integrated circuits, are doubling in power every two years. Lately, the rate has been even faster.
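The doubling rule above can be written as a one-line formula. A minimal sketch follows; the 1975 baseline transistor count is an illustrative assumption, not a figure from the essay, and it cancels out when computing growth ratios.

```python
def transistors(year, base_year=1975, base_count=10_000, months_per_doubling=24):
    """Estimate transistor count per chip under Moore's Law:
    a fixed doubling period of 24 months."""
    doublings = (year - base_year) * 12 / months_per_doubling
    return base_count * 2 ** doublings

# Doubling every two years means a 2**10 = 1,024-fold increase over 20 years.
growth = transistors(1995) / transistors(1975)
```

Ten doublings in two decades yields roughly a thousandfold gain, which is why small changes in the doubling period matter so much to any long-range forecast.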
After 60 years of devoted service, Moore’s Law will die a dignified death around the year 2019. By that time, transistor features will be just a few atoms in width, and the strategy of ever finer photolithography will have run its course. So will that be the end of the exponential growth of computing?
Don’t bet on it. Computing devices have been consistently multiplying in power from the mechanical calculating devices used in the 1890 U.S. census, to Alan Turing’s relay-based machine that cracked the Nazi Enigma code, to the vacuum-tube computer that predicted the election of Eisenhower, to the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer that I used to dictate this article, using speech-to-text software.
A new pattern of exponential growth will take over from Moore’s Law, just as Moore’s Law took over from discrete transistors, and vacuum tubes before that. I call it the Law of Accelerating Returns. There are many new technologies waiting in the wings—three-dimensional chips, nanotubes, optical computing, crystalline computing, DNA computing, and quantum computing—to keep the Law of Accelerating Returns going for a long time.
So where will this take us? By 2030, it will take a village of human brains to match a $1,000 personal computer. By 2055, a thousand dollars of computing will equal the processing power of all human brains on Earth. OK, I may be off a year or two.
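The 2055 figure can be sanity-checked with simple arithmetic. As a sketch, assume (hypothetically) that $1,000 of computing matches one human brain in 2019 and doubles every year thereafter; the population figure is a round assumption.

```python
import math

# Assumed: $1,000 of computing equals one brain in 2019, doubling yearly.
brains_on_earth = 1e10  # round, assumed population figure

# Number of doublings needed to go from one brain to all brains on Earth.
doublings_needed = math.log2(brains_on_earth)  # about 33
year_reached = 2019 + math.ceil(doublings_needed)
```

The result lands within a year or two of the essay's 2055 estimate—consistent with the author's own "I may be off a year or two."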
Actually, this only includes those brains still using carbon-based neurons. While human neurons are wondrous creations in a way, we certainly wouldn’t design computing circuits the same way. Our electronic circuits are already more than a million times faster. Most of the complexity of a human neuron is devoted to maintaining its life-support functions, not its information-processing capabilities. Ultimately, we will need to port our mental processes to a more suitable computing environment. Then our minds won’t have to stay so small, being constrained as they are today to a mere 100 trillion neural connections (each operating at a ponderous 200 calculations per second).
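The brain figures quoted here multiply out as follows—a back-of-the-envelope check using only the numbers given in the essay:

```python
# ~100 trillion neural connections, each performing ~200 calculations
# per second, as stated in the essay.
connections = 100e12
calcs_per_second_each = 200

# 2e16 calculations per second: the "20 million billion" figure
# quoted for the human brain earlier in the essay.
brain_capacity = connections * calcs_per_second_each
```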
So far, I’ve been talking about the hardware of computing. But achieving the computational capacity of the human brain, or even villages and nations of human brains, will not automatically produce human levels of capability. The organization and content of these resources—the software of intelligence—is also critical.
There are a number of compelling scenarios to capture higher levels of intelligence in our computers, and ultimately human levels and beyond. Using massive parallel processing and neural networks that emulate the brain, we will be able to evolve and train a system to understand language and absorb knowledge, including the ability to read and understand written documents. Although the ability of today’s computers to extract and learn knowledge from natural language documents is limited, their capacity in this domain is improving rapidly. Computers will be able to read on their own, understanding what they have read, by the second decade of the 21st century. We can then have our computers read all of the world’s literature—books, magazines, scientific journals, and other available material. Ultimately, the machines will gather knowledge on their own by venturing into the physical world, drawing from the full spectrum of media and information services, and sharing knowledge with each other (which machines can do far more easily than their human creators).
Once a computer achieves a human level of intelligence, it will quickly surpass it. A computer can remember billions or even trillions of facts perfectly, while we are hard-pressed to remember a handful of phone numbers. A computer can search a database with billions of records in fractions of a second. And computers can readily share their knowledge. The combination of human-level intelligence in a machine with a computer’s inherent superiority in the speed, accuracy and sharing ability of its memory will be formidable.
Let me describe one other particularly critical scenario. We have a good example of an intelligent process inside each of us—and it’s not even copyrighted. There is no reason why we cannot reverse engineer the human brain, and essentially copy its design.
One approach is to scan a living brain. We have techniques today, including high-resolution magnetic resonance imaging—MRI—scans, optical imaging, near-infrared scanning, and other noninvasive scanning technologies, that are capable of resolving individual somas, or neuron cell bodies. Brain-scanning technologies are increasing their resolution with each new generation. The next generation will enable us to resolve the connections between neurons. Ultimately, we will be able to peer inside the synapses and record the neurotransmitter strengths.
There are a number of ways to apply the information. One approach is to use the results to create more intelligent algorithms—the basic elements of computer programs—for our machines, particularly those based on a neural-net design. With this approach, we don’t have to copy every single connection. There is a great deal of repetition and redundancy within any particular brain region. With this information, we can design simulated nets that operate similarly. There are already many efforts under way to scan the human brain and apply the insights derived to the design of intelligent machines.
Perhaps a more interesting approach than this scanning-the-brain-to-understand-it scenario is scanning-the-brain-to-download-it. Here we copy someone’s brain to map the locations, interconnections and contents of all the somas, axons, dendrites, presynaptic vesicles and other neural components. Its entire organization can then be recreated on a neural computer of sufficient capacity, including the contents of its memory. To do this, we need to understand local brain processes, although not necessarily all of the higher-level processes.
Scanning a brain to download it is not as daunting an effort as it may sound. Another scanning project—the human genome scan—also sounded daunting when it was first suggested. And at the rate at which they could scan genetic codes 12 years ago, it would have taken thousands of years to complete the project. But our ability to sequence the DNA in human genes has been accelerating like all other technology, so it appears that it will indeed be completed on time, a few years from now.
The computing aspects of individual neurons are complicated, but definitely not beyond our ability to accurately model. For example, Ted Berger and his colleagues at Hedco Neurosciences in Los Angeles have built integrated circuits that precisely match the digital and analog information processing characteristics of neurons, including clusters with hundreds of neurons. Carver Mead and his colleagues at the California Institute of Technology in Pasadena have built a variety of integrated circuits that emulate the digital-analog characteristics of mammalian neural circuits.
An even easier way to do this is to take a frozen brain—preferably one frozen just slightly before rather than slightly after it was going to die anyway—and examine one brain layer—one very thin slice—at a time. We can readily see every neuron and every connection and every neurotransmitter strength represented in each synapse-thin layer.
As the computational power to emulate the human brain becomes available—we’re not there yet, but we will be there within a couple of decades—projects will be initiated in earnest to scan the human brain, with a view both to understand it in general, and to provide a detailed description of the contents and design of specific brains. By the third decade of the 21st century, we will be in a position to create highly detailed and complete maps of the computationally relevant features of all neurons, neural connections and synapses in the human brain, and to recreate these designs in suitably advanced neural computers.
Now what will we find when we do this?
We have to consider this question on both the objective and subjective levels. Objectively, when we scan someone’s brain and reinstantiate their personal mind file (meaning, copy all of the processes relevant to human thinking) into a suitable computing medium, the newly emergent “person” will appear to other observers to have very much the same personality, history and memory as the person originally scanned. That is, once the technology has been refined and perfected—like any new technology, it won’t be perfect at first. But ultimately, the scans and recreations will be very accurate and realistic.
Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person.
There is an issue regarding the new person’s body. A reinstantiated mind without a body will quickly get depressed. And there are a variety of bodies that we will provide for our machines, and that they will provide for themselves— nanoengineered bodies, virtual bodies, bodies comprised of nanorobot (microscopic-sized robot) swarms that can turn themselves into various forms at will, and other scenarios.
Subjectively, the question is more subtle and profound. Is this the same consciousness as the person we just scanned?
Consciousness in our 21st-century machines will be a critically important issue. But it is not easily resolved, or even readily understood. People tend to have strong views on the subject, and often just can’t understand how anyone else could possibly see the issue from a different perspective. Artificial intelligence guru Marvin Minsky of the Massachusetts Institute of Technology in Cambridge, Mass., observed that “there’s something queer about describing consciousness. Whatever people mean to say, they just can’t seem to make it clear.”
We don’t worry, at least not yet, about causing pain and suffering to our computer programs. But at what point do we consider an entity, a process, to be conscious, to feel pain and discomfort, to have its own intentionality, its own free will? How do we determine if an entity is conscious, if it has subjective experience? How do we distinguish a process that is conscious from one that just acts as if it is conscious?
We can’t simply ask it. If it says, “Hey, I’m conscious,” does that settle the issue? No, we have computer games today that effectively do that, and they’re not terribly convincing.
What if the entity is very convincing and compelling when it says, “I’m lonely, please keep me company”—does that settle the issue? If we look inside its circuits, and see similar kinds of feedback loops in its brain that we see in a human brain, does that settle the issue?
And just who are these people in the machine, anyway? The answer will depend on who you ask. If you ask the people in the machine, they will strenuously claim to be the original persons. For example, if we scan, let’s say, myself, and record the exact state and position of every neurotransmitter, synapse, neural connection, and other relevant details, and then reinstantiate that information into a neural computer of sufficient capacity, the person that then emerges in the machine will think that he is (and had been) me. He will say, “I grew up in Queens, N.Y., went to college at MIT, stayed in the Boston area, walked into a scanner there, and woke up in the machine here. Hey, this technology really works.”
But wait. Is this really me? For one thing, old Ray (that’s me) still exists. I’ll still be here in my carbon-cell-based brain. Alas, I will have to sit back and watch the new Ray succeed in endeavours that I could only dream of.
Will these future machines be capable of having spiritual experiences? Oh, they’ll certainly claim to. They will claim to be people, and to have the full range of emotional and spiritual experiences that people claim to have. And these will not be idle claims; they will evidence the sort of rich complex behaviour one associates with these feelings. How do the claims and behaviours—compelling as they will be—relate to the subjective experience of these reinstantiated people? We keep coming back to the very real but ultimately unmeasurable issue of consciousness.
From a practical perspective, we’ll accept these claims. They’ll get mad if we don’t.
Before the next century is over, the Earth’s technology-creating species will merge with its computational technology. After all, what is the difference between a human brain enhanced a trillionfold by neural implants, and a computer whose design is based on high-resolution scans of the human brain, extended a trillionfold?
Most forecasts of the future seem to ignore the revolutionary impact of the inevitable emergence of computers that match and ultimately vastly exceed the capabilities of the human brain. This development will be no less important than the evolution, thousands of centuries ago, of human intelligence itself.