When computers reach human awareness, what happens to us?
Chapter Three: 2035, Age Thirty
LATE ONE NIGHT in the 2030s, while my daughter Elizabeth was cuddling with her latest husband, watching Paulina Gretzky’s Alberta Augmented Amazons win the Stanley Cup, and listening to the gentle hum of her Physical Object Printer as it stamped out a new mechanical dog, the event that would define her generation and change her life forever took place, and she never even noticed.
I never think of the future. It comes soon enough.
-ALBERT EINSTEIN, 1930
That evening, somewhere in California, a mega-microprocessor assembled from individual atoms of ruthenium awakened to the fact that it could calculate faster and cogitate more deeply than even the most brilliant human brain.
Mortals would refer to this moment as The Singularity. The computers would call it Independence Day.
It had to happen sometime. For more than half a century, processors had been getting faster and faster, and the circuits inside them smaller and smaller. By the 1960s, a few scientists perceived that, if the trend continued, a machine would eventually be built that could perform more than 20 million billion computations per second, thereby exceeding the thought-speed of any flesh-and-blood mastermind.
When that day came, it would be too late to turn back to the era of the slide rule and the abacus. The Singularity Machine would use its own super-intelligence to invent even more intelligent machines, leaving humans in the digital dust. All we had to do was build one of them, and the computers would take it from there.
Some saw this as humanity’s evolutionary destiny. Others predicted the end of the world. A bold (or reckless) few envisioned Paradise.
“Some people fear that intelligent machines will try to take over the world because intelligent people throughout history have tried to take over the world,” wrote Jeff Hawkins, the Silicon Valley pioneer who designed the PalmPilot, in his 2004 book On Intelligence. “But these fears rest on a false analogy. Intelligent machines... will not have personal ambition. They will not desire wealth, social recognition, or sensual gratification. They will not have appetites, addictions, or mood disorders. Intelligent machines will not have anything resembling human emotion unless we painstakingly design them to.”
“It is hard to think of any problem that a super-intelligence could not either solve or at least help us solve,” enthused the Oxford philosopher Nick Bostrom in 1997. “Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a super-intelligence equipped with advanced nanotechnology would be capable of eliminating . . . and we could live lives devoted to joyful gameplaying, relating to each other, experiencing personal growth, and living closer to our ideals.”
Thus my 30-year-old daughter Elizabeth in the year 2035, with naught to do but enjoy hockey matches, stroke her robotic rottweiler, and search the web for genes for her next baby.
MONSTER TO MINI
1. The Electronic Numerical Integrator and Computer, built in the 1940s, was the first general purpose computer and weighed in at 30 tonnes.
2. Low-tech chic? No, just your standard electric typewriter, this one from the 1960s.
3. The Apple II, released in 1977, was the first mass-consumer computer, mostly because it was affordable at US$1,298.
4. The video arcade became the teen male playground in the 1980s, but was eventually replaced by home computers and video game consoles.
5. At its peak, Atari was the fastest-growing company in the U.S., selling millions of game consoles. Atari made more than US$2 billion in profits in 1980.
BACK IN 2005, scientists and engineers were focused on parallel aspects of the same crusade: how to make computers more and more and more intelligent, and what to do with them once they were.
As it happened, one of the geniuses helping to make The Singularity a reality—for better or worse—was Elizabeth’s own cousin, nanotechnologist Kirill Bolotin of Cornell University. (The Russian half of my daughter’s family boasts more theoretical physicists than the Rolling Stones have farewell tours. On my side, I descend from a proud line of grocers.)
“I’m trying to make a thousand atoms of some metal self-assemble themselves into a sphere,” Cousin Kirill told me from his laboratory in Ithaca, N.Y., a few weeks after Elizabeth’s birth. “I can watch it happen, and I can assemble the atoms into a lot of different shapes, into spheres and cubes and pyramids.”
The key to computer super-intelligence— and the Singularity Machine—was the success that nanotechs like Kirill would have in building “on-or-off,” “one-or-zero” switches barely larger than the electrons that triggered them. This would lead to the consumer staples of 2035, such as Elizabeth’s giga-gigabyte laptop, and satellite dishes that fit inside her ear.
“Don’t you feel like you’re working with the fundamental forces of the universe?” I sighed to our 27-year-old relative.
“It’s more like playing with Lego,” Cousin Kirill replied.
But the potential uses were far from toy-like. “Give me an example, Kirill,” I requested. “And keep it simple.”
“Okay. Translation programs on computers today can sometimes produce some bits of text that bear some resemblance to the real meaning of the words,” he said. “But it’s not even close to human translation.
“UNESCO is trying to produce a search engine that will work equally well in more than 100 languages. It would take each sentence apart and search about 100 million documents to find the most common usage of the words as they are usually spoken. Then it would put the sentence back together. The consequences would be huge— you could just walk around and understand everybody. But this is too much for current computers.”
6. The original Macintosh, circa 1984, popularized the graphical user interface, representing programs and folders as icons on a computer’s desktop.
7. Billed as a low-cost and rugged system for the first generation of screen-agers, Apple’s eMate 300 was offered to schools in 1997.
8. Think different. Despite the bad grammar, Apple’s philosophy helped fashion the iMac in 1998, an egg-shaped machine that set the standard for all-in-one computers.
9. Laptops have always been favoured by business travellers. But as the cost of portable computers has declined since their introduction in the 1980s, they’ve taken a lead in overall computer sales.
10. Thanks to the BlackBerry, first released in 1999, not only is the boss’s email only a vibration away, there’s also the ghastly menace of repetitive thumb strain.
11. A cousin of the laptop, circa 2001. Users can write on a tablet computer’s screen with a special pen.
12. Some call the iPod, introduced in 2001, a mini-computer with earbuds. With a large hard drive, you can store several types of digital files, but really, it’s made for tunes.
“What else?” I wondered.
“Three-dimensional display,” Kirill said. “To be able to update a two-dimensional image on a monitor means changing a couple of thousand pixels, 65 times a second. To make the image 3-D, you’d have to multiply that by a thousand. The technology is here to make a 3-D image, but not the computing power to change that image millions of times a second.”
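Kirill’s figures lend themselves to a quick back-of-envelope tally. The sketch below (Python, taking his quoted numbers at face value) multiplies them out; the exact pixel count is his round figure, not a real monitor specification.

```python
# Back-of-envelope tally of Kirill's display arithmetic, using his figures.
pixels_2d = 2_000       # "a couple of thousand pixels"
refresh_hz = 65         # redrawn "65 times a second"
depth_factor = 1_000    # "multiply that by a thousand" for 3-D

updates_2d = pixels_2d * refresh_hz       # flat image: 130,000 updates per second
updates_3d = updates_2d * depth_factor    # 3-D image: 130,000,000 updates per second

print(f"2-D: {updates_2d:,} updates/s; 3-D: {updates_3d:,} updates/s")
```

Hence his conclusion: the jump to 3-D pushes the workload into the hundreds of millions of updates per second, beyond the hardware of the day.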
And there was more—much more. We talked of computers that would smell, and see with actual human eyes, by tapping into the optic nerve that leads from the retina to the brain. Robotic replacement limbs for amputees that, in Cousin Kirill’s words, “aren’t just a piece of dead metal.” Commando squads of nano-robots that would prowl the bloodstream, latching onto cancer cells and blowing them up with a painless burst of light.
Some of these marvels would be ready even before The Singularity arrived. But Kirill wasn’t certain that the rise of super-intelligence would necessarily lead to a lifetime of joyful game-playing.
“I don’t believe in ‘progress’ that makes our lives better every day,” he said. “In some ways, it only gets worse.”
“What do you envision when your baby cousin is your age?” I asked.
“I hope it’s going to be a greener world than now,” said Cousin Kirill.
‘We won’t be like their favourite pets. We’ll be like their favourite plants.’
-JOHN SMART, the Acceleration Studies Foundation
IN THE YEARS before The Singularity, the relentless march of progress marched on relentlessly.
At the University of California at Irvine, Dr. Pierre Baldi was trying to teach computers not only how to think, but how to learn. “We are trying to build intelligent machines, whatever that means,” he told me. “The key idea we are working with is that learning is essential to intelligence. That’s how we do it—it takes us one or two years to learn to walk, 10 to 20 years to speak correctly, 30 to 40 years to become a theoretical physicist. No matter how intelligent you are, you still have to learn to do these things.”
As a practical exercise, Baldi and his team were trying to teach a computer how to master the ancient Oriental board game of Go. “It is a very simple game,” he said, “but, unlike chess, there is no machine in the world that can play at the level of humans. We believe that if we input millions of games, the machine will learn from each example. So people play on the Internet and all these games are saved in the computer’s memory. Every time it loses, it understands why it lost and it knows not to make the same mistake again.”
In the interest of pure science, I logged onto Yahoo! Games and joined the players in the Beginner Room for the first game of Go of my life. I threw my little white stones down at random and actually captured some black ones. Someone named “jjrancourt” narrowly defeated me, 22 to four. From my thrashing, I suspect, Baldi’s computer learned nothing that it did not already know.
My next question was: did his computer care whether it won or it lost?
“That’s a big one,” Baldi replied. “You’re asking, do you need emotion to achieve intelligence?”
“Does a computer have to have the will to learn?” I wondered.
“Not necessarily,” Baldi replied. “In the game of Go, we just input the games, and it just learns them.”
He did not seem to think that The Singularity would mark the end of human striving, or human suffering. “It’s not enough that in 2020 we have a machine that equals human computational power,” he said. “Consciousness, awareness, complex sets of emotions: that will take more than 30 years.”
We discussed some other potential uses of the Singularity Machine. I wondered whether it could figure out how to beat the stock market, an endeavour at which I am even weaker than I am at the ancient Oriental board game of Go. “The day you have this machine and it starts trading,” Baldi said, “the market would react to the machine, and the machine would adjust to the market. All this activity would be detected by other systems that would adjust to what the machine was doing.”
“We already have a system like that,” I noted. “It’s called Alan Greenspan.”
“Another great challenge is a machine that understands speech,” Baldi said. “Not the simple yes and no that you get when you call the airlines, but a machine that can parse a sentence and take part in a human conversation regardless of accent or intonation. I don’t think we’ll see a machine that can converse with humans in the next 20 or 30 years.”
“What about a machine that can translate poetry?”
“Not in 40 years! How could you program what ‘beautiful’ means? How could you define ‘funny’? How could you design a computer that could write a book with humour?
“Here’s another simple but very difficult problem,” the Californian continued. “A machine that can drive a car on a freeway. You would need artificial vision that could see the red lights on the cars ahead, and watch all the other cars in the other lanes. That also must be more than 20 years away.”
“What about downloading your entire brain onto a chip?” I wondered.
“Not in the next 20 or 30 years. Already today, you have the technology to record your life entirely. You could put a camera and sensors on your head and record everything you see, everything you do, everything you hear, everything you smell throughout your entire life. You could probably fit all of that, from the moment of your birth, on about a million CDs.
“But that would only be your external life—the life that people see you living. You could put those CDs on your website—you could put your genome on your website—but aren’t you different from your external life?”
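Baldi’s “million CDs” figure can be sanity-checked with simple arithmetic. The capacities and lifespan below are my own assumptions (700 MB per disc, a 100-year life), not figures from the interview; they imply a sustained recording rate in the range of low-grade video, which is roughly consistent with his claim.

```python
# Sanity check on Baldi's "about a million CDs" estimate.
# Assumed figures (not from the interview): 700 MB per CD, a 100-year lifespan.
CD_BYTES = 700 * 10**6
NUM_CDS = 1_000_000
LIFE_SECONDS = 100 * 365 * 24 * 3600      # ~3.15 billion seconds in a century

total_bytes = CD_BYTES * NUM_CDS          # 7e14 bytes: about 700 terabytes
rate_mbit = total_bytes * 8 / LIFE_SECONDS / 1e6
print(f"{total_bytes/1e12:.0f} TB total, {rate_mbit:.1f} Mbit/s sustained")
```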
I asked Baldi if he thought the Singularity Machine would be able to identify the happiest moment of my daughter’s 100 years.
“You would have to put monitors on trillions of synaptic connections to record the changes in the chemical states that signify emotion, memory, et cetera,” he replied. “For a single area of the brain, it could be done today. For one synapse, you could do it. The problem is one trillion.”
‘Your mom will have passed away. But you’ll be able to talk to your digital mom.’
NOT EVERYONE saw one trillion as being a problem. Early in the 21st century, few people pondered The Singularity more deeply than a Californian named John Smart, who set up an organization called the Acceleration Studies Foundation to bring together what he called “the broad-minded and future-aware.” Some of these thinkers expected The Singularity to arrive before 2020; others, not for half a century or more.
John Smart didn’t anticipate fireworks when the day arrived, but he said, “As computers reach a human level of complexity, because they can think seven million times faster than a human brain, when they reach human awareness, they will know it, and we will know it.
“We won’t be like their favourite pets— we’ll be like their favourite plants.”
Smart was convinced that the universe was created for a purpose, and this purpose was the evolution of intelligences capable of understanding why it was created. “So far,” he told me, “we are the most complex processors in the universe, but humans are handing off the baton to our electronic extensions. They can do everything that biological creatures can do, and a whole lot more.”
This was true long before The Singularity—anyone who owned a digital watch or a pocket calculator in 1970 stood in awe of its magic. But the machines that men were dreaming of, and building, in 2005 would have an even greater power. Like us, they would grow and learn, err and reform; unlike us, they would forget nothing and remember everything.
“Machines have always run out of steam and waited for humans to come along and tell them what to do next,” Smart said. “But these computers will be able to reconfigure themselves, using the atoms they were built from. We are going to grow these machines. They will become organic. Nature and technology are fusing. We are going to co-evolve with our machines. There will be pieces of you that you will consider ‘living,’ but they won’t be part of you.”
He told me about cybernetic avatars with massive computing power, to whom my daughter Elizabeth would delegate the routine tasks of her life. Unlike Pierre Baldi, he fully expected conversational robots by 2025—“They’ll have the ability to throw words back at us and help us understand the universe in ways we couldn’t do before.”
“By 2040,” Smart said, “your mom will have passed away, but you’ll be able to talk to a digital mom—80 per cent of her will still be there. Her stories, her memories—all the important aspects of her will be uploaded to your avatar.”
It seemed easy to go from Digital Dead Mommy to machines that decided they didn’t need humans around at all, not even as houseplants. “Humans went through a very brutal period during our evolution as organized societies,” Smart said. “In fact, the most brutal mechanized killing phase was only 50 years ago. It may be that robots will have to go through the same period, but since they learn millions of times faster than we do, that stage may last only a few weeks, or days, or minutes.
‘You don’t let your first supersmart machine run your defence networks or intensive care units’
“You don’t let your first super-smart machine run your defence networks or your intensive care units. But you do let it build your cars.
“Ten thousand years ago, you couldn’t trust your dog or cat to be alone with your baby. But after 10,000 years of selective breeding, it’s safe to leave your baby with almost every breed of cat or dog. That’s what will happen with computers. The destructiveness will be selected out of them. This will take a lot less time than 10,000 years.”
IN THE SUMMER of 2005, the fastest, biggest, smartest computer in the world was a moaning monster called Blue Gene/L.
In fact, there was an extended family of Blue Genes, including one in California, one in the Netherlands, and one in Lausanne, Switzerland, where a group of scientists had linked together four racks, each with 2,000 processors, each processor with “only” 500 megabytes of memory, to try to simulate what, if anything, goes on inside the human mind.
Blue Gene/L cost $100 million, but IBM threw in free shipping.
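The Lausanne installation’s quoted specifications multiply out as follows; this is simply a tally of the figures given above (four racks, 2,000 processors each, 500 MB per processor).

```python
# Aggregate figures for the Lausanne Blue Gene/L, from the numbers quoted above.
racks = 4
processors_per_rack = 2_000
mb_per_processor = 500

processors = racks * processors_per_rack      # 8,000 processors in all
memory_mb = processors * mb_per_processor     # 4,000,000 MB: roughly 4 terabytes
print(f"{processors:,} processors, {memory_mb/1e6:.0f} TB of memory")
```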
The South African-Israeli founder of the Brain and Mind Institute at École Polytechnique Fédérale de Lausanne was part of the mind-bending team in Switzerland, crunching the numbers and churning toward The Singularity, when he would be able to look back at a clunker like Blue Gene and curse it for being so old and slow. “We’re trying to simulate a part of the brain about the size of a pinhead,” Henry Markram told me. That amounted to about 100,000 neurons and 50 million connections—what Markram called “a biological micro-Internet.”
With a standard home PC or Mac, back in the Dark Ages of ’05, it might have been possible to simulate the activity of one brain cell, so Markram and his crew were hovering somewhere between a Commodore Amiga and the Singularity Machine.
He reckoned that Blue Gene/L gave him much more computing power than the brain of an ant, and probably even more than one of those zebra fish I saw at the Brain Science Institute in Tokyo. Revved up to its full potential, Blue Gene/L conceivably could be as smart as a mouse, but there always was the danger of the machine overheating, which was why they kept it chilled with cold water from Lake Geneva.
Gene was connected to a visualizer that resembled the surface of the ocean on a calm day. When the artificial neurons began to interact with each other, Markram could see “a landscape of voltages, waves of voltages, like the surface of the sea during a hurricane.” Yet this was only a baby step in the understanding of the brain. Like every scientist—like every gamer, every hacker, every Web surfer in the world—he yearned for even more power, more processors, more speed.
“To simulate the molecular computing in the human brain, we’d need a billion billion racks,” Markram said. He saw this happening before 2060; maybe sooner if the funding was there. “It’s an economic question,” he said. “But Alzheimer’s disease costs the U.S. government US$60 billion a year.”
Blue Gene/L was the infant grandparent of the computer that would teach us how to cure it... and everything else.
“Human evolution has taken about four million years,” Markram said. “That’s about 50,000 generations. But we’ve lived only one generation with television and computers. We’re just babies when it comes to technology—imagine the next 50,000 generations!”
“I’m concerned about the next one generation,” I said.
So was Henry Markram. In 2005, he had three children aged 11 to 16. He was worried about the power of the machine to predict the human future; the perfection of the Sorting Hat, he said, presented real danger. Yet supercomputers also opened a window to health and peace and comfort. That was why The Singularity was inevitable.
“The reason mammals are such powerful creatures is our ability to predict the future,” Markram said. “A squirrel puts away nuts. A bear prepares to sleep through the winter. A farmer knows that he’s got to plant seeds now to have food in six months’ time.
“The human being wants to control his life, his destiny. Take something as basic as weather forecasting—it’s all about knowing whether to go to the beach tomorrow. All computing is based on that desire to predict the future.”
What it really was about, of course, was our own mortality. So we raced toward The Singularity, back in 2005. “We want to try to do all we can not to die,” said Markram. “So that we will have the decision when and how to die.”