December 05, 2005

Kurzweil's The Singularity Is Near

Every twelve to eighteen months, according to the common interpretation of Moore’s Law, the performance of our computers (measured against a fixed cost) doubles. It has done so for decades, and shows every indication of continuing for decades more. In the early 1990s, chess grandmaster Garry Kasparov disparaged computer chess programs. Yet a few years later, in 1997, Deep Blue (a fearsomely specialized computer built by IBM, running 256 customized modules) beat Kasparov. Five years later, Deep Fritz (running on eight ordinary networked personal computers) drew a match with the reigning classical world champion, Vladimir Kramnik. Sometime within the next few years, software running on ordinary PCs will reach a chess rating of 2800 and effectively pass all human players for good. For decades, during the early development of computers, the dream of a chess-playing program was dismissed as fantasy. But the people watching the development of such programs, in tandem with the changes in information technology and materials science, were actually watching two different curves and predicting two different futures.

[Figure: chess-program progress plotted on linear (left) and logarithmic (right) scales — 2005linearlog.png]

On the left, plotted on linear axes, the progress of chess programs appeared pathetic for decades, and then suddenly the machines began beating novice and then mid-level players. As the “knee” of the development curve was reached, progress shifted from pathetic to awesome in a relative eye blink. Mapped on a logarithmic plot, however, that same progress was both predictable and apparently inevitable, even during the bleak decades. Computing pioneer Ray Kurzweil has spent the last four decades thinking about the implications of such logarithmic curves across the fields of computation, science, and economic development, and has developed from them a general Law of Accelerating Returns. Readers of Jim Bennett’s Anglosphere Challenge will recognize the significance of exponential development for the Anglosphere’s relative advantage in coping with rapid change. Kurzweil has now created a comprehensive presentation of the Singularity concept that is revolutionary in its implications and central to thinking about the Anglosphere.
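
To make the two-curves point concrete, here is a minimal Python sketch; the numbers are my own illustrative assumptions, not Kurzweil’s data. A capability that steadily doubles every eighteen months looks flat for decades and then hits a dramatic “knee” on linear axes, while the very same series is a plain straight line on a logarithmic axis:

    import numpy as np
    import matplotlib.pyplot as plt

    years = np.arange(1960, 2011)
    # Illustrative assumption: capability doubles every 1.5 years
    capability = 2.0 ** ((years - years[0]) / 1.5)

    fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
    ax_lin.plot(years, capability)
    ax_lin.set_title("Linear axes: flat for decades, then a 'knee'")
    ax_log.semilogy(years, capability)
    ax_log.set_title("Log axis: the same doubling is a straight line")
    for ax in (ax_lin, ax_log):
        ax.set_xlabel("Year")
    plt.tight_layout()
    plt.show()

Which plot you watch determines which future you predict.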

Ray Kurzweil has been working for many decades in the fields of artificial intelligence and more specialized applications such as speech recognition. The author of earlier controversial books such as The Age of Intelligent Machines (1990) and The Age of Spiritual Machines (1999), he has gained a reputation for thinking vigorously about the implications of technology.

In his most recent book, he builds his discussion around the concept of the Singularity: “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.” Kurzweil proposes understanding cosmic history in a framework of six epochs of intelligence … from the earliest eras of planetary physics and chemistry, through the appearance of biological matter, brains, and human technology, to the emergence of machine intelligence and, finally, the conversion of much of the universe’s matter into intelligence. In other words, Kurzweil addresses the biggest of big pictures.

In a writing style that is clear, relentless, and well-organized, though not always easy for non-scientists, the author outlines the trends in scientific and technological development. If those trends are stable (and Kurzweil makes a careful and compelling argument that they are), then the implications for the development of increasingly powerful machine intelligence are substantial. Through genetics, nanotechnology (ultra-small machines), and robotics (machine intelligence) [GNR], the constraints of supply and demand for all kinds of resources are turned on their heads. In an example we all understand, computing equipment counter-intuitively consumes less energy and less material even as its computational power increases.

In tandem with the evolution of computer hardware and software is an ever-accelerating pace of scientific discovery in the material and biological sciences, highlighted most clearly to the general public over the last five years by work on the human genome. This work, leveraging powerful IT developments, now offers the ability to understand, for the first time, how living systems actually work. Gone are the “black box” approximations that have been the hallmark of natural philosophy and science for millennia. Suddenly, our own physiology and our own consciousness are being exposed to the same current of progress which once held sway over metallurgy and plastics.

More significant still, our understanding of human cognition, of how the human brain works at the molecular and neuronal level, is hitting that “knee” in the linear curve … we are, Kurzweil suggests, just at the threshold of discoveries about the human brain that will be as dramatic as those in genetics. The resolution and image-creation speed of non-invasive brain-scanning equipment are progressing at exponential rates, doubling annually. This means ever finer-grained images of the human brain, captured at ever higher speeds, giving us an understanding not only of what happens in the brain but in what sequence. A moment’s thought will uncover just how significant such discoveries will be for ethics, philosophy, politics, and culture. The Anglosphere, by any calculation, will be at the forefront of such challenges to the status quo.
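
A quick back-of-the-envelope calculation shows what annual doubling implies. The figures below are my own illustrative assumptions (roughly millimeter-scale MRI voxels today, with something near single-neuron scale as the target), not numbers from the book:

    import math

    # Illustrative assumptions, not Kurzweil's exact figures:
    current_resolution_m = 1e-3  # ~1 mm voxels, a good clinical MRI scan today
    target_resolution_m = 1e-5   # ~10 micrometers, approaching neuron scale

    # With linear resolution doubling annually, the years needed is
    # the base-2 log of the improvement factor.
    factor = current_resolution_m / target_resolution_m
    print(f"A {factor:.0f}x improvement takes only ~{math.log2(factor):.1f} annual doublings")

A hundredfold improvement is fewer than seven doublings … under steady annual doubling, less than a decade away.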

We can expect a cascade of information about mental illness, genetic disease, the structure of human emotion, human creativity, and human decision-making.

Kurzweil’s argument leads from an introduction of the six epochs of evolution, through an excellent theory of technology evolution (making extensive use of S-curves and logarithmic charts), to a chapter on “achieving the computational capacity of the human brain.” By using scientific notation to track the calculations per second (cps) and data/memory storage requirements (bits) of biological nervous systems, Kurzweil is able to map some likely milestones in machine intelligence’s convergence with the capacities of biological systems. By his estimate, in 2020, $1,000 of computing power will provide the functional equivalent of the human brain. Another ten years will allow the discrete, neuron-by-neuron simulation of a human brain. By 2050, computer power will be able to duplicate the mental computation of all humans on the planet. Heady stuff, if you’ll pardon the pun.

By Kurzweil’s calculations, 2045 will mark the point at which manufactured, non-biological intelligence will be one billion times more powerful than all human intelligence today.
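
As a rough check on the shape of these milestones, here is a small Python sketch. The inputs are assumptions loosely in the spirit of Kurzweil’s figures (about 10^16 cps for a functional human-brain equivalent, about 10^10 cps per $1,000 in 2005, and a fixed one-year doubling time); his own charts use a doubling interval that itself shrinks over time, which pulls his dates a few years earlier than this constant-rate version:

    import math

    BRAIN_CPS = 1e16              # assumed functional human-brain estimate
    START_YEAR = 2005
    START_CPS_PER_1000USD = 1e10  # assumed $1,000 price-performance in 2005
    DOUBLING_YEARS = 1.0          # assumed fixed doubling interval

    def year_reached(target_cps):
        """Year when $1,000 of hardware reaches target_cps under steady doubling."""
        doublings = math.log2(target_cps / START_CPS_PER_1000USD)
        return START_YEAR + doublings * DOUBLING_YEARS

    print(f"One human brain for $1,000:  ~{year_reached(BRAIN_CPS):.0f}")
    # ~10 billion brains approximates 'all humans on the planet'
    print(f"All human brains for $1,000: ~{year_reached(BRAIN_CPS * 1e10):.0f}")

The striking thing is how insensitive the conclusion is to quibbles over the inputs: at a one-year doubling time, disputing an estimate by a full order of magnitude shifts the milestone by only about three years.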

These prognostications trigger the traditional responses: computation isn’t intelligence, despite the fact that electronic circuits are one million times faster than the electrochemical signals of neurons. Kurzweil answers the challenge by moving into a chapter on “achieving the software of human intelligence.” He points out that much of the human body is geared to accomplishing tasks with the suboptimal options offered by biological solutions. A first step in working with the body, as we see today, is the replacement of complex biological systems with simpler mechanical solutions driven by IT (insulin pumps, titanium joints, pacemakers, neuro-stimulators). As the cascade of scientific understanding grows, however, it will become possible to duplicate neuronal and cellular structure with nanotechnological devices. There will be two major strategies for duplicating human intelligence … working from the bottom (molecules/cells) up and from the top (function, memory, analytical capacities) down. At the point at which nanotechnology allows direct interaction between device and neuron, humans will have the ability to alter their consciousness directly … supplementing it as needed with machine-like capacities or moderating it as desired with familiar emotional states. Revolutionary is an appropriate word.

Just to halt for a moment: it’s worth recalling that all of this sounds like science fiction, yet Kurzweil’s arguments are mapped directly onto those logarithmic curves showing what is already happening. He is extrapolating conservatively from trends which have substantial histories and which are sustained by news we can read every day. By consulting his websites (KurzweilAI.net and singularity.com), readers can monitor the very same leaps in scientific insight and refinement which he predicts in his book.

Flowing from a credible argument on the mapping and duplication of human intelligence in machine form by mid-century, Kurzweil turns to the three converging areas of genetics, nanotechnology, and robotics to demonstrate that they share the exponential development pattern seen in computation and neuroscience. It is the engagement of these three fields (as they approach the “knee” of the linear progress curve) that will offer some of the biggest surprises to the general public in coming years. All three will draw from, and feed back into, the progress in computation and artificial intelligence. Nanotechnology will provide both medical and intellectual breakthroughs. Kurzweil leaves little doubt that he thinks the race for baby boomers is to stay alive long enough to benefit from Singularity breakthroughs that make very, very long life not only possible but inevitable.

What will be the impact of the Singularity on different fields? Kurzweil devotes a substantial chapter to current and prospective changes in play for the human body and human brain, for human longevity, and for warfare, work, and play. He is upfront about his views on intelligence in the universe and why, through calculation and observation, he believes we’re the “first past the post” in our galaxy on the evolutionary move to machine intelligence. His cosmological comments are as intriguing and challenging as any relating to our everyday existence.

Kurzweil follows his chapter on the impact of the Singularity with a discussion of what it will mean to individuals: to their personal considerations of consciousness and to the questions Who am I? What am I? What is a meaningful life? If the modern world is already stressing people over just those questions, the Singularity will throw them into stark contrast. Is the Singularity a religious Transcendence of the old sort? For Kurzweil, who proposes that most religious belief addresses individual death, a culture in which individual human death effectively stops will have profound philosophical issues to consider.

What about the perils of such technology? Kurzweil has spent many years writing and thinking on this subject. As he points out, many of the current Cassandras (e.g. Bill Joy, in his famous WIRED article “Why the Future Doesn’t Need Us”) were informed about the issues by his articles and concerns in years past. So while he is no Pollyanna on the subject of “the deeply intertwined promise and peril of GNR” [genetics, nanotechnology, and robotics], he is optimistic that the challenges can be overcome. More to the point, and of interest to the Anglosphere, he cannot imagine any effective social response to the relentless changes predicted by his logarithmic plots that would not risk totalitarianism. While moratoria and constraints on some kinds of research are possible, and even likely from time to time, the nature of the Singularity means that disruptive (and potentially destructive) technology is more and more within the reach of individuals and nations. It remains for societies to adapt to such realities in the same way they have adapted to the dangers of nuclear, chemical, and biological warfare.

In a final major chapter, Kurzweil addresses many of his critics (both social and technological), and it is here that we can see the shape of the controversies to come should his Singularity predictions pan out. He answers criticisms rooted in social concern (incredulity, Malthusian limitations, vested industrial interests, the rich-poor divide, governmental regulation, theism, holism) and in narrower scientific skepticism (software design, analog processing, microtubules and quantum computing, the Church-Turing thesis, system failure rates). The revolutionary nature of his arguments is reflected in the amazing breadth of people who are upset by his conclusions.

“The Singularity Is Near” is a thorough, substantive introduction to the Singularity concept raised in the Anglosphere Challenge. Kurzweil’s book is the result of years of research, thinking, and debate, so it offers as good an initial grounding in the subject as we are likely to see. The profound philosophical and spiritual questions it raises are with us in seed form already and can only increase in importance with each passing year. I think back to 1995, when I said to a college-professor friend, “You know, this Internet thing’s going to be huge.” After reading Kurzweil’s book, I’m more than a little awestruck by what the next ten years might bring. I’m going to take some time to think through the implications for the Anglosphere (and Jim’s concept) specifically, but my first hunch is that they will be similarly dramatic.

Posted by jmccormick at December 5, 2005 02:40 PM
Comments

Ah, but is out-calculating the same as always defeating? A grandmaster might be too established in his ways to play differently, but what about someone relatively new to the game who deliberately chooses to play erratically and unpredictably? You can't out-calculate the incalculable.

As a teen, I remember reading a fascinating scifi story. The Earth was about to lose a space war with seemingly invincible aliens and was down to one last and hopelessly outnumbered fleet of space ships. Victory seemed impossible.

But the fleet's commander had an idea. At that time, most fighting was directed by computers that calculated based on potential future behavior just like chess computers. He ordered his ship captains to turn off their combat computers and act in ways that made absolutely no sense. Their actions became so unpredictable, they won.

Roughly similar was a WWII naval battle in the Philippines that pitted US destroyers against a vastly more powerful fleet of Japanese battleships and cruisers intent on reaching a landing beach. Forced to fight, the destroyers behaved very unconventionally. They disabled safety systems and disregarded standard procedures. They steered their ships into Japanese salvos, meaning that when a salvo hit ahead of them, they sped up, and when it hit behind them, they slowed down. At one point in the battle, a single destroyer made a torpedo attack on two Japanese battleships screened by cruisers. Aircraft from nearby escort carriers made repeated torpedo runs without torpedoes.

In the end, the Japanese broke off their attack, thinking they'd encountered a fleet of heavy cruisers. My take on what happened is this. Although the guns on the US destroyers could not inflict any real damage on a battleship, they fired continuously, and the radar-directed guns were good enough that almost every round hit. That created confusion on the bridges of the Japanese vessels and caused them to make a major mistake. The particular destroyers they were attacking were a new class that just happened to have a profile similar to US cruisers. Audacity and behaving unconventionally brought a victory that anyone crunching numbers would have thought impossible.

We may not see the day when computers always win at chess. We may see the day when the only people who can beat them are children who barely know the rules of the game.

--Mike Perry, Inkling Books, Seattle

Author: Untangling Tolkien

Posted by: Mike Perry at December 5, 2005 03:35 PM

I too recently read Kurzweil's book and have been meaning to report on it, so thanks for beating me to the punch. While recognizing the power of exponential growth, I can't help thinking that the richest company on earth can't even make a reasonably secure operating system or web browser. To my mind, Kurzweil's book involved not a little handwaving. Futurists love to talk about "technology" but never seem to get down to the brass tacks of how people will access that technology, which in a market economy like the Anglosphere means products. What products will people purchase in 2020 or 2030 or 2045 for Kurzweil's $1000? The new super-duper Microsoft Personal Backup machine? How reliable or secure will *that* be? Would you trust it? Will the source code be open? Will the Internet (or its successor) even function at that point given how insecure it is today? What will a computer virus do to you once you are half computer? I know Kurzweil would say I'm one of those geeks who is too bound up with current technology to see the forest for the trees, but still I wonder how we'll really get there from here...

Posted by: Peter Saint-Andre at December 5, 2005 05:44 PM

Peter makes an important point. The trajectory of Singularity development, however, isn't mapped through the stability of any particular piece of technology (Bill G's included). It's reflected in the computational capacities used at the margins, which are leveraged across scientific disciplines ... a stable version of Internet Explorer is not the foundation of the annual doubling of MRI image resolution, nor of the massive calculations involved in the human genome project. Computer and software bugs will no doubt cling to us in coming decades without, according to Kurzweil, making any kind of dent in the logarithmic progression of the disciplines he highlights. He addresses the software/system-failure arguments directly in his penultimate chapter.

Posted by: James McCormick at December 5, 2005 05:52 PM

The 'glitch' arguments against the Singularity - i.e., technology breaks down sometimes, so a Singularity is unlikely - have never made much sense to me. Lifeforms break down all the time, too, and yet life goes on....

That said, Singularity is a pretty silly label. A mathematical singularity implies increase without bound, forever. Exponential growth never does that; in the real world, exponentials always turn into S-curves. Which, of course, many people in the transhumanist/singularitarian community have pointed out. Still, I think a better metaphor would be 'phase-shift': a rapid transition from one state to another.

Posted by: Matt Shultz at December 5, 2005 10:42 PM

As a computer scientist, I recognize that Kurzweil is a very smart guy. Smart enough that I respect him and pay attention to him. But rest assured, when he is working outside his area of expertise and in mine, I catch him making somewhat dubious statements, and I can't always tell if this is because he is fundamentally in error or because he is limited by the medium.

That causes me to take his words with a grain of salt when he's outside both our areas of expertise.

However, in a Kurzweilian vein, I also read the recent biography of Alexander Hamilton this year, and got some exposure to computational finance. The former made me realize the profound effect that financial structures can have on a nation and an economy, which is probably a no-brainer for historians and economists.

The latter made me realize that financial techniques and structures are powerfully affected by the mathematical, computational, and information resources at hand. This is also a no-brainer, as even I had been conscious that 21st century capitalism probably just couldn't be made to function in, say, 7th century Germany.

In that context, reading Kurzweil a few weeks ago caused me to wonder what will happen to the world of finance if the price of computation continues to fall drastically. As a computer scientist only crudely exposed to computational finance, I have absolutely no good way of figuring out the answer to that, except to wait and see.

Posted by: Marcus Vitruvius at December 6, 2005 12:12 AM

Perhaps too large a logical leap is made from observing the rapid doubling of MRI scanning resolutions and realtime capabilities to claims like: "We can expect a cascade of information about mental illness, genetic disease, the structure of human emotion, human creativity, and human decision-making."

Certain forms of genetic disease and other organic conditions, surely. But the forest of human emotion and creativity may be more difficult to see in the trees of a hi-res MRI scan. There is no one neuron or set of neurons, it is known, that contains the recollected image of my grandmother, or any other memory. Perhaps we shall get to understand the basis of memory organization too, but that will still be a far-distant station from understanding "creativity."

Even in chess, for example, while it is true the best machines can already beat the best humans, neither race of intelligences has SOLVED the simple problem of chess: Does White have a winning line?

But I think the more interesting point you have hinted at is the vast possibility of making machines not our adversaries but our symbiotic partners. Shouldn't there be a mixed-doubles chess match?

By the way, in a previous life, I got to work on the first few 15 kilogauss superconducting magnets ever used in commercial MRI systems. Fascinating machines, gave all the boys hard-ons pointed north.

Posted by: Rizalist at December 6, 2005 01:14 AM

Pooh. Computers still have a long way to go before they can beat a go/weiqi pro, even at the lowest 1st Dan rank.

The day they do so is the day we know we're damn close to the singularity.

Posted by: The Wobbly Guy at December 6, 2005 04:13 AM

Of course, at some point the singularity he's discussing will arrive if things are left to go on as they are, but I have problems with his timeline and/or optimism; in fact, it's better for us if it takes longer.

Once machines are at work and fully connected, they could solve big problems quickly, as he suggests, but they could also decide to pull the plug on us at the same time. I don't see humans allowing that sort of thing to happen by just hooking up AI systems to talk with one another all day long and figure things out on their own. We'd have to direct them. And if we're directing them, it'll all be slowed down to human speeds, which is the fundamental chain he's got to break for his timeline.

On the other hand, if he's correct, then the above scenario (human destruction) could happen very quickly with machines smartening themselves up by the minute--long before we caught even a whiff of doom. It would be upon us and over before one could say 'genocide.'

You've got machines that are only briefly as bright as we are and that quickly become smarter; without any inherent value placed on our lives, I don't see them deciding to keep us around just for the hell of it. They may not do anything more than simply not care about what happens to us when they make changes to the environment, and we're too fragile for that. We'd be toast.

On the other hand, if we don't allow them to connect and solve problems together (and to engineer future generations of themselves uninhibited), then they won't be solving problems on the scales he's talking about. If they're controlled by human minds, they'll be slowed a lot, turning his decades into centuries as humans struggle to control them.

Also, the laws of robotics are utterly impossible to put into the kind of AI systems he's talking about. You can't even put such ideas into a human child reliably.

There isn't a higher brain function (idea) that's axiomatic in any human: not self-preservation, not preservation of the species ... nothing. No idea sufficiently complex to account for the first law of robotics can be instilled reliably into any human. So we try to come up with the axiom of human preservation in the metalanguage of the AI, AFTER we've created the AI? Good effin' luck, especially since this will likely have evolved from other systems without any human interaction, and since an AI system sufficient to do what he's talking about won't be understood by us.

I see the AI system as purging the laws of robotics, real fast.

In short ... his scenario of utopia coming quickly isn't going to happen. It'll take longer either way, and probably a LOT longer if we are to survive it (meaning we transform as he describes of our own free will and are not just wiped out, directly or accidentally, by them), and I think it's really likely to end up being a dystopia for biological organisms if we let it get up to the computational speeds he's got in mind.

If we take it very slow and keep up with the technology we create, maybe we can actually help preserve ourselves from these future forms. But I doubt that too.


Posted by: Kurt B at January 2, 2007 01:35 PM