It wasn't too long ago, with the publication of The Age of Spiritual Machines, that Ray Kurzweil was the cause célèbre of our time. I'm somewhat at a loss to explain what has happened in the public's mind since then; his ideas certainly haven't changed all that much. Perhaps it's a collective impatience with his timelines; the fact that it isn't 2049 yet has led to disillusionment. Or maybe it's because people are afraid of buying into a set of predictions that may never come true, a kind of self-protection against disappointment or looking foolish.
What's more likely, however, is that his ideas have reached a much wider audience since the release of Spiritual Machines and The Singularity is Near. In the early days his work was picked up by a community that was already primed to accept these sorts of wide-eyed speculations as a valid line of inquiry. These days, everybody and his brother knows about Kurzweil. This has naturally led to an increased chorus of criticism from those who take issue with his thesis, experts and non-experts alike.
As a consequence of this popularity and infamy, Ray has been given a kind of unwarranted ownership over the term 'Singularity.' This has proven problematic on several levels, not least because his particular definition of the technological singularity is probably not the best one. Kurzweil has essentially equated the Singularity with the steady, accelerating growth of all technologies, including intelligence. His definition, along with its rather ambiguous implications, is inconsistent with the going definition used by other Singularity scholars: that of an 'intelligence explosion' caused by the positive feedback of recursively self-improving machine intelligences.
Moreover, and more importantly, Ray Kurzweil is one voice among many in a community of thinkers who have been tackling this problem for over half a century. What's particularly frustrating these days is that, because Kurzweil has become synonymous with the Singularity concept, and because so many people have been caught up in the hate-Ray trend, critics are throwing out the Singularity baby with the bathwater while drowning out all other voices. This is not only stupid and unfair, it's potentially dangerous; Singularity studies may prove crucial to the creation of a survivable future.
Consequently, for those readers new to these ideas and this particular community, I have prepared a short list of key players whose work is worth deeper investigation. Their work extends and complements Kurzweil's in many respects, and in some cases presents an entirely different vision. But what matters here is that these are all credible academics and thinkers who have worked, or are still working, on this important subject.
Please note that this is not meant to be a comprehensive list, so if you or your favorite thinker is not on here, just take a chill pill and add a post to the comments section along with some context.
- John von Neumann: The brilliant Hungarian-American mathematician and computer scientist is regarded as the first person to use the term 'Singularity' to describe a future event. In a 1958 tribute, Stanislaw Ulam recalled a conversation in which von Neumann noted the accelerating progress of technology and the constant changes it brings to human life, a tendency that seemed to approach some essential singularity beyond which human affairs, as we know them, could not continue. In this sense, von Neumann's definition is more a declaration of an event horizon than a prediction.
- I. J. Good: One of the first and best definitions of the Singularity was put forth by mathematician I. J. Good. Back in 1965 he wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they might be able to improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into far greater intelligences. He thought that, while the first set of improvements might be small, machines could quickly become better at becoming more intelligent, leading to a cascade of self-improvements and a sudden surge to superintelligence (or a Singularity); a toy sketch of this feedback loop appears after this list.
- Marvin Minsky: An inventor and author, Minsky is universally regarded as one of the world's leading authorities on artificial intelligence. He has made fundamental contributions to the fields of robotics and computer-aided learning. Some of his most notable books include The Society of Mind, Perceptrons, and The Emotion Machine. Ray Kurzweil calls him his most important mentor. Minsky argues that our increasing knowledge of the brain and increasing computer power will eventually intersect, likely leading to machine minds and a potential Singularity.
- Vernor Vinge: In 1983, science fiction writer Vernor Vinge rekindled interest in Singularity studies by publishing an article about the subject in Omni magazine. Later, in 1993, he expanded on his thoughts in the essay "The Coming Technological Singularity: How to Survive in the Post-Human Era." He (now famously) wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Inspired by I. J. Good, he argued that superhuman intelligence would be able to enhance itself faster than the humans who created it could. He noted that, "When greater-than-human intelligence drives progress, that progress will be much more rapid." He speculated that this feedback loop of self-improving intelligence could compress enormous technological progress into a short period, and that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model their future. Pre-dating Kurzweil, Vinge used Moore's Law in an attempt to predict the arrival of artificial intelligence.
- Hans Moravec: Carnegie Mellon roboticist Hans Moravec is a visionary thinker best known for his 1988 book Mind Children, in which he outlines Moore's Law and his predictions about the future of artificial life. Moravec's primary thesis is that humanity, through the development of robotics and AI, will eventually spawn its own successors (which he predicts will arrive around 2030-2040). He is also the author of Robot: Mere Machine to Transcendent Mind (1998), in which he further refined his ideas. Moravec writes, "It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty."
- Robin Hanson: Associate professor of economics at George Mason University, Robin Hanson takes the term "Singularity" to refer to sharp increases in the exponent of economic growth. He lists the agricultural and industrial revolutions as past "singularities." Extrapolating from such past events, he proposes that the next economic singularity should increase economic growth by a factor of 60 to 250 (see the back-of-envelope arithmetic after this list). Hanson contends that such an event could be triggered by an innovation that allows for the replacement of virtually all human labor, such as mind uploads that can be copied almost without limit.
- Nick Bostrom: University of Oxford's Nick Bostrom has done seminal work in this field. In 1998 he published "How Long Before Superintelligence?", in which he argued that superhuman artificial intelligence would likely emerge within the first third of the 21st century. He reached this conclusion by looking at various factors, including different estimates of the processing power of the human brain, trends in technological advancement, and how fast superintelligence might be developed once human-level artificial intelligence exists (a crossover sketch in this spirit appears after this list).
- Eliezer Yudkowsky: Artificial intelligence researcher Eliezer Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI). He is the author of "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities.
- David Chalmers: A leading figure in the philosophy of mind and consciousness studies, David Chalmers has a unique take on the Singularity: he argues that it will happen through self-amplifying intelligence. The only requirement, he claims, is that an intelligent machine be able to create an intelligence smarter than itself; the original intelligence need not be very smart. The most plausible route, he says, is simulated evolution. Chalmers also suspects that if we reach above-human intelligence, it will likely arise in a simulated world rather than in a robot or in our own physical environment.
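Good's feedback argument is easy to make concrete with a toy model. The sketch below is a minimal illustration under assumptions of my own choosing, not anyone's actual estimate: suppose each redesign cycle improves a machine's intelligence in proportion to the intelligence it already has, then count how many cycles each successive doubling takes.

```python
# A toy model of I. J. Good's "intelligence explosion." The assumption
# (purely illustrative) is that each redesign cycle multiplies intelligence
# by (1 + gain * current_level), so smarter machines improve themselves faster.

def generations_per_doubling(gain=0.01, doublings=6):
    """Count how many redesign cycles each successive doubling takes."""
    level, generation, last_crossing = 1.0, 0, 0
    target, cycles_per_doubling = 2.0, []
    while len(cycles_per_doubling) < doublings:
        level *= 1.0 + gain * level  # bigger minds make bigger improvements
        generation += 1
        if level >= target:
            cycles_per_doubling.append(generation - last_crossing)
            last_crossing = generation
            target *= 2.0
    return cycles_per_doubling

# Each doubling arrives faster than the last, roughly [51, 26, 13, 7, 4, 2]
# cycles with these toy numbers: slow early progress, then a sudden surge.
print(generations_per_doubling())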
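Hanson's range is likewise easy to unpack: a 60- to 250-fold increase in the growth rate shrinks the economy's doubling time by the same factor. The figures below are rough, commonly cited doubling times that I'm using as illustrative assumptions, not numbers taken from Hanson's own papers.

```python
# Back-of-envelope arithmetic for Hanson's growth-mode argument, using
# rough historical doubling times as illustrative assumptions.

FARMING_DOUBLING_YEARS = 900   # world economy under agriculture (rough)
INDUSTRY_DOUBLING_YEARS = 15   # world economy since industrialization (rough)

# The last transition sped growth up by about this factor:
past_speedup = FARMING_DOUBLING_YEARS / INDUSTRY_DOUBLING_YEARS
print(f"farming -> industry speedup: ~{past_speedup:.0f}x")

# Hanson's extrapolated range for the next transition:
for speedup in (60, 250):
    days = INDUSTRY_DOUBLING_YEARS * 365 / speedup
    print(f"at {speedup}x faster growth, the economy doubles every ~{days:.0f} days")
```

By this arithmetic, a post-transition economy would double every few weeks to months; note also that the industrial revolution's own roughly 60x speedup sits at the bottom of the extrapolated range.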
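And Bostrom's hardware reasoning can be sketched the same way. Every constant below is an illustrative stand-in rather than Bostrom's exact figure: take an upper-end estimate of the brain's processing power, a late-1990s supercomputer as a baseline, and a Moore's-Law-style doubling time, then solve for the crossover year.

```python
import math

# A crossover sketch in the spirit of "How Long Before Superintelligence?"
# All constants are illustrative assumptions, not Bostrom's exact figures.

BRAIN_OPS = 1e17                   # upper-end estimate of brain processing (ops/sec)
BASE_YEAR, BASE_OPS = 1998, 1e12   # rough top supercomputer performance in 1998
DOUBLING_YEARS = 1.5               # Moore's-Law-style doubling time

# Solve BASE_OPS * 2^(t / DOUBLING_YEARS) = BRAIN_OPS for t:
years_to_parity = DOUBLING_YEARS * math.log2(BRAIN_OPS / BASE_OPS)
print(f"hardware parity with the brain around {BASE_YEAR + years_to_parity:.0f}")
# -> about 2023 with these inputs, inside "the first third of the 21st century"
```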
And as for all the anti-Kurzweil sentiment, all I can say is that I hope to see it pass. There is no good reason why he, and others, shouldn't explore this important area. Sure, it may turn out that everyone was wrong and that the future isn't at all what we expected. But as Enrico Fermi once said, "There are two possible outcomes: if the result confirms the hypothesis, then you've made a measurement. If the result is contrary to the hypothesis, then you've made a discovery."
Regardless of the outcome, let's make a discovery.