
Thursday, August 26, 2010

Singularity Podcast interview now available

I was recently interviewed by Nikola Danaylov for the Singularity Podcast. You can listen to it here.

Monday, August 23, 2010

It's not all about Ray: There's more to Singularity studies than Kurzweil

I'm finding myself a bit disturbed these days by how fashionable it has become to hate Ray Kurzweil.

It wasn't too long ago, with the publication of The Age of Spiritual Machines, that he was the cause célèbre of our time. I'm somewhat at a loss to explain what has happened in the public's mind since then; his ideas certainly haven't changed all that much. Perhaps it's a collective impatience with his timelines; the fact that it isn't 2045 yet has led to disillusionment. Or maybe it's because people are afraid of buying into a set of predictions that may never come true—a kind of protection against disappointment or looking foolish.

What's more likely, however, is that his ideas have reached a much wider audience since the release of Spiritual Machines and The Singularity is Near. In the early days his work was picked up by a community that was already primed to accept these sorts of wide-eyed speculations as a valid line of inquiry. These days, everybody and his brother knows about Kurzweil. This has naturally led to an increased chorus of criticism from those who take issue with his thesis—experts and non-experts alike.

As a consequence of this popularity and infamy, Ray has been given a kind of unwarranted ownership over the term 'Singularity.' This has proven problematic on several levels, including the fact that his particular definition and description of the technological singularity is probably not the best one. Kurzweil has essentially equated the Singularity with the steady, accelerating growth of all technologies, including intelligence. His definition, along with its rather ambiguous implications, is inconsistent with the going definition used by other Singularity scholars: that of an 'intelligence explosion' caused by the positive feedback of recursively improving machine intelligences.

Moreover, and more importantly, Ray Kurzweil is one voice among many in a community of thinkers who have been tackling this problem for over half a century. What's particularly frustrating these days is that, because Kurzweil has become synonymous with the Singularity concept, and because so many people have been caught up in the hate-Ray trend, they are throwing out the Singularity baby with the bathwater and drowning out all other voices in the process. This is not only stupid and unfair, it's potentially dangerous; Singularity studies may prove crucial to the creation of a survivable future.

Consequently, for those readers new to these ideas and this particular community, I have prepared a short list of key players whose work is worth deeper investigation. Their work extends and complements that of Ray Kurzweil in many respects, and in some cases they present an entirely different vision. But what matters here is that these are all credible academics and thinkers who have worked or are working on this important subject.

Please note that this is not meant to be a comprehensive list, so if you or your favorite thinker isn't on here, just take a chill pill and add a post to the comments section along with some context.
  • John von Neumann: The brilliant Hungarian-American mathematician and computer scientist John von Neumann is regarded as the first person to use the term 'Singularity' to describe a future event. In a conversation recounted by Stanislaw Ulam in a 1958 tribute, von Neumann made note of the accelerating progress of technology and constant changes to human life. He felt that this tendency was giving the appearance of our approaching some essential singularity beyond which human affairs, as we know them, could not continue. In this sense, von Neumann's definition is more a declaration of an event horizon.
  • I. J. Good: One of the first and best definitions of the Singularity was put forth by mathematician I. J. Good. Back in 1965 he wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they might be able to improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into far greater intelligences. He thought that, while the first set of improvements might be small, machines could quickly become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a Singularity). (A toy numerical sketch of this feedback loop appears just after this list.)
  • Marvin Minsky: An inventor and author, Minsky is universally regarded as one of the world's leading authorities in artificial intelligence. He has made fundamental contributions to the fields of robotics and computer-aided learning technologies. Some of his most notable books include The Society of Mind, Perceptrons, and The Emotion Machine. Ray Kurzweil calls him his most important mentor. Minsky argues that our increasing knowledge of the brain and increasing computer power will eventually intersect, likely leading to machine minds and a potential Singularity.
  • Vernor Vinge: In 1983, science fiction writer Vernor Vinge rekindled interest in Singularity studies by publishing an article about the subject in Omni magazine. Later, in 1993, he expanded on his thoughts in the article "The Coming Technological Singularity: How to Survive in the Post-Human Era." He (now famously) wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Inspired by I. J. Good, he argued that a superhuman intelligence would be able to enhance itself faster than the humans who created it could. He noted that, "When greater-than-human intelligence drives progress, that progress will be much more rapid." He speculated that this feedback loop of self-improving intelligence could cause large amounts of technological progress within a short period, and that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model their future. Pre-dating Kurzweil, Vinge used Moore's law in an attempt to predict the arrival of artificial intelligence.
  • Hans Moravec: Carnegie Mellon roboticist Hans Moravec is a visionary thinker who is best known for his 1988 book, Mind Children, where he outlines Moore's law and his predictions about the future of artificial life. Moravec's primary thesis is that humanity, through the development of robotics and AI, will eventually spawn its own successors (which he predicts will arrive around 2030-2040). He is also the author of Robot: Mere Machine to Transcendent Mind (1998), in which he further refined his ideas. Moravec writes, "It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half–century of development. Indeed, for that reason, many long–time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty."
  • Robin Hanson: Associate professor of economics at George Mason University, Robin Hanson takes the "Singularity" term to refer to sharp increases in the exponent of economic growth. He lists the agricultural and industrial revolutions as past "singularities." Extrapolating from such past events, he proposes that the next economic singularity should increase the rate of economic growth by somewhere between 60 and 250 times. Hanson contends that such an event could be triggered by an innovation that allows for the replacement of virtually all human labor, such as mind uploads and virtually limitless copying.
  • Nick Bostrom: University of Oxford's Nick Bostrom has done seminal work in this field. In 1998 he published "How Long Before Superintelligence?", in which he argued that superhuman artificial intelligence would likely emerge within the first third of the 21st century. He reached this conclusion by looking at various factors, including different estimates of the processing power of the human brain, trends in technological advancement, and how fast superintelligence might be developed once there is human-level artificial intelligence.
  • Eliezer Yudkowsky: Artificial intelligence researcher Eliezer Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI). He is the author of "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities.
  • David Chalmers: An important figure in the philosophy of mind and consciousness studies, David Chalmers has a unique take on the Singularity: he argues that it will happen through self-amplifying intelligence. The only requirement, he claims, is that an intelligent machine be able to create an intelligence smarter than itself; the original intelligence need not itself be very smart. The most plausible route, he says, is simulated evolution. Chalmers feels that if we get to above-human intelligence, it seems likely it will take place in a simulated world, not in a robot or in our own physical environment.
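For readers who want to see the shape of the argument in numbers, here is a minimal toy sketch (in Python) of the feedback loop described in the I. J. Good entry above. Every parameter is hypothetical and chosen purely for illustration; this is a cartoon of the logic, not a model of any real system:

    # Toy model of I. J. Good's "intelligence explosion." All numbers are
    # hypothetical: intelligence is indexed to a human baseline of 1.0, and
    # each generation of machine finds an improvement proportional to its
    # own current intelligence.

    def intelligence_explosion(initial=1.0, gain=0.05, generations=30):
        """Iterate the feedback loop: each design produces a slightly
        better successor, which is in turn better at designing successors."""
        levels = [initial]
        for _ in range(generations):
            current = levels[-1]
            # Positive feedback: the smarter the designer, the bigger the
            # improvement it can find in its own design.
            levels.append(current * (1.0 + gain * current))
        return levels

    if __name__ == "__main__":
        for gen, level in enumerate(intelligence_explosion()):
            print(f"generation {gen:2d}: {level:12.4g}x human baseline")

Run it and the first dozen generations improve by only a few percent each; then the feedback takes over and the numbers blow up within a handful of further steps. That sudden knee in the curve, rather than any particular parameter choice, is the qualitative point behind Good's "explosion" and the event-horizon framing attributed to von Neumann above.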
Like I said, this is a partial list, but it's a good place to start. Other seminal thinkers include Alan Turing, Alvin Toffler, Eric Drexler, Ben Goertzel, Anders Sandberg, John Smart, Shane Legg, Martin Rees, Stephen Hawking and many, many others. I strongly encourage everyone, including skeptics, to take a deeper look into their work.

And as for all the anti-Kurzweil sentiment, all I can say is that I hope to see it pass. There is no good reason why he—and others—shouldn't explore this important area. Sure, it may turn out that everyone was wrong and that the future isn't at all what we expected. But as Enrico Fermi once said, "There are two possible outcomes: if the result confirms the hypothesis, then you've made a measurement. If the result is contrary to the hypothesis, then you've made a discovery."

Regardless of the outcome, let's find out.

Tuesday, August 17, 2010

On Skeptically Speaking Radio this coming Friday August 20

In light of the recently concluded Singularity Summit, I'll be debating blogger Greg Fish on Skeptically Speaking Radio this coming August 20. We'll be discussing the Singularity and various pathways towards powerful AI.

This will mark my second appearance on Skeptically Speaking. My first debate with Greg Fish can be found here.

NYT: The First Church of Robotics

Computer scientist and technology critic Jaron Lanier offers his two cents on the Silicon Valley mindset in his op-ed, "The First Church of Robotics." Excerpt:
The answer is simply that computer scientists are human, and are as terrified by the human condition as anyone else. We, the technical elite, seek some way of thinking that gives us an answer to death, for instance. This helps explain the allure of a place like the Singularity University. The influential Silicon Valley institution preaches a story that goes like this: one day in the not-so-distant future, the Internet will suddenly coalesce into a super-intelligent A.I., infinitely smarter than any of us individually and all of us combined; it will become alive in the blink of an eye, and take over the world before humans even realize what’s happening.

Some think the newly sentient Internet would then choose to kill us; others think it would be generous and digitize us the way Google is digitizing old books, so that we can live forever as algorithms inside the global brain. Yes, this sounds like many different science fiction movies. Yes, it sounds nutty when stated so bluntly. But these are ideas with tremendous currency in Silicon Valley; these are guiding principles, not just amusements, for many of the most influential technologists.

It should go without saying that we can’t count on the appearance of a soul-detecting sensor that will verify that a person’s consciousness has been virtualized and immortalized. There is certainly no such sensor with us today to confirm metaphysical ideas about people, or even to recognize the contents of the human brain. All thoughts about consciousness, souls and the like are bound up equally in faith, which suggests something remarkable: What we are seeing is a new religion, expressed through an engineering culture.

What I would like to point out, though, is that a great deal of the confusion and rancor in the world today concerns tension at the boundary between religion and modernity — whether it’s the distrust among Islamic or Christian fundamentalists of the scientific worldview, or even the discomfort that often greets progress in fields like climate change science or stem-cell research.

If technologists are creating their own ultramodern religion, and it is one in which people are told to wait politely as their very souls are made obsolete, we might expect further and worsening tensions. But if technology were presented without metaphysical baggage, is it possible that modernity would not make people as uncomfortable?
Link.

Monday, August 2, 2010

Abou Farman: The Intelligent Universe

Abou Farman has penned a must-read essay about Singularitarianism and modern futurism--one worth reading even if you don't agree with him or his frequent sleight-of-hand dismissals. Dude has clearly done his homework, and the result is provocative and insightful commentary.

Thinkers mentioned in this article include Ray Kurzweil, Eliezer Yudkowsky, Giulio Prisco, Jamais Cascio, Tyler Emerson, Michael Anissimov, Michael Vassar, Bill Joy, Ben Goertzel, Stephen Wolfram and many, many more.

Excerpt:
Images of transhuman and posthuman figures, hybrids and chimeras, robots and nanobots became uncannily real, blurring further the distinction between science and science fiction. Now, no one says a given innovation can’t happen; the naysayers simply argue that it shouldn’t. But if the proliferating future scenarios no longer seem like science fiction, they are not exactly fact either—not yet. They are still stories about the future and they are stories about science, though they can no longer be banished to the bantustans of unlikely sci-fi. In a promise-oriented world of fast-paced technological change, prediction is the new basis of authority.

That is why futurist groups, operating thus far on the margins of cultural conversation, were thrust into the most significant discussions of the twenty-first century: What is biological, what artificial? Who owns life when it’s bred in the lab? Should there be cut off-lines to technological interventions into life itself, into our DNA, our neurological structures, or those of our foodstuffs? What will happen to human rights when the contours of what is human become blurred through technology?

The futurist movement, in a sense, went viral. Bill McKibben’s Enough (2004) faced off against biophysicist Gregory Stock’s Redesigning Humans (2002) on television and around the web. New groups and think tanks formed every day, among them the Foresight Institute and the Extropy Institute. Their general membership started to overlap, as did their boards of directors, with figures like Ray Kurzweil ubiquitous. Heavyweight participants include Eric Drexler—the father of nanotechnology—and MIT giant Marvin Minsky. One organization, the World Transhumanist Association, which broke off from the Extropy in 1998, counts six thousand members, with chapters across the globe.

If the emergence of NBIC and the new culture of prediction galvanized futurists, the members were also united by an obligatory and almost imperial sense of optimism, eschewing the dystopian visions of the eighties and nineties. They also learned the dangers of too much enthusiasm. For example, the Singularity Institute, wary of sounding too religious or rapturous, presents its official version of the future in a deliberately understated tone: “The transformation of civilisation into a genuinely nice place to live could occur, not in a distant million-year future, but within our own lifetimes.”
Link.

Thursday, June 24, 2010

NYT: Merely Human? That’s So Yesterday

I'm sure most of you have caught this by now, but the New York Times recently published a 5,000-word article about the Singularity University, Ray Kurzweil, and the technological Singularity. All the usual suspects are referenced within, including the IEET's James Hughes, Terry Grossman, Peter Thiel, Peter Diamandis, Andrew Hessel, Sonia Arrison, and William S. Bainbridge.

A taste of the article:
Richard A. Clarke, former head of counterterrorism at the National Security Council, has followed Mr. Kurzweil’s work and written a science-fiction thriller, “Breakpoint,” in which a group of terrorists try to halt the advance of technology. He sees major conflicts coming as the government and citizens try to wrap their heads around technology that’s just beginning to appear.

“There are enormous social and political issues that will arise,” Mr. Clarke says. “There are vast groups of people in society who believe the earth is 5,000 years old. If they want to slow down progress and prevent the world from changing around them and they engaged in political action or violence, then there will have to be some sort of decision point.”

Mr. Clarke says the government has a contingency plan for just about everything — including an attack by Canada — but has yet to think through the implications of techno-philosophies like the Singularity. (If it’s any consolation, Mr. Long of the Defense Department asked a flood of questions while attending Singularity University.)

Mr. Kurzweil himself acknowledges the possibility of grim outcomes from rapidly advancing technology but prefers to think positively. “Technological evolution is a continuation of biological evolution,” he says. “That is very much a natural process.”
Disturbing fact revealed in the article: Google and Microsoft employees trailed only members of the military as the largest individual contributors to Ron Paul’s 2008 presidential campaign.

For a curious and infuriating response to the NYT article, be sure to check out Pete Shanks's "A Singular Kind of Eugenics," but be warned: the bullshit factor is off the charts (e.g., Shanks is terribly confused about the history of transhumanism, particularly the role and evolution of the Extropy Institute, the World Transhumanist Association, Humanity+ and the Institute for Ethics and Emerging Technologies).

Singularity Summit 2010: August 14-15

The Singularity Summit for 2010 has been announced and will be held on August 14-15 at the San Francisco Hyatt Regency. Be sure to register soon.

This year's Summit, which is hosted by the Singularity Institute, will focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called 'intelligence amplification' -- the other route to the technological Singularity.

Of particular interest to me will be the talk given by Irene Pepperberg, author of "Alex & Me," who has pushed the frontier of animal intelligence with her research on African Grey Parrots. She will be exploring the ethical and practical implications of non-human intelligence enhancement and of the creation of new intelligent life less powerful than ourselves.

A sampling of the speakers list includes:
  • Ray Kurzweil, inventor, futurist, author of The Singularity is Near
  • James Randi, skeptic-magician, founder of the James Randi Educational Foundation
  • Dr. Anita Goel, a leader in the field of bionanotechnology, Founder & CEO, Nanobiosym, Inc.
  • Dr. Irene Pepperberg, leading investigator of animal intelligence, trainer of the African Grey Parrot "Alex"
  • Prof. Alan Snyder, Director, Centre for the Mind at the University of Sydney, researcher in brain-computer interfaces
  • Prof. Steven Mann, augmented reality pioneer, professor at University of Toronto, "world's first cyborg"
  • Dr. Gregory Stock, bioethicist and biotech entrepreneur, author of Redesigning Humans: Our Inevitable Genetic Future
  • Dr. Ellen Heber-Katz, a professor at the Wistar Institute who studies rapid-regenerating mice
  • Joe Z. Tsien, scholar at the Medical College of Georgia, who created a strain of "Doogie Mouse" with twice the memory of average mice
  • Eliezer Yudkowsky, research fellow with the Singularity Institute
  • Michael Vassar, president of the Singularity Institute
  • David Hanson, CEO of Hanson Robotics, creator of the world's most realistic humanoid robots
  • Demis Hassabis, research fellow at the Gatsby Computational Neuroscience Unit at University College London
From the press release:
Will it one day become possible to boost human intelligence using brain implants, or create an artificial intelligence smarter than Einstein? In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a "Singularity", saying "From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye". Vinge pointed out that intelligence enhancement could lead to "closing the loop" between intelligence and technology, creating a positive feedback effect.

This August 14-15, hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople will converge in San Francisco to address the Singularity and related issues at the only conference on the topic, the Singularity Summit. Experts in fields including animal intelligence, artificial intelligence, brain-computer interfacing, tissue regeneration, medical ethics, computational neurobiology, augmented reality, and more will share their latest research and explore its implications for the future of humanity.

Tuesday, March 24, 2009

Singularity visions

Guest blogger David Brin is next scheduled to blog about the Singularity -- a future nexus point when the capacities of an artificial intelligence (or a radically augmented human) exceed those of humans. It is called the "Singularity" because it is impossible to predict what will follow. A Singularity could usher in an era of great wisdom, prosperity and happiness (not to mention the posthuman era), or it could result in the end of the human species.

David Brin believes that we are likely en route to a Singularity, but that its exact nature cannot be known. He also doesn't believe that such an event is inevitable. In his article, “Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future,” Brin posits four different possibilities for human civilization later this century:
  1. Self-destruction
  2. Positive Singularity
  3. Negative Singularity
  4. Retreat (i.e. neo-Luddism)
Brin, in a personal email to me, recently wrote, “[My] singularity friends think I am an awful grouch, while my conservative friends think I am a godmaker freak.” Indeed, Brin has expressed skepticism at the idea of a meta-mind or a Teilhard de Chardin apotheosis, while on the other hand he hasn’t shied away from speculations about transcendent artificial intelligences who shuffle through the Singularity without a care for their human benefactors.

Stay tuned for David's elaboration on these and other points.

Saturday, February 2, 2008

New Terminator show brings Singularity to prime time


Like most science fiction on television these days, the new Terminator show is virtually unwatchable. That said, the writers have correctly introduced futurist lingo into the story.

At one point in the episode "The Turk," John Connor tells his mom, Sarah Connor, about the Singularity and describes it as a point in time when machines are able to build superior versions of themselves without the aid of humans--after which point we can pretty much kiss our asses goodbye.

That's just about right, actually. And I've always found the Terminator future to be one of the more disturbing dystopic visions. Given the potential for robotic armies and greater-than-human artificial intelligences, one has to pause for thought...

Wednesday, January 24, 2007

Richard Clarke on NPR

Richard Clarke, the counterterrorism czar to Presidents Clinton and George W. Bush, was recently interviewed on NPR.

Clarke talks about his latest novel, Breakpoint, and discusses such issues as cyber-insecurity, the growing threat from China, transhumanism (human enhancement, mind-machine mergers) and the Singularity. He also talks about the fact that the US has disavowed genetic modification for enhancement and speculates about what would be done if other countries allowed it.

Clarke is a former counterterrorism official and is currently a consultant for ABC News, adjunct faculty member at Harvard's Kennedy School of Government, and author of Against All Enemies and The Scorpion's Gate.

[thanks to Gary for the link]