
Monday, August 23, 2010

It's not all about Ray: There's more to Singularity studies than Kurzweil

I'm finding myself a bit disturbed these days about how fashionable it has become to hate Ray Kurzweil.

It wasn't too long ago, with the publication of The Age of Spiritual Machines, that he was the cause célèbre of our time. I'm somewhat at a loss to explain what has happened in the public's mind since then; his ideas certainly haven't changed all that much. Perhaps it's a collective impatience with his timelines; the fact that it isn't 2045 yet has led to disillusionment. Or maybe it's because people are afraid of buying into a set of predictions that may never come true—a kind of protection against disappointment or looking foolish.

What's more likely, however, is that his ideas have reached a much wider audience since the release of Spiritual Machines and The Singularity is Near. In the early days his work was picked up by a community that was already primed to accept these sorts of wide-eyed speculations as a valid line of inquiry. These days, everybody and his brother knows about Kurzweil. This has naturally led to an increased chorus of criticism from those who take issue with his thesis, experts and non-experts alike.

As a consequence of this popularity and infamy, Ray has been given a kind of unwarranted ownership over the term 'Singularity.' This has proven problematic on several levels, including the fact that his particular definition and description of the technological singularity is probably not the best one. Kurzweil has essentially equated the Singularity with the steady, accelerating growth of all technologies, including intelligence. His definition, along with its rather ambiguous implications, is inconsistent with the going definition used by other Singularity scholars: that of an 'intelligence explosion' caused by the positive feedback of recursively improving machine intelligences.

Moreover, and more importantly, Ray Kurzweil is one voice among many in a community of thinkers who have been tackling this problem for over half a century. What's particularly frustrating these days is that, because Kurzweil has become synonymous with the Singularity concept, and because so many people have been caught in the hate-Ray trend, people are throwing out the Singularity baby with the bathwater while drowning out all other voices. This is not only stupid and unfair, it's potentially dangerous; Singularity studies may prove crucial to the creation of a survivable future.

Consequently, for those readers new to these ideas and this particular community, I have prepared a short list of key players whose work is worth deeper investigation. Their work extends and complements the work of Ray Kurzweil in many respects. And in some cases they present an entirely different vision altogether. But what matters here is that these are all credible academics and thinkers who have worked or who are working on this important subject.

Please note that this is not meant to be a comprehensive list, so if you or your favorite thinker is not on here, just take a chill pill and add a post to the comments section along with some context.
  • John von Neumann: The brilliant Hungarian-American mathematician and computer scientist, John von Neumann is regarded as the first person to use the term 'Singularity' in describing a future event. In a conversation recounted by Stanislaw Ulam in 1958, von Neumann made note of the accelerating progress of technology and constant changes to human life. He felt that this tendency was giving the appearance of our approaching some essential singularity beyond which human affairs, as we know them, could not continue. In this sense, von Neumann's definition is more a declaration of an event horizon.
  • I. J. Good: One of the first and best definitions of the Singularity was put forth by mathematician I. J. Good. Back in 1965 he wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they might be able to improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into far greater intelligences. He thought that, while the first set of improvements might be small, machines could quickly become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a Singularity). (A toy numerical sketch of this feedback loop appears just after this list.)
  • Marvin Minsky: Inventor and author, Minsky is universally regarded as one of the world's leading authorities in artificial intelligence. He has made fundamental contributions to the fields of robotics and computer-aided learning technologies. Some of his most notable books include The Society of Mind, Perceptrons, and The Emotion Machine. Ray Kurzweil calls him his most important mentor. Minsky argues that our increasing knowledge of the brain and increasing computer power will eventually intersect, likely leading to machine minds and a potential Singularity.
  • Vernor Vinge: In 1983, science fiction writer Vernor Vinge rekindled interest in Singularity studies by publishing an article about the subject in Omni magazine. Later, in 1993, he expanded on his thoughts in the article, "The Coming Technological Singularity: How to Survive in the Post-Human Era." He (now famously) wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Inspired by I. J. Good, he argued that superhuman intelligence would be able to enhance itself faster than the humans who created it. He noted that, "When greater-than-human intelligence drives progress, that progress will be much more rapid." He speculated that this feedback loop of self-improving intelligence could cause large amounts of technological progress within a short period, and that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model their future. Pre-dating Kurzweil, Vinge used Moore's law in an attempt to predict the arrival of artificial intelligence.
  • Hans Moravec: Carnegie Mellon roboticist Hans Moravec is a visionary thinker who is best known for his 1988 book, Mind Children, where he outlines Moore's law and his predictions about the future of artificial life. Moravec's primary thesis is that humanity, through the development of robotics and AI, will eventually spawn its own successors (which he predicts to arrive around 2030-2040). He is also the author of Robot: Mere Machine to Transcendent Mind (1998), in which he further refined his ideas. Moravec writes, "It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half–century of development. Indeed, for that reason, many long–time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty."
  • Robin Hanson: Associate professor of economics at George Mason University, Robin Hanson has taken the "Singularity" term to refer to sharp increases in the exponent of economic growth. He lists the agricultural and industrial revolutions as past "singularities." Extrapolating from such past events, he proposes that the next economic singularity should increase the rate of economic growth by somewhere between 60 and 250 times. Hanson contends that such an event could be triggered by an innovation that allows for the replacement of virtually all human labor, such as mind uploads and virtually limitless copying.
  • Nick Bostrom: University of Oxford's Nick Bostrom has done seminal work in this field. In 1998 he published, "How Long Before Superintelligence," in which he argued that superhuman artificial intelligence would likely emerge within the first third of the 21st century. He reached this conclusion by looking at various factors, including different estimates of the processing power of the human brain, trends in technological advancement and how fast superintelligence might be developed once there is human-level artificial intelligence.
  • Eliezer Yudkowsky: Artificial intelligence researcher Eliezer Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI). He is the author of "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities.
  • David Chalmers: An important figure in philosophy of mind studies and neuroscience, David Chalmers has a unique take on the Singularity where he argues that it will happen through self-amplifying intelligence. The only requirement, he claims, is that an intelligent machine be able to create an intelligence smarter than itself. The original intelligence itself need not be very smart. The most plausible way, he says, is simulated evolution. Chalmers feels that if we get to above-human intelligence it seems likely it will take place in a simulated world, not in a robot or in our own physical environment.
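To make Good's feedback argument a little more concrete, here is a minimal toy simulation in Python. This is my own illustration rather than anything Good (or Kurzweil) published: the update rule and every number in it are arbitrary assumptions, chosen only to show the shape of the curve. When each round of redesign yields an improvement proportional to the square of current capability, growth is faster than exponential; the continuous analogue dI/dt = c·I² actually diverges in finite time, which is one literal reading of the word 'singularity.'

# Toy model of a recursive self-improvement cascade. Illustrative only:
# the rule I -> I + gain * I^2 and all parameter values are assumptions.

def recursive_self_improvement(initial=1.05, gain=0.05, generations=25):
    """Return intelligence levels, in multiples of the human baseline, per generation."""
    levels = [initial]  # the machine starts only slightly above human level
    for _ in range(generations):
        current = levels[-1]
        # Smarter designers find proportionally bigger improvements.
        levels.append(current + gain * current ** 2)
    return levels

if __name__ == "__main__":
    for generation, level in enumerate(recursive_self_improvement()):
        print(f"generation {generation:2d}: {level:,.2f} x human baseline")

Run it and the first twenty generations creep along at a few percent apiece; the last handful blow up from double digits to hundreds of thousands. That caricature is the interesting part of Good's scenario: not the early improvements, but the cascade.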
Like I said, this is a partial list, but it's a good place to start. Other seminal thinkers include Alan Turing, Alvin Toffler, Eric Drexler, Ben Goertzel, Anders Sandberg, John Smart, Shane Legg, Martin Rees, Stephen Hawking and many, many others. I strongly encourage everyone, including skeptics, to take a deeper look into their work.

And as for all the anti-Kurzweil sentiment, all I can say is that I hope to see it pass. There is no good reason why he—and others—shouldn't explore this important area. Sure, it may turn out that everyone was wrong and that the future isn't at all what we expected. But as Enrico Fermi once said, "There's two possible outcomes: if the result confirms the hypothesis, then you've made a discovery. If the result is contrary to the hypothesis, then you've made a discovery."

Regardless of the outcome, let's make a discovery.

Friday, February 27, 2009

Vernor Vinge on superhuman intelligence and Moore's Law

Science fiction writer and futurist Vernor Vinge argues that there's more to getting to the Singularity than Moore's Law.

And over at Accelerating Future, Michael Anissimov drives the point home that the Singularity is, at bottom, about smarter-than-human intelligence.

Charlie Stross on the Singularity: "Forget it"

Science fiction writer and futurist Charlie Stross has published his FAQ for the 21st Century. In discussing the technological Singularity he notes:
The rapture of the nerds, like space colonization, is likely to be a non-participatory event for 99.999% of humanity — unless we're very unlucky. If it happens and it's interested in us, all our plans go out the window. If it doesn't happen, sitting around waiting for the AIs to save us from the rising sea level/oil shortage/intelligent bioengineered termites looks like being a Real Bad Idea. The best approach to the singularity is to apply Pascal's Wager — in reverse — and plan on the assumption that it ain't going to happen, much less save us from ourselves.
I strongly recommend you read the entire FAQ.

Tuesday, June 3, 2008

Warren Ellis: Singularity 'indivisible from religious faith'

Science fiction author Warren Ellis has written a short and typically trite blog post about what he calls The NerdGod Delusion -- an attack against those who make the case for a technological Singularity:
The Singularity is the last trench of the religious impulse in the technocratic community. The Singularity has been denigrated as "The Rapture For Nerds," and not without cause. It’s pretty much indivisible from the religious faith in describing the desire to be saved by something that isn’t there (or even the desire to be destroyed by something that isn’t there) and throws off no evidence of its ever intending to exist. It’s a new faith for people who think they’re otherwise much too evolved to believe in the Flying Spaghetti Monster or any other idiot back-brain cult you care to suggest.

Vernor Vinge, the originator of the term, is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L Ron Hubbard.
Wow, sounds like Warren has some special knowledge of his own. I certainly hope that, aside from this vacuous and inflammatory post, he'll begin to share some of his expert views on AI theory and the potential for machine minds.

With the bold claim that there is "no evidence" to support the suggestion that SAI is engineerable, I'll have to assume that he's engaged the principal thinkers on the matter and offered sufficient critique to dismiss their findings outright -- thinkers like Eliezer Yudkowsky, Ben Goertzel, Hugo de Garis, and the many others devoted to the problem.

Otherwise, why should we take Ellis seriously? Or are we expected to take his position on mere faith alone?

The day is coming, my friends, when Singularity denial will seem as outrageous and irresponsible as the denial of anthropogenic global warming. And I think the comparison is fair; environmentalists are often chastised for their "religious-like" convictions and concern. It's easy to mock the Chicken Littles of the world.

And like the foot-dragging on climate change, there are consequences to inaction. The bogus and unfair memetic linkage between millenarian beliefs and the Singularity is a dangerous one, and the sooner this association is severed the better.

As I see it, there are four strategies to help us normalize the Singularity debate:
(1) We need to better promote and engage respected thinkers and public intellectuals who are sympathetic to the issue -- key figures like Ray Kurzweil, Robin Hanson, Nick Bostrom, Marvin Minsky, etc.

(2) A new generation of public figures is required -- renowned individuals who are willing to a) put their reputations at stake and b) use their popularity/credibility to raise awareness and help with foresight.

(3) Continue to frame the issue as a scientific endeavor and pitch the various scenarios as hypotheses; we need to keep the language within the scientific vernacular.

(4) Let the critics have it and show them no quarter, particularly when their denial is mere contradiction driven by sheer incredulity. We need to force them to better articulate their positions while defending our own with as much evidence as can be mustered.
And in the meantime, don't buy into Ellis's empty anti-Singularity rhetoric, which is all that it really is.

Tuesday, May 13, 2008

The Singularity is not what you think

People often ask me for my definition of the technological Singularity.

More specifically, they want me to offer some predictions as to what it will actually look like and what it might mean to them and the human species.

More often than not they don't like my answer, and it's probably because I re-frame the discussion and take the conversation elsewhere.

What people are really asking me to do is predict the outcome of the Singularity. And because I don't, they get frustrated with me.

But that's the problem. That's the whole point of this 'thing' we call the Singularity.

As has been noted elsewhere, virtually everyone has their own definition of the Singularity and it's become a very polluted term, one that's been stripped of all meaning.

So, before I tell you my own 'definition' of the Singularity, let me first tell you what it's not.

It's not any particular outcome or prognostication.

It's not any kind of definable event or transformational process.

Nor is it a term that can be used to describe a futuristic state of existence or the nature of advanced artificial intelligence.

But it's often used to describe these very things -- as if the term can be used as a synonym for what are essentially predictions. When people talk about the Singularity they can't help but inject their own anticipated outcome -- be it positive or negative.

I can be guilty of this at times. But so I don't get myself into too much futurological trouble I tend to refer to things as being in a state of post-Singularity. That's my clever way of avoiding any in-depth discussion as to how we'll actually get there.

Alright, so what's the technological Singularity?

Simply put, it's an unanswered question.

Vernor Vinge used the term Singularity for a very good reason. It's an event horizon in the truest sense.

But instead of a cosmological event horizon caused by a black hole's gravitational pull, it's a social event horizon caused by our inability to extrapolate the trajectory of human civilization beyond a certain point of technological sophistication.

The Singularity, therefore, describes a futurological problem -- a blind-spot in our predictive thinking.

That's it. There's no more to it than that.

Anything beyond this strict and limited definition is a discussion of something else -- an attempt to solve the conundrum and make predictions about 1) the actual characteristics and manifestation of the Singularity and 2) its aftermath.

So, if I say that the Singularity will involve a hard takeoff of SAI, I'm actually presenting a hypothesis that attempts to retire the term 'Singularity' and see it replaced by the term, uh, SAI hard takeoff event (we'll clearly have to come up with something better).

Or, if I say it will be a decades long process that sees humanity transition into a postbiological condition, I am likewise trying to put the term to rest.

Why does our predictive modeling break down?

Two reasons: 1) accelerating change and 2) the theoretic potential for the rise of recursively self-modifying artificial superintelligence.

Essentially, because disruptive change will be coming so fast and furiously, humanity's future remains largely unpredictable; there are too many variables and wildcards. And the rise of SAI, given its potential to be thousands upon thousands of times more powerful than the human mind, is simply beyond our prognosticative sensibilities.
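One toy way to see the first point formally (my own illustration, not a claim from the literature): suppose the interval between successive disruptive transitions shrinks geometrically, so that in LaTeX notation

$$t_{n+1} - t_n = \Delta\, r^{n}, \quad 0 < r < 1 \;\Longrightarrow\; t_{\infty} = t_0 + \Delta \sum_{n=0}^{\infty} r^{n} = t_0 + \frac{\Delta}{1 - r}.$$

Infinitely many transitions pile up before the finite horizon, so a model calibrated on the earlier, slower intervals simply has nothing to say beyond it. That's the blind spot, in formal dress.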

Sure, we can make wild-ass guesses. And maybe one or two of them may actually turn out to be correct. But we won't know for sure until we get there.

Or at least until we get really close.

Consequently, the Singularity is a relativistic term.

People of the future won't use the word. That's a term reserved for us in our ignorance.

But as we get closer to the Singularity we will in all likelihood gain an increased appreciation of what will happen at the point when machine intelligence exceeds the capacity of humans.

And keeps on going.

At that point, once the fog that is the Singularity begins to lift, we will cease to call it the Singularity and replace it with a more descriptive term.

So, as we journey forward, what was once concealed over the horizon will finally be revealed.

In the meantime, just remember to frame the Singularity as a social event horizon, particularly as it pertains to accelerating change and the seemingly imminent rise of SAI.

Wednesday, December 19, 2007

C-Realm Podcast interview

I was recently interviewed by KMO for the C-Realm Podcast.

In this episode KMO speaks to Bill McKibben and gets his insight into the "transhumanist agenda" and what it means to remain human in an engineered age. I provide the counterpoint and discuss the ethical and sociological implications of transhumanism.

Saturday, December 8, 2007

The problem with 99.9% of so-called 'solutions' to the Fermi Paradox

Non-exclusivity.

Sure, everyone has a convenient answer to the Fermi Paradox, but nearly all of them fail the non-exclusivity test. While some solutions to the FP may account for many if not most of the reasons why we haven't detected signs of ETI's, they cannot account for all.

For example, take the notion that interstellar travel is too costly or that civs have no interest in embarking on generational space-faring campaigns. Sure, this may account for a fair share of the situation, but in a Universe of a gajillion stars it cannot possibly account for all of it. There's got to be at least one civ, if not millions, that for whatever reason decides it just might be worth it.

Moreover, answers like the ‘zoo hypothesis,’ ‘non-interference,’ or ‘they wouldn’t find us interesting,' tend to be projections of the human psyche and a biased interpretation of current events.

Cosmological determinism

Analyses of the FP need to adopt a more rigorous and sweeping methodological frame.

We need to take determinism more seriously. The Universe we observe is based on a fixed set of principles -- principles that necessarily invoke cosmological determinism and in all likelihood sociological uniformitarianism. In other words, the laws of the Universe are moulding us on account of selectional pressures beyond our control.

Civilizations that don't conform to adaptive states will simply go extinct. The trouble is, we have no say in what these adaptive states might be like; we are in the business of conforming such that we continue to survive.

The question is, what are these adaptive states?

Strong convergence

Transhumanist philosopher Nick Bostrom refers to this as the strong convergence hypothesis -- the idea that all sufficiently advanced civilizations converge towards the same optimal state.

This is a hypothesized developmental tendency akin to a Dawkinsian fitness peak -- the suggestion that identical environmental stressors, limitations and attractors will compel intelligences to settle around optimal existential modes. This theory does not favour the diversification of intelligence – at least not outside of a very strict set of living parameters.

The space of all possible minds...that survive

Consequently, our speculations about the characteristics of a post-Singularity machine mind must take deterministic constraints into account. Yes, we can speculate about the space of all possible minds, but this space is dramatically shrunk by adaptationist constraints.

The question thus becomes, what is the space of all possible post-Singularity machine minds that result in a civilization's (or a singleton's) ongoing existence?

And it is here that you will likely begin to find a real and meaningful explanation to the Fermi Paradox and the problem that is non-exclusivity.

Monday, August 13, 2007

Sign up for the Singularity Summit

From the SIAI website:
The Singularity Institute for Artificial Intelligence is thrilled to announce the Singularity Summit 2007, a major two-day event bringing together 17 outstanding thinkers to examine a historical moment in humanity's history – a window of opportunity to shape how we develop advanced artificial intelligence. We invite you to join us.
Check out this list of speakers:

* Dr. Rodney Brooks, famous MIT roboticist and founder of iRobot
* Dr. Peter Norvig, director of research at Google
* Paul Saffo, Stanford, leading technology forecaster
* Sam Adams, distinguished engineer within IBM's Research Division
* Jamais Cascio, cofounder of World Changing and creator of Open the Future
* Dr. Ben Goertzel, director of research at SIAI and founder of Novamente
* Dr. J. Storrs Hall, author of Beyond AI: Creating the Conscience of the Machine
* Dr. Charles L. Harper, Jr., senior VP at John Templeton Foundation
* Dr. James Hughes, executive director of Institute for Ethics and Emerging Technologies
* Neil Jacobstein, prominent AI expert and CEO of Teknowledge
* Dr. Stephen Omohundro, founder of Self-Aware Systems
* Dr. Barney Pell, founder and CEO of Powerset
* Christine Peterson, cofounder of Foresight Nanotech Institute
* Peter Thiel, cofounder of PayPal and founder of Clarium Capital
* Wendell Wallach, author of Machine Morality: From Aristotle to Asimov and Beyond
* Eliezer Yudkowsky, Friendly AI pioneer and cofounder of SIAI
* Peter Voss, founder and CEO of Adaptive Artificial Intelligence

Sign-up

Sunday, August 5, 2007

The Fermi Paradox: Advanced civilizations do not…

This article is partly adapted from my TransVision 2007 presentation, “Whither ET? What the failing search for extraterrestrial intelligence tells us about humanity's future.”

As I stated in my previous article, “The Fermi Paradox: Back with a vengeance”:
The fact that our Galaxy appears unperturbed is hard to explain. We should be living in a Galaxy that is saturated with intelligence and highly organized. Thus, it may be assumed that intelligent life is rare, or, given our seemingly biophilic Universe, our assumptions about the general behaviour of intelligent civilizations are flawed.

A paradox is a paradox for a reason: it means there’s something wrong in our thinking.
So, let’s try to figure out what’s going on. Given the Great Silence, and knowing what we may be capable of in the future, we can start to make some fairly confident assumptions about the developmental characteristics of advanced civilizations.

But rather than describe the possible developmental trajectories of extraterrestrial intelligences (ETI's) (a topic I’ll cover in my next article), I’m going to dismiss some commonly held assumptions about the nature of advanced ETI’s – and by consequence some assumptions about our very own future.

Advanced civilizations do not…


…advertise their presence to the local community or engage in active efforts to contact

As SETI is discovering (but is in denial about), space is not brimming with easily detectable radio signals. SETI’s work during the past 40 years indicates that the quest to detect signals will not be easy.

This problem is not as simple as it sounds. A common rejoinder is that we’ve only recently started our search and have only scratched the surface. The trouble, however, is that it would be no problem for an ETI to communicate with us if they wanted to.

To do this all they would need to do is seed the Galaxy with Bracewell probes (a self-replicating communications beacon). This scenario was explored in Carl Sagan’s Contact in which a Bracewell probe was lying in wait about 26 light years from Earth in the Vega system. The probe was activated by our radio signals, causing it to direct powerful radio signals at Earth – signals that would not be overlooked.

We know that no such object exists in our solar system or within a radius of about 25 to 50 light years. Our radio activity should most certainly have activated any probe lying dormant in our local vicinity by now. It is also reasonable to assume that if ETI’s embarked on such a communications mission, every solar system would likely have its own Bracewell probe.
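A quick back-of-the-envelope check on that 25-to-50-light-year figure, using my own assumed start date for detectable radio leakage; the short Python sketch below just computes the farthest distance from which a triggered probe's reply would already have reached us.

# Back-of-the-envelope check, with assumed numbers: a dormant Bracewell probe
# at distance d hears our leakage after d years and its reply needs d more.

FIRST_STRONG_BROADCASTS = 1930   # assumption: roughly when leakage became detectable
YEAR_OF_WRITING = 2007

elapsed_years = YEAR_OF_WRITING - FIRST_STRONG_BROADCASTS
max_silent_radius_ly = elapsed_years / 2   # farthest probe whose reply is already overdue
print(f"Any probe within ~{max_silent_radius_ly:.0f} light years should have answered by now.")

With those assumptions the round-trip argument covers a sphere of roughly 38 light years, which is broadly consistent with the 25-to-50-light-year range quoted above.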

Which in turn raises a more troubling question: if ETI’s could construct and distribute probes in this way, why haven’t they gone the extra mile and spread other types of self-replicating devices such as uplift or colonization probes?

…engage in any kind of megascale engineering or stellar re-engineering that is immediately obvious to us within our light cone

All stellar phenomena that we have observed to this point appear ‘natural’ and unmodified. We see no clusters of perfectly aligned stars, nor do we see signs of Kardashev Type III civilizations utilizing the energy output of the entire Milky Way.

As for our light cone, the Milky Way is 100,000 light years in diameter; given the possibility that our Galaxy has been able to support intelligent life for about 4.5 billion years, a 100 million year time lag (at its worst) is not severe enough to cause observational problems (except for distant Galaxies).
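A quick sanity check on those numbers (my arithmetic, using the figures above): even granting the worst-case 100-million-year lag,

$$\frac{10^{8}\ \text{yr (worst-case lag)}}{4.5 \times 10^{9}\ \text{yr (habitable window)}} \approx 2\%, \qquad \frac{10^{5}\ \text{yr (light-travel time across the disk)}}{4.5 \times 10^{9}\ \text{yr}} \approx 0.002\%,$$

so the delay in what we can observe is a small fraction of the time the Galaxy has plausibly hosted intelligence.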

…colonize the Galaxy

Our Galaxy remains uncolonized despite the theoretical potential for advanced ETI’s to do so – namely the time and the technology. All that would be required is a self-replicating Von Neumann probe that proliferates outward at an exponential rate. Technologies required to build such a spacecraft would include artificial intelligence, molecular assembling nanotechnology, and an advanced propulsion scheme like anti-matter rockets, beamed energy, or interstellar ram-jets.
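How long would that take? Here is a rough estimate (my own illustrative numbers, not a prediction), treating the wavefront of self-replicating probes as advancing at an effective average speed that already folds in the time spent building copies at each stop.

# Rough galaxy-crossing times for a self-replicating probe wavefront.
# The effective speeds are assumptions; the diameter is the usual ~100,000 ly.

GALAXY_DIAMETER_LY = 100_000

for effective_speed_c in (0.1, 0.01, 0.001):   # fraction of lightspeed
    crossing_time_yr = GALAXY_DIAMETER_LY / effective_speed_c
    print(f"at {effective_speed_c} c effective: ~{crossing_time_yr:,.0f} years to span the Galaxy")

Even the slowest of those three cases, 100 million years, is only a couple of percent of the multi-billion-year window mentioned above, which is precisely why the apparent absence of such probes needs explaining.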

The reason for non-colonization is not obvious (hence the Fermi Paradox). In addition to technological feasibility there is the issue of economic and sociological imperatives for colonization.

…sterilize the Galaxy

Finally, some good news. We know the Galaxy is not sterile because we exist here on Earth.

Like the colonization potential, the prospect for an advanced ETI to sterilize the Galaxy exists through the use of berserker probes (a term attributed to Fred Saberhagen). These probes could steer NEO’s at planets, unleash nanotechnological phages, or toast planets with directed beams of highly concentrated light.

And like the Bracewell scenario, if a berserker was lying dormant in our solar system it should have destroyed us by now. If sterilization is the goal, there is no good reason for it to wait – particularly as our own civilization hurtles towards a Singularity transition.

Reasons for unleashing fleets of berserkers can be conceived, including xenophobic sociological imperatives or a malign artificial superintelligence. And all it would take is one civilization to do it. But as Robert Freitas has stated, "The present observational record can only support the much more restricted conclusion that no rapacious galactic civilisations are currently loose in the Galaxy."

…uplift or interact with pre-Singularity intelligences and biospheres

As a civilization that has been left to fend for itself, we have to assume that we, like any other civilization out there, go it alone. No one is coming to help us. The Great Silence will continue.

Moreover, our presence on Earth and our civilizational development can be explained by naturalistic phenomena. Our existence and ongoing progress have been devoid of extraterrestrial intervention. If we’re going to survive the Singularity, or any other existential risks for that matter, it will have to be by our own devices.

…re-engineer the cosmos

A number of prominent futurists, a list that includes Ray Kurzweil and Hans Moravec, have speculated that the destiny of advanced intelligence is to re-work the cosmos itself. This has been imagined as an ‘intelligence explosion’ as advanced life expands outward into the cosmos like a bubble. The entire Galaxy would be re-organized with much of its matter converted into computronium. Eventually, it is thought that the laws of the Universe will be re-tuned to meet the needs of advanced civilizations.

Unfortunately, we do not appear to inhabit a Universe that even remotely resembles this model. The cosmos appears natural and unperturbed.

This is reminiscent of the God problem and the presence of evil. We live in a Universe that is hostile, indifferent and pointless. If advanced ETI’s had the capacity to re-engineer the Universe such that it was safer, more meaningful and paradisiacal, they would have done so by now. By virtue of the fact that we observe such a dangerous Universe, we should probably conclude that such a project is not an option.

In the final part of this series I will make an effort to explain why advanced civilizations don’t do these things and what they might be doing instead.