
Monday, August 2, 2010

HuffPo: Sims, Suffering and God: Matrix Theology and the Problem of Evil

Check out Clay Farris Naff's latest article, Sims, Suffering and God: Matrix Theology and the Problem of Evil:
And that brings us back to the Sims. How can we know whether we're simulations in some superduper computer built by posthumans? Some pretty amusing objections have been raised, such as quantum tests that a simulation would fail. It seems safe to say that any sim-scientists examining the sim-universe they occupy would find that the laws of that universe are self-consistent. To assert that a future computer could simulate us, complete with consciousness, but crash when it came to testing Bell's Inequality strikes me as ludicrous. Unless, of course, the program were released by Microsoft. Oooh, sorry, Bill, cheap shot. Let's take it for granted that we could not expose a simulation from within -- unless the Creators wanted us to.

But the problem of pointless suffering leads me to a very different conclusion. Recall Bostrom's first conjecture: that few or none of our civilizations reach a posthuman stage capable of building computers that can run the kind of simulation in which we might exist. There are many ways civilization could end (just ask the dinosaurs!), but the one absolutely necessary condition for survival in an environment of continually increasing technological prowess is peace. Not a mushy, bumper sticker kind of peace, but the robust containment of conflict and competition within cooperative frameworks. (Robert Wright, in his brilliant if uneven book NonZero: The Logic of Human Destiny, unfolds this idea beautifully.)

What is civilization if not a mutual agreement to sacrifice some individual desires (to not pay taxes, for example, or to run through red lights) for the greater common good? Communication, trust, and cooperation make such agreements possible, but the one ingredient in the human psyche that propels civilization forward even as we gain technological power is empathy.
Link.

Tuesday, June 2, 2009

Welcome to the Machine, Part 5: Simulation taxonomy

Previously in series: The Ethics of Simulated Beings, Descartes's Malicious Demon, The Simulation Argument, Kurzweil's Nano Neural Nets.

As shocking as the Simulation Argument is, it's (arguably) no more jarring a revelation than previous existential paradigm shifts. While undoubtedly disturbing to the people alive at the time, earlier civilizations came to grips with the knowledge that they lived neither on a flat Earth nor at the center of the Universe.

Like the simulation argument, these previous scientific epiphanies assaulted humanity's sense of itself and its cosmic importance within the Universe. But just as it no longer troubles us to know that we don't live at the center of the Universe, it shouldn't bother us to know that we don't reside in the deepest reality. While it's tempting to diminish the "realness" or the validity of a virtual world, so long as the essential attributes of existence are present, there's no good reason to value one realm over another.

This being said, there are a number of unanswered questions about the type of simulation we could be living in—answers to which could have a profound impact on our self-conception.

We do not yet have the means to determine whether or not we live in a simulation, let alone the means to work out its type and nature. But this hasn't prevented serious speculation; we may still be able to describe and categorize the possible simulation types and varieties of virtual life:

Hard and soft simulations

The possibility exists, for example, for what philosopher Barry Dainton describes as hard and soft simulations. Hard simulations result from directly tampering with the neural hardware ordinarily responsible for producing experience, whereas people running in a soft simulation have no corporeal source; they are streams of consciousness generated entirely by computers running the appropriate software, with no external hardware support.

The inhabitants of The Matrix had bodies that existed outside of the simulation, thus qualifying it as a hard simulation. Sensory experience could be directly machine-controlled through the stimulation of the appropriate areas of the sensory cortex and the movements of the simulated body would be under the control of the source mind, but there would be no need for the source body to actually move. As Morpheus noted, "What is real? How do you define real? If you're talking about what you can feel, what you can smell, what you can taste and see...then real is simply...electrical signals interpreted by your brain."

Complete and partial simulations

There's also the possibility for complete and partial simulations. In a complete simulation, every element of the experience is generated by artificial means (e.g. the complete suppression of all psychological characteristics, including memory, in favor of novel ones).

But in a partial simulation only some parts or aspects of the experience are generated artificially (e.g. the person retains their individual psychology).

Active and passive simulants

Dainton also describes active and passive simulants. Actives are completely immersed in virtual environments, but they are in all other respects free agents—or, as Dainton concedes, as free as any agent can be. Their actions are not dictated by the program, but instead flow from their own psychologies, even if these are machine-implemented.

Passive subjects, however, have a completely preprogrammed course of experiences. "The subjects may have the impression that they are autonomous individuals making free choices," writes Dainton, "but they are deluded." All their conscious decisions are determined by the program. They have apparent psychologies, and are conscious, feeling agents, he notes, but their real psychologies are entirely suppressed or nonexistent.

Original and replacement psychologies

Other varieties of simulated life include subjects who have either retained their original psychologies or been given entirely new ones. In an original psychology simulation, a simulant has an external existence outside the simulation and retains their original psychology -- again, The Matrix provides a good example. But in a replacement psychology situation, the simulant also has an external existence, but none of the original psychology is retained; only consciousness is transferred.

Communal and individual simulations

Simulation experiences could also be communal or individual.

Communal simulations have a virtual environment that is shared by a number of different subjects, each with individual and autonomous psychological systems.

In an individual simulation, however, there is only one real subject with an autonomous psychology; the other "inhabitants" of the simulation are merely automatons, parts of the machine-generated virtual environment. Communal and individual simulations could also be combined, where 'real' psychologies are intermixed with automatons. This scenario is (somewhat) explored in the 1999 film, The Thirteenth Floor.

Combinations

Which leads to the next level of complexity: the idea that these simulation types could be mixed and matched. Indeed, if powerful simulation technology were to become commonplace, it is by no means inconceivable that simulations of all these varieties, particularly those of the hard kind, would be generated in great numbers.

One thinker who has worked through the various combinations is Tony Fleet. While there are as many as 32 different combinations (the five dichotomies above yield 2^5 = 32), he argues that only 9 of them are viable and/or logically consistent. For example, a partial simulation requires an external entity, so it is only possible in the hard case; a partial soft simulation is therefore impossible.

Fleet speculates that the only viable combinations involve the communal/active, individual/active, and individual/passive simulation types (be sure to check out his tables). That said, he does not believe that this covers all simulation types. For example, there is no distinction between physical, virtual and mixed simulations. More work clearly needs to be done to create a complete simulation taxonomy along with all logically consistent combinations.
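For readers who want to see the combinatorics spelled out, here is a minimal Python sketch of the space Fleet is enumerating. The dimension names and the single viability rule below are my own shorthand for what's described above -- Fleet's actual tables apply further constraints, which is how he narrows the 32 combinations down to 9.

```python
from itertools import product

# The five dichotomies described above, as I've labelled them here
# (these labels are mine, not Dainton's or Fleet's):
DIMENSIONS = {
    "substrate":  ("hard", "soft"),
    "scope":      ("complete", "partial"),
    "agency":     ("active", "passive"),
    "psychology": ("original", "replacement"),
    "sociality":  ("communal", "individual"),
}

def viable(combo):
    """Apply the one rule mentioned in the post: a partial simulation
    requires an external entity, so it must be a hard simulation."""
    if combo["scope"] == "partial" and combo["substrate"] != "hard":
        return False
    return True

combos = [dict(zip(DIMENSIONS, values)) for values in product(*DIMENSIONS.values())]
print(len(combos), "total combinations")                        # 2**5 = 32
print(sum(viable(c) for c in combos), "survive this one rule")  # 24 -- Fleet's
# further constraints (not encoded here) whittle the field down to 9.
```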

Posthuman vacations

This opens the door to some remarkable possibilities. How might these simulations and virtual reality experiences be utilized by our descendants, or even our future selves?

It's conceivable that people might take virtual reality 'trips' to the past quite frequently. Such trips would also likely be used on occasion during history lessons, for those with a particular interest in experiencing what it was like to live during certain periods of the past (Bostrom's Ancestor Simulations come to mind).

But such trips might also be taken for entertainment purposes. A future activity in a posthuman world might very well involve regular immersive and interactive journeys into simulated realities. And in order to increase the authenticity of such adventures, it's quite possible that posthumans may choose to temporarily suppress their psychologies and memories. Of course, they would recall the entire experience after re-awakening in their genuine reality as their authentic selves.

Which means that you might actually be an autonomous simulant with a replacement psychology living in a hard simulation.

And if that's the case, now what? How are you supposed to live?

A topic I'll return to in my next post.

Tuesday, April 14, 2009

Welcome to the Machine, Part 4: Kurzweil's nano neural nets

Previously in series: The Ethics of Simulated Beings, Descartes's Malicious Demon and The Simulation Argument.

As previously noted in this series, our entire world may be simulated. For all we know, we're running on a powerful supercomputer somewhere, the mere playthings of posthuman intelligences.

But this is not the only possibility. There's another way that this kind of fully immersive 'reality' could be realized -- one that doesn't require the simulation of an entire world. Indeed, it's quite possible that your life is not what it seems -- that what you think of as reality is actually an illusion of the senses. You could be experiencing a completely immersive and totally convincing virtual reality right now and you don't even know it.

How could such a thing be possible? Nanotechnology, of course.

The nano neural net

In his book, The Singularity Is Near, futurist Ray Kurzweil describes how a nanotechnology-powered neural network could give rise to the ultimate virtual reality experience. He speculates that by suffusing the brain with specialized nanobots, we will someday be able to override reality and replace it with an experience that's completely fabricated. And all without the use of a single brain jack.

Here's how:

First, we have to remember that all sensory data we experience is converted into electrical signals that the brain can process. The brain does a very good job of this, and we in turn experience these inputs as subjective awareness (namely as consciousness and qualia); our perception of reality is therefore nothing more than the brain's interpretation of incoming sensory information.

Now imagine that you could stop this sensory data at the conversion point and replace it with something else.

That's where the nano neural net comes in. According to Kurzweil, nanobots would park themselves near every interneuronal connection coming in from our senses (sight, hearing, touch, balance, etc.). They would then work to 1) halt the incoming sensory signals (not difficult -- we already know how to use "neuron transistors" that can detect and suppress neuronal firing) and 2) replace these inputs with the signals required to support a believable virtual reality environment (a bit more challenging).

As Kurzweil notes, "The brain does not experience the body directly." As far as the conscious self is concerned, the sensory data would completely override the feelings generated by the real environment. The brain would experience the synthetic signals just as it would the real ones.
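Kurzweil's two steps amount to a simple intercept-and-substitute pattern: the real signal is stopped at the point of conversion and a synthetic one is delivered in its place. Purely as a toy illustration (none of these names come from Kurzweil, and nothing here models real neurons), the control flow looks something like this:

```python
# Toy sketch of the intercept-and-substitute pattern described above.
# Illustrative only: the function names and signals are invented.

def deliver_to_cortex(signal: str) -> str:
    """Stand-in for whatever the brain does with an incoming signal."""
    return f"experienced: {signal}"

def perceive(real_signal: str, vr_generator=None) -> str:
    if vr_generator is None:
        return deliver_to_cortex(real_signal)  # ordinary perception
    synthetic = vr_generator(real_signal)      # step 2: generate a replacement signal
    return deliver_to_cortex(synthetic)        # step 1: the real signal never arrives

print(perceive("sunlight on skin"))
print(perceive("sunlight on skin", vr_generator=lambda s: "warm sand underfoot"))
```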

Generating synthetic experiences

Clearly, the second step -- generating new sensory signals -- is radically more complicated than the first (not to mention, of course, the difficulty of creating nanobots that can actually work within the brain itself!). Creating and transmitting credible artificial sensory data will be no easy feat. We will need to completely reverse engineer the brain so that we can map all requisite sensory interactions. We'll also need a fairly sophisticated AI to generate the stream of sensory data that's needed to create a succession of believable life experiences.

But assuming we can get a nano neural net to work, the sky's the limit in terms of how we could use it. Kurzweil notes,
You could decide to cause your muscles and limbs to move as you normally would, but the nanobots would intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move, appropriately adjusting your vestibular system and providing the appropriate movement and reorientation in the virtual environment.
From there we will create virtual reality experiences as real or surreal as our imaginations allow. We'll be able to choose different bodies, reside in all sorts of environments and interact with our fellow neural netters. It'll be an entirely new realm of existence. This new world, with all its richness and possibility, may eventually supplant our very own.

And in some cases, we may even wish to suppress and alter our memories so that we won't know who we really are, or that we're actually living in a VR environment...

A topic I will explore in more detail in my next post.

Friday, April 10, 2009

Welcome to the Machine, Part 3: The Simulation Argument

Previously in series: The Ethics of Simulated Beings and Descartes's Malicious Demon.

No longer relegated to the domain of science fiction or the ravings of street corner lunatics, the "simulation argument" has increasingly become a serious theory amongst academics, one that has been best articulated by philosopher Nick Bostrom.

In his seminal paper "Are You Living in a Computer Simulation?" Bostrom applies the assumption of substrate-independence, the idea that mental states can reside on multiple types of physical substrates, including the digital realm. He speculates that a computer running a suitable program could in fact be conscious. He also argues that future civilizations will very likely be able to pull off this trick and that many of the technologies required to do so have already been shown to be compatible with known physical laws and engineering constraints.

Harnessing computational power

Similar to futurists Ray Kurzweil and Vernor Vinge, Bostrom believes that enormous amounts of computing power will be available in the future. Moore's Law, which describes an eerily regular exponential increase in processing power, is showing no signs of waning, nor is it obvious that it ever will.

To build these kinds of simulations, a posthuman civilization would have to embark upon computational megaprojects. As Bostrom notes, determining an upper bound for computational power is difficult, but a number of thinkers have given it a shot. Eric Drexler has outlined a design for a system the size of a sugar cube that would perform 10^21 instructions per second. Robert Bradbury gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits – this would likely be done on a quantum computer or computers built out of nuclear matter or plasma [check out this article and this article for more information].
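To put those numbers in perspective, here's a back-of-envelope calculation in the spirit of Bostrom's own estimates. The figures for population, lifespan and brain-emulation cost are my own rough assumptions, not taken from the sources above:

```python
# Back-of-envelope sketch; all inputs are rough assumptions.
SECONDS_PER_YEAR  = 3.15e7
humans_ever_lived = 1e11   # assumption: ~100 billion people
years_per_life    = 50     # assumption: average lifespan
ops_per_brain_sec = 1e16   # assumption: a commonly cited (and contested) emulation cost

ops_for_human_history = humans_ever_lived * years_per_life * SECONDS_PER_YEAR * ops_per_brain_sec
print(f"{ops_for_human_history:.1e} operations to simulate all human mental history")
# ~1.6e36 operations. On Bradbury's planet-mass computer (1e42 ops/sec),
# that entire history would take on the order of a microsecond:
print(f"{ops_for_human_history / 1e42:.1e} seconds")
```

On assumptions anything like these, running enormous numbers of such simulations would be a trivial expense for a posthuman civilization -- which is precisely what the argument below turns on.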

More radically, John Barrow has demonstrated that, under a very strict set of cosmological conditions, indefinite information processing (pdf) can exist in an ever-expanding universe.

At any rate, this extreme level of computational power is astounding; it defies human comprehension. It's like imagining a universe within a universe -- and that's precisely how it may be used.

Worlds within worlds

"Let us suppose for a moment that these predictions are correct," writes Bostrom. "One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears." And because their computers would be so powerful, notes Bostrom, they could run many such simulations.

This observation, that there could be many simulations, led Bostrom to a fascinating conclusion. It's conceivable, he argues, that the vast majority of minds like ours do not belong to the original species but rather to people simulated by the advanced descendants of the original species. If this were the case, "we would be rational to think that we are likely among the simulated minds rather than among the original biological ones."
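In the paper, this step is made quantitative. Paraphrasing Bostrom's notation, the expected fraction of observers with human-type experiences who live in simulations is

\[
f_{\mathrm{sim}} \;=\; \frac{f_P \, \bar{N} \, \bar{H}}{\left(f_P \, \bar{N} \, \bar{H}\right) + \bar{H}} \;=\; \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1},
\]

where \(f_P\) is the fraction of human-level civilizations that survive to a posthuman stage, \(\bar{N}\) is the average number of ancestor simulations run by such a civilization, and \(\bar{H}\) is the average number of individuals who lived in a civilization before it reached that stage. Unless \(f_P\) or \(\bar{N}\) is very close to zero, \(f_{\mathrm{sim}}\) is very close to one -- which is the source of the trilemma Bostrom draws from the argument.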

Moreover, there is also the possibility that simulated civilizations may become posthuman themselves. Bostrom writes,
They may then run their own ancestor-simulations on powerful computers they build in their simulated universe. Such computers would be “virtual machines”, a familiar concept in computer science. (Java script web-applets, for instance, run on a virtual machine – a simulated computer – inside your desktop.) Virtual machines can be stacked: it’s possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration...we would have to suspect that the posthumans running our simulation are themselves simulated beings; and their creators, in turn, may also be simulated beings.
Given this matrioshkan possibility, "real" minds across all existence should be vastly outnumbered by simulated minds. Anyone who maintains that we're not living in a simulation must therefore address the gross improbability in question.

Again, all this presupposes, of course, that civilizations are capable of surviving to the point where it's possible to run simulations of forebears and that our descendants desire to do so. But as noted above, there doesn't seem to be any reason to preclude such a technological feat.

Next: Kurzweil's nano neural nets.

Tuesday, April 7, 2009

Welcome to the Machine, Part 1: The ethics of simulated beings

Without a doubt some of my favorite video games of all time have been those that involve simulations, including SimCity and The Sims.

When I play these games I fancy myself a demigod, managing and manipulating the slew of variables made available to me; with the click of a mouse I can alter the environment and adjust the nature of the simulated inhabitants themselves.

There's no question that these games are becoming evermore realistic and sophisticated. A few years ago, for example, a plug-in was developed for The Sims allowing the virtual inhabitants to entertain themselves by playing none other than SimCity itself. When I first heard about this I was struck with the vision of Russian Matrioshka nesting dolls, but instead of dolls I saw simulations within simulations within simulations.


And then I remembered good old Copernicus and his principle of mediocrity: We should never assume that our own particular place in space and time is somehow special or unique. Thinking of the simulation Matrioshka, I reflected on the possibility that we might be Sims ourselves: Why should we assume that we are at the primary level of reality?

Indeed, considering the radical potential for computing power in the decades to come, we may be residing somewhere deep within the Matrioshka.

Consequently, we are all faced with a myriad of existential, philosophical and ethical questions. If we are merely simulants, what does it mean to be alive? Are our lives somehow lessened or even devoid of meaning? Should we interact with the world and our fellow simulants differently than before we knew we were living in a simulation? How are we to devise moral and ethical codes of conduct?

In other words, how are we to live?

Well, there's no reason to get too excited over this. It's a bit of speculative metaphysics that doesn't really change anything -- assuming we are in a simulation, we should live virtually the same way as if we were living in the "real" world.

That is unless, of course, those running the simulation expect something from us. Which means we need to figure out what it is exactly we're supposed to do...

Tomorrow - Part 2: Descartes's 'Malicious Demon.'

Tuesday, August 14, 2007

The dark side of the Simulation Argument

Kudos goes out to Nick Bostrom for having his Simulation Argument (SA) featured in the New York Times today. The SA essentially states that, given the potential for posthumans to create a vast number of ancestor simulations, we should probabilistically conclude that we are in a simulation rather than the deepest reality.

Most people give a little chuckle when they hear this argument for the first time. I've explained it to enough people now that I've come to expect it. The chuckle doesn't come about on account of the absurdity of the suggestion; it's more a chuckle of logical acknowledgment -- a reaction to the realization that it may actually be true.

But this is no laughing matter; there are disturbing implications to the SA. We appear to be damned if we're in a simulation, and damned if we're not.

Dammit, we're in a simulation!

If we were ever to prove that we exist inside a simulation, it would be proof that the transhumanist assumption is correct -- that the transition from a human to a posthuman condition is in fact possible. But that will be of little solace to us measly sims! The simulation -- er, our world -- could be shut down at any time. Or, the variables that make up our modal reality could be altered in undesirable ways (e.g. our world could be turned into a Hell realm).

Also, should we reside in a simulation, we have to pretty much assume that our digital benefactors are rather indifferent to our plight. Based on the amount of suffering going on around here, we should probably adopt a Gnostic religious sensibility. These gods are not our allies; they may have created us, but they are not looking out for our best interests.

Dammit, we're not in a simulation!

Now, on the other side of the virtual coin, should we ever prove that we are not in a simulation, that would also be bad. It would be potential evidence that the transition to a posthuman condition may not be possible.

This problem is similar to the Fermi Paradox and the possible resolution that we are the first intelligent civilization to emerge in the Galaxy. That's a hard pill to swallow given the extreme odds against it.

Similarly, we should be disturbed that we are not in a simulation because it may imply that we don't have a very bright future -- that civilizations destroy themselves before developing the capacity to create simulations. Otherwise, we have to take on an exceptionally optimistic frame and assume that we'll survive the Singularity and be that special first civilization that spawns simulations. Again, a probabilistically unsatisfactory proposition.

Of course, advanced civilizations may not create simulations on this scale. The Fermi Paradox offers yet another example as to why this is a problematic suggestion. Given the technological potential to colonize the Galaxy, why haven't advanced civilizations done so? Similarly, why wouldn't advanced civilizations create simulations given the technological capacity to do so?

The NYT article goes over a number of these issues and Bostrom provides some possible solutions. Ultimately, however, the answers are unsatisfactory.

The Simulation Argument solves the Fermi Paradox! Maybe...

Perhaps the answer to the Fermi Paradox is that we are in a simulation. It would certainly explain the Great Silence. Why bother simulating extraterrestrials? Maybe that's the point of the simulation -- to study how a civilization advances without any outside intervention.

Or maybe the Fermi Paradox exists because all civilizations are busy working on their simulations....

Or perhaps.....ah, forget it. My brain (which is probably sitting in a vat somewhere) hurts.