Monday, September 18, 2006

Jenkins on the ethics of historical simulations

Toronto-based futurist and lawyer Peter S. Jenkins recently had his paper, "Historical Simulations - Motivational, Ethical and Legal Issues," published in the Journal of Future Studies.

In this paper, Jenkins takes the simulation argument as posited by Nick Bostrom and questions whether a society capable of creating such simulations would be bound by ethical or legal considerations. The answer, says Jenkins, is, in all likelihood, no. Consequently, it is "highly probable that we are a form of artificial intelligence inhabiting one of these simulations."

Jenkins worries about the potential for endless simulation recursion (i.e. simulation "stacking") and the sudden termination of historical simulations. He speculates that the "end point" of history will occur when the requisite technologies for creating simulations become available (estimated to be 2050). Jenkins's conclusion: long-range planning beyond this date is futile.

Jenkins's paper is quite interesting and provocative. Simulation ethics and legality are clearly going to be pertinent issues in the coming years, and this paper is a good start in that direction. That said, I have a pair of critical comments to make.

First, any kind of speculative sociological analysis of posthuman behaviour and ethics is fraught with challenges. So much so, I would say, that it is nearly an impossible task. From our perspective as potential sims (or is that gnostics?), those who put us in this simulation are acting exceptionally unethically; no matter how you slice it, our subjective interpretation of this modal reality and our presence in it makes it a bona fide reality. My life is no illusion.

As Jenkins asserts, however, our moral sensibilities are not an indication that future societies will refrain from engaging in such activities--and on this point I agree. Jenkins bravely attempts to posit some explanations as to why they would still embark on such projects, but I would suggest that any explanation is likely to appear naive, pedestrian and non-normative in consideration of what the real factors truly are; it's like trying to get inside the head of gods.

My second point is that I'm not convinced that simulation recursion is a problem. There are two parts to this issue.

First, there does not appear to be any hard limit to computation that would preclude the emergence of recursive simulations (although, we are forced to wonder why a simulation would be run such that its inhabitants would produce a deeper set of sub-simulations).

According to Robert Bradbury, an expert in speculative computational megaprojects, there are a number of ways in which an advanced civilization could push the limits of computation, including radically reduced clock speeds. Further, should we find ourselves in a simulation, it would be futile to speculate about limits to computational capacities and even the true nature of existence itself! Bradbury writes,
Simulated realities will run slower than the host reality. But since one can presumably distribute resources among them and prioritize them at will (even suspending them for trillions of years) it isn't clear that the rates at which the simulations run are very important. If we happen to be the basement reality, then the universe has the resources and time to run trillions of trillions of trillions (at least) of simulations of our perceived local reality (at least up until the point where we have uplifted the entire universe to KT-III level). If we aren't in the basement reality then speculations are pointless because everything from clock speed to quantum mechanics could be nothing but an invention for the experiment (one might expect that bored "gods" would entertain themselves by designing and simulating "weird" universes).
Second, in consideration of the previous point, and the fact that we cannot presume the intentions of artificially superintelligent entities, speculations as to when a simulation will reach a "termination point" can only yield highly arbitrary and unfounded answers.

Thus, Jenkins is partially correct in his assertion that long-term planning beyond 2050 is futile. The reason, however, is not the advent of advanced simulations, but the onset of artificial superintelligence--what is otherwise referred to as the Singularity.

For more on this subject, there's my Betterhumans article from a few years back, "Welcome to the Unreal World", and my personal favourite, philosopher Barry Dainton's "Innocence Lost: Simulation Scenarios: Prospects and Consequences."
