This article is partly adapted from my TransVision 2007 presentation, “Whither ET? What the failing search for extraterrestrial intelligence tells us about humanity's future.”
In my previous two articles I attempted to reaffirm the Fermi Paradox (FP) and circumscribe some of the possible interstellar activities and developmental aspects of advanced extraterrestrial intelligences (ETIs).
In this article I will offer two broad solutions to the FP: 1) unavoidable self-destruction and 2) localized non-migratory existence.
It is not my intention at this time to provide a complete list of possible reconciliations, nor am I claiming to have found any kind of special answer; I just wish to explore these two particular possibilities.
At the conclusion of this article I offer some suggestions to help us move forward as we work to solve the observational problem that is the Great Silence.
Self-Destruction and the Great Filter
This is the most likely and philosophically satisfying answer to the Fermi Paradox – although hardly the most desirable.
Looking at ourselves as a typical example of a pre-Singularity civilization, what do we find? We find a species already in possession of apocalyptic technologies and on the verge of developing an entirely new generation of lethal weapons. In short order we will be required to manage an assortment of apocalyptic technologies; it will be akin to spinning plates. There are only so many that can be managed before one of them falls – and one is all that is needed to end the story.
Examples of pending existential risks include the ongoing threat of nuclear holocaust, a nanotechnological disaster, poorly programmed artificial superintelligence (i.e. the Singularity as an extinction event), catastrophic pandemics, and so on.
A counter-argument is often made that self-inflicted catastrophe could never be the fate of every civilization. How is it, critics ask, that not a single civilization escapes? Robin Hanson attempted to answer this question by proposing the Great Filter hypothesis – the suggestion that somewhere along the developmental path from dead matter to a galaxy-colonizing civilization lies a stage that virtually no life manages to get past. The question then: is the Great Filter behind us, or does it await us in our future?
I would argue, based on much of the data I presented earlier, that the Rare Earth hypothesis has to be rejected. Moreover, a healthy application of the self-sampling assumption strongly indicates that, if the filter exists, it lies ahead of us. The Galaxy is likely brimming with life, including complex life.
As far as the search for extraterrestrial life is concerned, Hanson argues that the detection of ETIs would be bad news. Given our observation of an unperturbed, uncolonized galaxy, such a detection would indicate that the Great Filter is indeed still ahead of us.
Another disturbing data point for a self-sampling species is that we here on Earth have come to possess apocalyptic technologies long before developing the capacity to live off-planet or in self-contained biospheres. All our eggs are in one basket, and they will remain there for the foreseeable future.
And then there's the disturbing Doomsday Argument, which suggests that we are closer to the end of human civilization than to its beginning.
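To see where that intuition comes from, here is a minimal sketch of the argument's arithmetic in the Gott-style self-sampling form. The birth count and the confidence level are illustrative assumptions on my part, not figures from the argument's originators:

```python
# A minimal sketch of the Doomsday Argument's arithmetic (Gott-style
# self-sampling). The birth count and confidence level below are
# illustrative assumptions, not figures from the article.

births_so_far = 1.0e11   # assumed cumulative human births to date (~100 billion)
confidence = 0.95        # chosen confidence level

# If our birth rank is a random sample from all humans who will ever
# live, then with probability `confidence` we are not among the first
# (1 - confidence) fraction of all births. That caps the total:
max_total_births = births_so_far / (1 - confidence)

print(f"With {confidence:.0%} confidence, total human births < {max_total_births:.1e}")
# -> roughly 2e12: at most ~20x as many humans are still to come
```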
Perhaps the most common and smug solution to the Fermi Paradox is the suggestion that we are the first. It is frequently invoked because it is said to best satisfy Occam’s Razor. But while it may be the simplest solution, it defies our sense of probability and disregards the central lesson of the Copernican Principle – the idea that we are not unique, but very likely a typical example.
Earlier I presented a picture of a biophilic Universe. If this issue is to be settled by a battle between Occam’s Razor and the Copernican principle, on this matter I’ll take Copernicus any day.
Interestingly, the longer we survive as a species without extraterrestrial contact, the more we can assume that we have passed the Great Filter.
Localized non-migratory digital existence
Now, the prospect of human extinction is quite obviously mere speculation. As Morpheus proclaimed in The Matrix: “We are still here!” Consequently, there are some non-extinction scenarios that I would like to explore.
The past 40 years of scientific progress have forced a re-evaluation of humanity’s potential. We appear to be headed for a transformation that takes us away from biological existence and towards a postbiological, or digital, existence. Our future visions must take this into account. As Milan Cirkovic and Robert Bradbury have noted, we need to adopt a digital perspective (pdf).
Why leave the local system when everything can be accomplished at home? Localized existence may hold promise for all the aspirations that an advanced intelligence could conceivably conjure.
Specifically, advanced intelligences may engage in computational megaprojects and live out virtual reality existences. The result would be an existential phase transition into virtual space, after which interstellar colonization would never emerge as a feasible option or experiment.
For example, advanced ETIs may construct Jupiter (pdf) and Matrioshka Brains. A Jupiter Brain would utilize all the matter of an entire planet for the purpose of computation, while a Matrioshka Brain (a nested series of Dyson spheres) would utilize the entire energy output of its parent star.
Determining an upper bound for computational power is difficult, but a number of thinkers have given it a shot. Eric Drexler has outlined a design for a system the size of a sugar cube that would perform 10^21 instructions per second. Robert Bradbury gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits – this would likely require a quantum computer, or computers built out of nuclear matter or plasma [see this article and this article for more information].
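For readers curious where Lloyd's figure comes from, here is a minimal sanity check, assuming the Margolus-Levitin bound (a system with total energy E can perform at most 2E/(π·ħ) elementary operations per second) applied to the rest-mass energy of one kilogram of matter:

```python
import math

# Rough reproduction of Seth Lloyd's "ultimate laptop" bound, assuming
# the Margolus-Levitin limit of 2E / (pi * hbar) operations per second
# for a system with total energy E.

hbar = 1.0545718e-34     # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
mass = 1.0               # mass of the hypothetical computer, kg

energy = mass * c**2                           # rest-mass energy, ~9e16 J
ops_per_second = 2 * energy / (math.pi * hbar)

print(f"Upper bound: {ops_per_second:.2e} ops/s")   # ~5.4e50, matching Lloyd's figure
```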
More radically, John Barrow has demonstrated that, under a very strict set of cosmological conditions, indefinite information processing (pdf) can exist in an ever-expanding universe.
This type of computational power is astounding and defies human comprehension. It’s like imagining a universe within a universe -- and that may be precisely how it gets used.
What would a future civilization do with all this power?
A civilization’s transition into high-speed digital mode may come about as a natural consequence of its development. The switch from an analog civilization to a digital one – one in which the clock-speed is accelerated to billions if not trillions of times faster than before – would preclude the desire to interact with the outside world. To such accelerated minds, the external universe would appear almost frozen, and any exchange with it prohibitively slow.
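To put rough numbers on that intuition, here is a small illustrative calculation; the speed-up factor is purely an assumption for the sake of the example, not a claim about any particular substrate:

```python
# Illustrative only: how external delays look to minds running much
# faster than biological real time. The speed-up factor is an assumed
# value, not a figure from the article.

SPEEDUP = 1e6                 # assumed subjective seconds per external second
SECONDS_PER_YEAR = 3.156e7

def subjective_years(external_seconds: float) -> float:
    """Subjective years experienced while `external_seconds` pass outside."""
    return external_seconds * SPEEDUP / SECONDS_PER_YEAR

# Waiting a single external second costs about 11.6 subjective days:
print(f"{subjective_years(1.0) * 365:.1f} subjective days per external second")

# A round-trip signal to Alpha Centauri (~8.7 external years) costs
# millions of subjective years:
print(f"{subjective_years(8.7 * SECONDS_PER_YEAR):.2e} subjective years per round trip")
```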
Megascale computers may be used to support uploaded civilizations. It may prove to be the existential substrate of choice – one in which the potential for self-destruction is greatly mitigated.
Advanced civilizations may also use this computing power to run simulations for scientific research, to run ancestor simulations, or for entertainment (pdf) purposes. Simulations may also be run out of some sort of ethical or sociological necessity.
Another possibility is the Hedonistic Imperative, a term coined by David Pearce. Given that virtually every religion has fantasized about an afterlife of bliss and an end to suffering, paradise engineering may come to represent the optimal end-state for intelligent life. Ultimately, societies will always be composed of conscious individuals, and the optimization of subjective experience may take precedence over colonial ambitions.
This tendency may be part of a broader, more 'existential' focus on life. Civilizational achievement may be measured not by the rate of imperialistic expansion or by how much energy a civilization can consume, but by how its individuals relate to themselves and their place in the Universe. This quest for introspective enlightenment may be characterized by efforts to optimize the mode of conscious experience.
What about long term survival?
As regards long-term survival, Vernor Vinge has predicted that post-Singularity intelligences will build local secondary systems to ensure the near-immortality of the infocomplex. These could exist in off-planet repositories. Shields composed of nanotechnology and femtotechnology could deal with gamma-ray bursts and other cosmological threats.
As for the local star, it could be given added life through stellar-engineering projects in which crucially depleted elements are re-introduced. Eventually, however, migration to a younger star would be necessary.
There may also be reasons for this type of existence that we cannot yet imagine. What these scenarios share, however, is that wide-scale colonization is not in the cards.
Moving Forward
Admittedly, these two broad solutions -- self-destruction and non-migration scenarios -- are unsatisfactory. The notion that not even one civilization can escape self-destruction is difficult to believe. Moreover, localized digital existence and the proliferation of colonization waves are not either/or scenarios; one can imagine a civilization embarking on both paths.
As we move forward in attempting to solve the FP we need to apply much stricter methodologies to the problem.
Solutions to the FP must avoid the traps of sociological analysis, which often yields non-exclusive scenarios. Answers like the ‘zoo hypothesis,’ ‘non-interference,’ or ‘they wouldn’t find us interesting’ tend to be projections of the human psyche and our own modern-day realities. Moreover, while these sorts of solutions may account for some of the actions of advanced civilizations, they cannot account for all.
Instead, a more rigorous and sweeping methodological frame needs to be applied – one which takes cosmological determinism and sociological uniformitarianism into account. In other words, we need to be concerned with cosmological limits and the pressure of physical and resource constraints.
This is what Nick Bostrom refers to as the strong convergence hypothesis -- the idea that all sufficiently advanced civilizations converge towards the same optimal state. This is a hypothesized developmental tendency akin to a Dawkinsian fitness peak -- the suggestion that identical environmental stressors, limitations and attractors will compel intelligences to settle around optimal existential modes. This theory does not favour the diversification of intelligence – at least not outside of a very strict set of living parameters.
The trick will be to predict what these deterministic constraints are. One can imagine factors such as limited resources, access to energy, computational requirements (including heat dissipation, error correction, and latency problems) and self-preservational modes (i.e. political and social orientations that eliminate the possibility of self-destruction).
A side benefit of this exercise is that it doubles as a foresight activity. The better we become at predicting the make-up of advanced ETIs, the better we will be at predicting our own future.
Consequently, our very own survival may depend on it.