Stephen Hawking is arguing that humanity may be putting itself in mortal peril by actively trying to contact aliens (an approach that is referred to as Active SETI). I’ve got five reasons why he is wrong.
Hawking has said, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”
He’s basically arguing that extraterrestrial intelligences (ETIs), once alerted to our presence, may swoop in and indiscriminately take what they need from us—and possibly destroy us in the process. David Brin paraphrased Hawking’s argument this way: “All living creatures inherently use resources to the limits of their ability, inventing new aims, desires and ambitions to suit their next level of power. If they wanted to use our solar system, for some super project, our complaints would be like an ant colony protesting the laying of a parking lot.”
It’s best to keep quiet, goes the thinking, lest we attract any undesirable alien elements.
A number of others have since chimed in and offered their two cents, writers like Robin Hanson, Julian Savulescu, and Paul Davies, along with Brin and many more. But what amazes me is that everyone is getting it wrong.
Here’s the deal, people:
1. If aliens wanted to find us, they would have done so already
First, the Fermi Paradox reminds us that the Galaxy could have been colonized many times over by now. We’re late for the show.
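To put a rough number on “many times over,” here’s a quick back-of-envelope sketch of my own (not anything from Hawking or Brin). The expansion speed and galactic figures below are round-number assumptions, and even a deliberately sluggish expansion makes the point:

```python
# Back-of-envelope sketch: how long a slow colonization wave would need to
# sweep the Milky Way, versus the time available. All figures are rough,
# round-number assumptions chosen for illustration.

GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way in light years
GALAXY_AGE_YR = 10e9           # order-of-magnitude age of the galactic disk
EXPANSION_SPEED_C = 0.01       # assume a sluggish 1% of light speed, pit stops included

crossing_time_yr = GALAXY_DIAMETER_LY / EXPANSION_SPEED_C  # ly / (ly per yr) = yr
sweeps_possible = GALAXY_AGE_YR / crossing_time_yr

print(f"One end-to-end crossing: ~{crossing_time_yr:.0e} years")
print(f"Crossings possible since the disk formed: ~{sweeps_possible:.0f}")
# => roughly 10 million years per crossing, i.e. on the order of a thousand
#    end-to-end sweeps. Hence 'colonized many times over'.
```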
Second, let’s stop for a moment and think about the nature of a civilization that has the capacity for interstellar travel. We’re talking about a civ that has (1) survived a technological Singularity event, (2) acquired molecular-assembling nanotechnology and radically advanced artificial intelligence, and (3) made the transition from a biological to a digital substrate (space-faring civs will not be biological—and spare me your antiquated Ringworld scenarios).
Now that I’ve painted this picture for you, and under the assumption that ETIs are proactively searching for potentially dangerous or exploitable civilizations, what could possibly prevent them from finding us? Assuming this is important to them, their communications and telescopic technologies would likely be off the scale. Bracewell probes would likely pepper the Galaxy. And Hubble bubble limitations aside, they could use various spectroscopic and other techniques to identify not just life-bearing planets, but civilization-bearing planets (e.g., by looking for specific post-industrial chemical compounds in the atmosphere, such as elevated levels of carbon dioxide).
Moreover, whether we like it or not, we have been ‘shouting out to the cosmos’ for quite some time now. Ever since the first radio signal beamed its way out into space, we have made our presence known to anyone within a radius of about 80 light years who cares to listen.
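The arithmetic behind that figure is trivial. The sketch below assumes our earliest strong broadcasts date to roughly the 1930s and takes the present as 2010; both years are my own illustrative assumptions, chosen to be consistent with the ~80 light year radius quoted above.

```python
# Quick arithmetic behind the '~80 light years' figure. The start year and
# 'present' year are assumptions for illustration only.

FIRST_STRONG_BROADCASTS = 1930   # rough onset of powerful radio/TV leakage
YEAR_OF_WRITING = 2010           # implied present of this piece

bubble_radius_ly = YEAR_OF_WRITING - FIRST_STRONG_BROADCASTS  # signals travel 1 ly/yr

print(f"Radio leakage bubble radius: ~{bubble_radius_ly} light years")
# => ~80 light years; anyone inside that sphere with sufficiently sensitive
#    receivers could already know we're here.
```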
The cat’s out of the bag, folks.
2. If ETIs wanted to destroy us, they would have done so by now
I’ve already written about this and I suggest you read my article, “If aliens wanted to they would have destroyed us by now.”
But I’ll give you one example. Keeping the extreme age of the Galaxy in mind, and knowing that every single solar system in the Galaxy could have been seeded many times over by now with various types of self-replicating probes, it’s not unreasonable to suggest that a civilization hell-bent on looking out for threats could have planted a dormant berserker probe in our solar system. Such a probe would be waiting to be activated by a radio signal, an indication that a potentially dangerous pre-Singularity intelligence now resides in the ‘hood.
In other words, we should have been destroyed the moment our first radio signal made its way through the solar system.
But because we’re still here, and because we’re on the verge of graduating to post-Singularity status, it’s highly unlikely that we’ll be destroyed by an ETI. Either that or they’re waiting to see what kind of post-Singularity type emerges from human civilization. They may still choose to snuff us out the moment they’re not satisfied with whatever it is they see.
Regardless, our communication efforts, whether active or passive, will have no bearing on the outcome.
3. If aliens wanted our solar system’s resources, they would have taken them by now
Again, given that we’re talking about a space-faring post-Singularity intelligence, it’s ridiculous to suggest that we have anything of material value for a civilization of this type. The only thing I can think of is the entire planet itself, which they could convert into computronium (a Jupiter brain)—but even that’s a stretch; we’re just a speck of dust.
If anything, they may want to tap into our sun’s energy output (e.g., they could build a Dyson Sphere or Matrioshka brain) or convert our gas giants into massive supercomputers.
It’s important to keep in mind that the only resource a post-Singularity machine intelligence could possibly want is one that furthers its ability to perform megascale levels of computation.
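To give a rough sense of what megascale computation actually buys, here’s an illustrative Landauer-limit estimate of my own (not a figure from Hawking, Brin, or anyone cited above) for a Dyson swarm capturing our sun’s entire output. The luminosity and operating temperature are assumed round numbers; the point is the order of magnitude.

```python
# Illustrative ceiling on computation a star-harvesting intelligence could buy
# with our sun, using the Landauer limit as the minimum energy per erased bit.
# The luminosity and operating temperature are assumptions for illustration.

K_BOLTZMANN = 1.380649e-23   # Boltzmann constant, J/K
LN2 = 0.6931                 # ln(2)
SOLAR_LUMINOSITY_W = 3.8e26  # total power a Dyson swarm could intercept
OPERATING_TEMP_K = 300       # assume room-temperature hardware (colder does better)

energy_per_bit_j = K_BOLTZMANN * OPERATING_TEMP_K * LN2
max_bit_ops_per_s = SOLAR_LUMINOSITY_W / energy_per_bit_j

print(f"Landauer-limited bit erasures per second: ~{max_bit_ops_per_s:.1e}")
# => on the order of 1e47 irreversible bit operations per second. The star's
#    raw output -- not anything down here on Earth -- is the resource worth wanting.
```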
And it’s worth noting that, once again, our efforts to make contact will have no influence on this scenario. If they want our stuff they’ll just take it.
4. Human civilization has absolutely nothing to offer a post-Singularity intelligence
But what if it’s not our resources they want? Perhaps we have something of a technological or cultural nature that’s appealing.
Well, what could that possibly be? Hmm, think, think, think….
What would a civilization that can crunch 10^42 operations per second want from us wily and resourceful humans….
Hmm, I’m thinking it’s iPads? Yeah, iPads. That must be it. Or possibly yogurt.
5. Extrapolating biological tendencies to a post-Singularity intelligence is asinine
There’s another argument out there that suggests we can’t know the behavior or motivational tendencies of ETIs, and that we therefore need to tread very carefully. Fair enough. But where this argument goes too far is in the suggestion that advanced civs act in accordance with their biological ancestry.
For example, humans may actually be nice relative to other civs who, instead of evolving from benign apes, evolved from nasty insects or predatory lizards.
I’m astounded by this argument. Developmental trends in human history have not been driven by atavistic psychological tendencies, but rather by such things as technological advancements, resource scarcity, economics, politics and many other factors. Yes, human psychology has undeniably played a role in our transition from jungle-dweller to civilizational species (traits like inquisitiveness and empathy), but those are low-level factors that ultimately take a back seat to the emergent realities of technological, demographic, economic and politico-societal development.
Moreover, advanced civilizations likely converge around specific survivalist fitness peaks that result in the homogenization of intelligence; there won’t be a lot of wiggle room in the space of all possible survivable post-Singularity modes. In other words, an insectoid post-Singularity superintelligent AI (SAI) or singleton will almost certainly be identical to one derived from an ape lineage.
Therefore, attempting to extrapolate ‘human nature’ or ‘ETI nature’ to the mind of its respective post-Singularity descendant is equally problematic. The psychology or goal structure of an SAI will be of a profoundly different quality than that of a biological mind that evolved through the processes of natural selection. While we may wish to impose certain values and tendencies onto an SAI, there’s no guarantee that a ‘mind’ of that capacity will retain even a semblance of its biological nature.
So there you have it.
Transmit messages into the cosmos. Or don’t. It doesn’t really matter because in all likelihood no one’s listening and no one really cares. And if I’m wrong, it still doesn’t matter—ETIs will find us and treat us according to their will.