Walter L. Wagner and Luis Sancho are pursuing a lawsuit in a U.S. federal court to prevent the Large Hadron Collider, located outside Geneva, from being switched on later this summer.
They're afraid that the new giant particle accelerator could destroy the entire planet.
That is, without overstating the obvious, a rather extraordinary claim; it doesn't get much more serious than that.
Wagner and Sancho argue that scientists at the European Organization for Nuclear Research, or CERN, have played down the chances that the collider could produce a tiny black hole or a strangelet that would convert Earth into a shrunken mass of strange matter.
They also claim that CERN has failed to provide an environmental impact statement as required under the National Environmental Policy Act.
This case illustrates a disturbing new trend -- one that started with the development of the atomic bomb: we are increasingly coming into the possession of technologies that could cause the complete extinction of the human species.
Or, at the very least, technologies that we think might destroy us.
Memories of the Manhattan Project
We don't know for certain that the collider will create a black hole or cause some other unpredictable disaster.
But we suspect that it might. Thus, it has to be considered an existential risk.
This is now the second time this has happened to us.
Back during the early days of the Manhattan Project, a number of scientists voiced concern that the explosion might start a runaway chain reaction by "igniting" the atmosphere. The possibility was studied and judged to be extremely remote, and, as we all know, the United States went ahead and detonated the first bomb on July 16, 1945.
But for a brief moment 63 years ago, some concerned observers held their breath and nervously watched as the bomb lit up the New Mexico sky.
And now we have a new contender for the perceived existential threat du jour.
Let science be our guide
Is the Large Hadron Collider an existential risk? Well, based on our current understanding of physics, we have to conclude that there is a non-zero probability that the collider will act in a way that could destroy the planet.
Just how non-zero is the next big question.
Three years ago, Max Tegmark and Nick Bostrom wrote a piece for Nature in which they took a stab at the question. They warned that humanity has been lulled into a false sense of security and that "the fact that the Earth has survived for so long does not necessarily mean that such disasters are unlikely, because observers are, by definition, in places that have avoided destruction."
To reach an answer, they combined physics, philosophy, and probability theory (and, most assuredly, a hefty dose of wild-ass guessing) and concluded that the odds of a civilization destroying itself with a particle accelerator experiment are no more than about one in a billion per year.
Admittedly, one in a billion seems vanishingly improbable.
So let's have some fun and smash those particles together at extreme velocities.
But I have to wonder: what if they had concluded a one in a million chance? Is that sufficiently low? Remember, we're talking about the fate of all human life here.
What about one in a hundred thousand?
At what probability point do we call it all off?
How will we ever be able to agree? And would our conclusions cause us to become cripplingly risk averse?
I have no good answers to these questions; suffice it to say that we need to continually develop our scientific sensibilities so that we can engage in risk assessment armed with facts instead of conjecture and hysteria.
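To put those probabilities in perspective, here is a crude back-of-the-envelope sketch. It's my own illustration, not anything from Tegmark and Bostrom's paper, and the world-population figure is a rough estimate: treat each threshold as the probability of losing everyone on Earth and compute the expected loss in statistical lives.

    # Crude expected-loss comparison for hypothetical extinction-risk
    # thresholds. The population figure and the thresholds themselves
    # are illustrative assumptions, not numbers from the Nature paper.

    WORLD_POPULATION = 6.7e9  # rough 2008 world population

    thresholds = {
        "one in a billion": 1e-9,
        "one in a million": 1e-6,
        "one in a hundred thousand": 1e-5,
    }

    for label, probability in thresholds.items():
        expected_loss = probability * WORLD_POPULATION
        print(f"{label}: expected loss of about {expected_loss:,.0f} lives")

By that admittedly blunt measure, a one-in-a-billion risk "costs" about seven statistical lives, while a one-in-a-million risk costs several thousand -- which is part of why agreeing on an acceptable threshold is so hard.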
The new normal
Moving forward, we can expect to see the sorts of objections being made by Wagner and Sancho become more frequent. Today it's particle accelerator experiments. Tomorrow it will be molecular nanotechnology. The day after tomorrow it will be artificial intelligence.
And then there are all those things we haven't even thought of yet.
The trick for human civilization will be to figure out how to assess tangible threats, determine levels of risk, and decide how to act.
But it doesn't end there. Inevitably, we will develop technologies that have great humanitarian potential but that are also double-edged swords. Molecular nanotechnology certainly comes to mind.
Consequently, we also have to figure out how to manage our possession of an ever-increasing arsenal of doomsday weapons.
It will be a juggling act in which a single mistake could mean the end of all human life.
Not a good proposition.