Monday, March 30, 2009

The perils of nuclear disarmament: How relinquishment could result in disaster

Almost everyone agrees that humanity needs to get rid of its nuclear weapons. There's no question that complete relinquishment would all but eliminate the threat of deliberate and accidental nuclear war, along with the ongoing problem of proliferation.

Indeed, the ongoing presence of nuclear weapons is the greatest single threat to the survival of humanity. To put the problem into perspective, there are currently 26,000 nuclear warheads ready to go -- 96% of which are controlled by the United States and Russia. These two countries alone could unleash the power of 70,000 Hiroshimas in a matter of minutes. In the event of an all-out nuclear war between the U.S. and Russia, it is estimated that as many as 230 million Americans and 56 million Russians would be killed by the initial blasts. The longer term impacts are nearly incalculable, but suffice it to say human civilization would be hard pressed to survive.
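A rough back-of-envelope check of the "70,000 Hiroshimas" figure is sketched below. The deployed-warhead count and average yield are illustrative assumptions (only a fraction of the 26,000 warheads are on high alert, and yields vary widely), not official numbers:

    # Back-of-envelope check of the "70,000 Hiroshimas" figure.
    # All inputs are rough, illustrative assumptions, not official counts.
    deployed_warheads = 5000   # assumed US + Russian warheads on high alert
    avg_yield_kt = 200         # assumed average yield per warhead (kilotons)
    hiroshima_kt = 15          # approximate yield of the Hiroshima bomb (kilotons)

    total_yield_kt = deployed_warheads * avg_yield_kt
    hiroshima_equivalents = total_yield_kt / hiroshima_kt

    print(f"Total yield: {total_yield_kt / 1000:.0f} megatons")
    print(f"Hiroshima equivalents: {hiroshima_equivalents:,.0f}")
    # ~1,000 megatons, on the order of 70,000 Hiroshimas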

Given the end of the Cold War and the establishment of the START agreements, the idea of a deliberate nuclear war seems almost anachronistic. But the potential nightmare of an accidental nuclear exchange is all too real. We have already come very close on several occasions, including the Stanislav Petrov incident in 1983. We are living on borrowed time.

The assertion, therefore, that we need to completely rid ourselves of nuclear weapons appears more than reasonable; our very survival may depend on it. In fact, there are currently a number of initiatives underway that are working to see this vision come true. President Barack Obama himself has called for the complete elimination of nuclear weapons.

But before we head down the path to disarmament, we need to consider the consequences. Getting rid of nuclear weapons is a more difficult and precarious proposition than most people think. It's important therefore that we look at the potential risks and consequences.

There are a number of reasons for concern. A world without nukes could be far more unstable and prone to both smaller and global-scale conventional wars. And somewhat counter-intuitively, the process of relinquishment itself could increase the chance that nuclear weapons will be used. Moreover, we have to acknowledge the fact that even in a world free of nuclear weapons we will never completely escape the threat of their return.

The Bomb and the end of global-scale wars

The first and (so far) final use of nuclear weapons during wartime marked a seminal turning point in human conflict: the development of The Bomb and its presence as an ultimate deterrent has arguably preempted the advent of global-scale wars. It is an undeniable fact that an all-out war has not occurred since the end of World War II, and it is very likely that the threat of mutually assured destruction (MAD) has had a lot to do with it.

The Cold War is a case in point. Its very nature as a "war" without direct conflict points to the acknowledgment that it would have been ludicrous to engage in a suicidal nuclear exchange. Instead, the Cold War turned into an ideological conflict largely limited to foreign skirmishes, political posturing and espionage. Nuclear weapons had the seemingly paradoxical effect of forcing the United States and the Soviet Union into an uneasy peace. The same can be said today for India and Pakistan -- two rival and nuclear-capable nations mired in a cold war of their own.

It needs to be said, therefore, that the absence of nuclear weapons would dramatically increase the likelihood of conventional wars re-emerging as military possibilities. And given the catastrophic power of today's weapons, including the introduction of robotics and AI on the battlefield, the results could be devastating, even existential in scope.

So, while the damage inflicted by a restrained conventional war would be an order of magnitude lower than that of a nuclear war, the probability of a return to conventional wars would be significantly increased. This forces us to ask some difficult questions: Is nuclear disarmament worth it if the probability of conventional war becomes ten times greater? What about a hundred times greater?

And given that nuclear weapons function more as a deterrent than as tactical weapons, can such a calculation even be made? If nuclear disarmament spawns x conventional wars with y casualties, how could we measure those catastrophic losses against a nuclear war that's not really supposed to happen in the first place? The value of nuclear weapons is not that they should be used, but that they should never be used.
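One way to make the dilemma concrete is a crude expected-casualties comparison. The sketch below uses entirely made-up, illustrative probabilities and casualty figures; it is meant only to show the shape of the calculation, not to settle it:

    # Crude expected-harm comparison between two hypothetical worlds.
    # Every number here is an illustrative assumption, not an estimate.

    # World A: nuclear-armed, deterrence in place
    p_nuclear_war = 0.001             # assumed annual probability of nuclear war
    nuclear_deaths = 300_000_000      # assumed deaths (initial blasts only)

    # World B: disarmed, conventional war assumed ten times more likely
    p_conventional_war = 0.01         # assumed annual probability
    conventional_deaths = 20_000_000  # assumed deaths

    expected_a = p_nuclear_war * nuclear_deaths
    expected_b = p_conventional_war * conventional_deaths

    print(f"Expected annual deaths, armed world:    {expected_a:,.0f}")
    print(f"Expected annual deaths, disarmed world: {expected_b:,.0f}")
    # With these made-up numbers the expected harms are comparable --
    # which is exactly why the calculation is so hard to make.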

Upsetting the geopolitical balance

Today's global geopolitical structure has largely converged around the realities and constraints posed by the presence of apocalyptic weapons and by the nations who control them. Tension exists between the United States and Russia, but there are limits to how far each nation is willing to provoke the other. The same can be said for the United States' relationship with China. And as already noted, nuclear weapons may be forcing the peace between India and Pakistan (it's worth noting that conventional war between two nuclear-capable nations is akin to suicide; nuclear weapons would be used the moment one side senses defeat).

But should nuclear weapons suddenly disappear, the current geopolitical arrangement would be turned on its head. Despite its rhetoric, the United States is not a hegemonic power. We live in a de facto multi-polar geopolitical environment. Take away nuclear weapons and we get a global picture that looks startlingly similar to pre-World War I Europe.

Additionally, the elimination of nuclear weapons could act as a destabilizing force, giving some up-and-coming nation-states the idea that they could become world players. Despite United Nations sanctions against invasion, some leaders could become bolder (and even desperate) and lose their inhibitions about claiming foreign territory; nations may start to take more calculated and provocative risks -- even against those nations who used to be nuclear powers.

Today, nuclear weapons are being used to keep "rogue states" in check. It's no secret that the United States is willing to bomb Iran (and has even openly considered doing so) as it works to develop its own nuclear weapons and threaten the region, if not the United States itself (Iran will soon have intercontinental ballistic capability; the same goes for North Korea).

It can be said, therefore, that the composition of a nuclear-free world would be far more unstable and unpredictable than a world with nukes. Relinquishment could introduce us to an undesirable world in which new stresses and conflicts rival those posed by the threat of nuclear weapons.

It should be noted, however, that nuclear weapons do nothing to mitigate the threat of terrorism. MAD becomes a rather soft deterrent when "political rationality" comes into question; rationality can be a very subjective thing, as is the sense of self-preservation, particularly when nihilism and metaphysical beliefs come into play (i.e. religious fanaticism).

Nukes could still get into the wrong hands

Even in a world where nuclear weapons have been eliminated, it would not be outlandish to suggest that fringe groups, and even rogue nations, would still work to obtain the devices. The reasons for doing so are obvious: in a grim turn of events, such weapons would enable them to take the rest of the world hostage.

Consequently, we can never be sure that, at some point down the line, when push comes to shove, some countries or terrorist groups won't independently work to develop their own nuclear weapons.

Dangers of the disarmament process

Should the nuclear-capable nations of the world disarm, the process itself could lead to a number of problems. Even nuclear war.

During disarmament, for example, it's conceivable that nations would become distrustful of one another -- even to the point of complete paranoia and all-out belligerence. Countries would have to work particularly hard to show concrete evidence that they are in fact disarming. Any evidence to the contrary could severely escalate tensions and thwart the process.

Some strategic thinkers have even surmised that there might be more incentive for a first strike when both sides retain only small numbers of nuclear weapons, since the attacking nation could hope to survive the conflict. It's suspected, therefore, that the final stage of disarmament, when all sides are supposed to dismantle the last of their weapons, will be an exceptionally dangerous time. Paradoxically, disarmament could actually increase the probability of deliberate nuclear war.

In addition, concealing a few nukes at this stage could give one nation an enormous military advantage over those nations that have been completely de-nuclearized. This is not as far-fetched as it might seem: it would be all too easy and advantageous for a nation to conceal a secret stockpile and attempt to gain political and military advantages through nuclear blackmail or attack.

Conclusion

I want to make it clear at this time that I am not opposed to nuclear disarmament.

What I am trying to do here is bring to light the challenges that such a process would bring. If we're going to do this we need to do a proper risk assessment and adjust our disarmament strategies accordingly (assuming that's even possible). I still believe that we should get rid of nuclear weapons -- it's just that our nuclear exit strategy will have to include some provisions to alleviate the potential problems I described above.

At the very least we need to dramatically reduce the number of live warheads. Having 26,000 active weapons and a stockpile the size of Mount Everest is sheer lunacy. There's no other word for it. It's a situation begging for disaster.

All this said, we must also admit that we have permanently lost our innocence. We will have to live with the nuclear threat in perpetuity, even if these weapons cease to physically exist. There will never be a complete guarantee that countries have completely disarmed themselves and that re-armament won't ever happen again in the future.

But thankfully, a permanent guarantee of disarmament is not required for this process. The longer we go without nuclear weapons, the better.

Sunday, March 29, 2009

When superheroes run amok: Exploring posthuman and technological themes in Watchmen

Warning: Spoiler alert.

Many things have been said about the recent film adaptation of the Watchmen graphic novel series, particularly the ways in which it has come to redefine the superhero genre. While it's certainly a brave film that's succeeded in pushing a number of boundaries, I believe its true strength lies in its various philosophical themes and social commentary. In particular, I was drawn to the discussions of technological power and the innovative ways in which this commentary was represented on the screen.

Looked at metaphorically, Watchmen is largely a treatise on the use and misuse of advanced technologies and the resultant soul-searching, dehumanization and loss of innocence that inevitably follows. It's a cynical and sobering look at weapons technologies in particular and how they often work to create disparities -- whether it be the military disparities between combatant nations or the ways in which such technologies pull people apart.

And with the presence of a god-like posthuman superhero, Watchmen shows the ways in which a greater-than-human intelligence could either serve humanity's needs or bring it to its knees.

Doctor Manhattan and The Bomb

By far, the most powerful of the Watchmen is Doctor Manhattan, a "quantum superhero" with powers so incredible that he is essentially considered a god. He has the ability to manipulate matter with his thoughts, allowing him to teleport, change his size, copy himself, move objects through space and disintegrate people. His powers are frightening to say the least.

As his name suggests, Manhattan is the walking, talking personification of the nuclear bomb. And as it turns out, god happens to be an American -- a god that the U.S. government isn't afraid to utilize. In Strangelovian manner, the United States uses Manhattan (both directly and indirectly) as a super-weapon to stave off the Soviet Union. Disturbingly, Manhattan does as he's told. He is indifferent to the nature of his powers and the horrors he can unleash; like the bomb itself -- or any technology for that matter -- Dr. Manhattan is largely neutral. It's those who choose to unleash his awful powers who make the moral judgments.

But as the unfolding story suggests, there are consequences to the use of such power.

Take the episode in Vietnam, for example, and the awesome image of the gigantic Doctor Manhattan cutting a swath through the jungle and annihilating the Viet Cong with the wave of his hands. As the explosions around him would indicate, this is an alternate history in metaphor -- one in which the United States has chosen to use nuclear weapons in Vietnam.

This alternate history in which the U.S. wins in Vietnam doesn't end there, however. The impact of this action is felt back home; sure, the Americans may have won, but the resulting social climate and negative reaction result in a completely decayed and degraded America -- one whose population no longer wants anything to do with "superheroes."

A cynical and pessimistic outlook

And it's not just Dr. Manhattan -- all the Watchmen can be seen as representing the impacts of one-sided weapons technologies. The flame-throwing Edward Blake was right alongside Manhattan in Vietnam, and for good reason. Blake is there to represent the dark and ugly side of America -- a symbol of the arrogant and jaded sentiment that tends to accompany military success. Edward Blake represents the worst of a belligerent and techno-happy America as he runs amok with callous indifference.

The cynicism of the Watchmen doesn't stop there. Even the final outcome of the story, in which millions of deaths are the only way to prevent all-out human extinction, is a sobering look at the powers at our disposal. It's only through the use of apocalyptic-scale technologies that world peace can be secured; it's The Day the Earth Stood Still all over again, but this time with the fist coming down hard.

Doctor Manhattan and the posthuman
"I've walked across the sun. I've seen events so tiny and so fast they hardly can be said to have occurred at all, but you... you are a man. And this world's smartest man means no more to me than does its smartest termite." -- Doctor Manhattan to Ozymandias
Interestingly, the Watchmen commentary goes beyond the advent of nuclear weapons. The discussion extends to the potential emergence of those powers that radically exceed human capacities. Given the incredible scale of Dr. Manhattan's abilities, he can also be seen as a personified instantiation of a posthuman or superintelligent artificial intelligence (SAI). The film explores the ways in which such a power could be an alienating, alienated, and dehumanizing force.

For context, transhumanists and speculative AI theorists consider the possible emergence of an SAI -- an entity with intellectual capacities that are radically more advanced than the human mind (such an intelligence could emerge from an AI or as an outgrowth of a highly modified human brain). Such discussions have evoked images of god-like intelligences capable of reworking human affairs and even the fabric of the Universe itself. Because we lack the proper terminology or frame to envision such an intelligence, many have referred to this potential AI as being 'god-like.'

Doctor Manhattan encapsulates many of these characteristics. He is the reluctant god, one who is largely indifferent to human affairs. Manhattan lives in the quantum universe and does not perceive time with a linear perspective, something that alienates him even further from those around him. Consequently, his interests and intellectual endeavors lead him to a different mind-space altogether; he is primarily concerned with the inner workings and unfolding of the Universe. His ability to relate to humanity recedes with each passing day, almost to the point where he can no longer distinguish between a living and dead human being.

This is a fear raised by some futurists as they worry about the emergence of a poorly programmed or indifferent SAI. Indeed, how and why would an intellect that runs at a radically increased clock-speed and in an expanded or alternative mind-space relate to unaugmented humans? It's an open question. In Watchmen this problem nearly results in human extinction -- not due to the actions of Manhattan, but out of his inaction and apathy.

At one point in the film, Manhattan escapes to Mars so that he can avoid human contact. He does so because he finds personal interaction with humans annoying and a distraction. This is a god who would rather retreat into himself, preferring solitude on Mars where he can ruminate on existence and construct masterful structures.

Eventually Laurie Jupiter convinces Manhattan to come back to Earth and rescue humanity from nuclear armageddon -- but it's on account of an improbable existential quirk that he changes his mind -- the closest he can come to actually caring.

[As an aside, and looking at his inability to empathize and relate to other people, Doctor Manhattan can also be seen as an 'autistic superhero'. He is very much locked in to his inner life, he has a fascination with the minutiae of all that is around him, and he is the beneficiary of prodigious talents. Sounds very autistic to me.]

A defeatist tale?


Despite the seemingly happy ending (even in consideration of the millions of deaths that were required to make it happen), Watchmen leaves the viewer with a profound sense of defeat. Indeed, what kind of a world do we live in if megadeaths are required to keep the peace? Is near-armageddon required to keep us in line? Will a common enemy (climate change, perhaps) unite all people in a common cause?

Or is all this rather pedestrian and grossly over-simplified? And what about the potential benefits that technologies may bring?

These are all questions for discussion at the very least. Watchmen leaves the viewer asking more questions than when they came in -- certainly the sign of a great and provocative film.

Friday, March 27, 2009

The hazards of being a cyborg, or why heart patients should never be allowed to do their own home wiring

Being a cyborg is not all it's cracked up to be -- especially if the wiring in your house is not up to snuff. Case in point is a recent incident in Denmark involving a patient with an implantable cardioverter defibrillator, a shower, and an improperly grounded washing machine (you can see where this is going).

Soon after receiving the device the patient was taking a shower when he experienced a pair of electrical shocks. Obviously this is not supposed to happen, so he returned to the hospital. The physicians were stumped -- there was no apparent physical reason why the device, which delivers a shock to restore normal heart rhythm if an arrhythmia occurs, should have gone off.

But during the analysis the physicians started to suspect that electrical noise had caused an inappropriate ICD discharge. On this hunch the hospital sent an electrician to check the wiring of the patient's house.

Sure enough, the electrician discovered that the washing machine was not properly grounded (the patient had installed it himself) and it was emitting the problematic electrical noise.

Interestingly (or perhaps disturbingly), this is not an isolated case; there have been scattered reports of similar events with heart defibrillators. Back in 2002 cardiologists in Hong Kong reported two such cases -- one caused by electrical signals from a power drill, the other by signals from a washing machine. And German cardiologists described an instance of a defibrillator shock delivered because of electromagnetic signals from, yes, you guessed it, a washing machine (it's becoming clear that washing machines have it in for cyborgs).
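For a sense of how mains noise can fool a defibrillator: the device infers heart rate from the interval between sensed electrical events, and leakage current from an ungrounded appliance oscillates at the mains frequency. If that noise is sensed as cardiac activity, the apparent rate lands far above any fibrillation-detection threshold. The sketch below is illustrative only, with an assumed detection threshold; real ICD algorithms use filtering, blanking periods and multi-beat counters rather than a single interval test.

    # Illustrative only: real ICD detection logic is far more sophisticated.
    MAINS_HZ = 50              # European mains frequency (Hz)
    VF_THRESHOLD_BPM = 200     # assumed ventricular-fibrillation detection rate

    def apparent_rate_bpm(sensed_events_per_second):
        """Convert the rate of sensed electrical events to an apparent heart rate."""
        return sensed_events_per_second * 60

    noise_rate = apparent_rate_bpm(MAINS_HZ)   # 3000 "beats" per minute
    print(f"Apparent rate from mains noise: {noise_rate:.0f} bpm")
    print(f"Exceeds detection threshold? {noise_rate > VF_THRESHOLD_BPM}")
    # A sustained run of such intervals satisfies the rate criterion
    # and can trigger an inappropriate shock.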

It's worth noting that the ICD is a safe treatment provided that all regulations for electrical equipment are followed.

Thursday, March 26, 2009

Should we shout at the cosmos?

David Brin is guest blogging this week.

I am assuming that some of you have gone slumming, and read my piece about METI -- the recent effort by some to transform the Search for ExtraTerrestrial Intelligence into a prolonged and vigorous effort to actively send MESSAGES to such entities.

If you haven't read it, that's okay. I can wait while you read it here.

(Cue elevator muzak. "The Girl from Ipanema." While the rest of us shuffle our feet and avoid staring at each other. Maybe comment on the weather...)

Ah, good, you're back. So, do you think that we little Earthlings ought to start hollering, trying to draw attention from any advanced civilizations out there? Really? We are presumably the youngest and most ignorant race in all the galaxy, like an infant stumbling around in a jungle we do not understand, with no knowledge at all about the situation out there. And yet it behooves US to do the shouting?

As they put it in "Bored of the Rings..." Yoohoo? Beasties? Come and eat us!

Do I really expect slavering hordes of Kardassian invaders? Um, no. In fact, in all of this controversy, I have never, ever expressed fear of alien attack or invasion. Ever. What bugs me is not so much the likelihood of attack, which I deem to be fairly low -- but not zero.

No, it is the profound and cult-like arrogance that has arisen, among those within the extremely narrow, self-referential and inbred SETI community, who no longer even seem to be able to notice the unwarranted assumptions that they make. Indeed, in order never to have those assumptions questioned, they go to great lengths to isolate themselves from colleagues in other branches of science. It is that refusal to even discuss these matters, at wide-open scientific conferences, where their catechisms might be scrutinized by biologists, geologists, technologists and others, that demonstrates how far down the road of cult fanaticism they have gone.

It is a pity, because SETI is a noble undertaking. An expression of the curiosity and expansiveness and eagerness that typifies humanity, at its best. It deserves better than it's getting. It deserves adults. It deserves science.

=======

Next, George wants to talk about...

"Will a transparent society help humanity survive extinction risks?" We live in an age of increasing privacy concerns and along with it the rise of defeatism and cynicism. But a number of thinkers have turned these anxieties on their head by suggesting that a society without privacy is a safe society. David Brin calls this the transparent society, and he believes it’s this kind of openness that will help human civilization get through its most difficult phase yet – and it might just get us past a series of extinction risks and on through to the Singularity."

I never would have predicted, as a youth, that I would grow up to be "Mr. Transparency." Or that so many people would misinterpret my stance as "anti-privacy." (Actually, I love privacy! I just want citizens to know enough so that THEY can defend their freedom and privacy, instead of counting on unreliable elites to do it for them.)

Another immense topic, that I cover at book-length in The Transparent Society: Will Technology Make Us Choose Between Privacy and Freedom? A topic containing vast subtleties and twists and surprises... and shame on you, if you react with just a pat, pablum answer to the quandary, instead of exploring and asking questions, the way a serious citizen would!

A number of my transparency-related articles can be viewed on my web site. For those with little time: A little allegory from The Transparent Society. Of intermediate length: my controversial Salon Magazine privacy article.

Oh... and finally...

One of my ongoing themes has been a 21st Century struggle to empower citizens, after the 20th Century's relentless trend toward the "professionalization of everything." But this may be about to change. For example, an overlooked aspect of the 9/11 tragedy was that citizens themselves were most effective in our civilization's defense, reacting with resiliency and initiative while armed with new technologies (more here).

Yeah, I've spent a LOT of time on all this stuff.... ;-(

Think. Take on and embrace complexity. Dogmas are for slaves and conquistadors.

NS: Building a robot octopus

INVEST €10 million in a robotic octopus and you will be able to search the seabed with the same dexterity as the real eight-legged cephalopod. At least that's the plan, say those who are attempting to build a robot with arms that work in the same way that octopuses' tentacles do. Having no solid skeleton, it will be the world's first entirely soft robot.

The trouble with today's remote-controlled subs, says Cecilia Laschi of Scuola Superiore Sant'Anna in Pisa, is that their large hulls and clunky robot arms cannot reach into the nooks and crannies of coral reefs or the rock formations on ocean floors. That means they are unable to photograph objects in these places or pick up samples for analysis. And that's a major drawback for oceanographers hunting for signs of climate change in the oceans and on coral reefs.

Because an octopus's tentacles can bend in all directions and quickly thin and elongate to almost twice their length, they can reach, grasp and manipulate objects in tiny spaces with extraordinary dexterity.
More.

Taste the future in your mouth

Carlton Natural Blonde beer ad.

Via io9. Blame them.

Wednesday, March 25, 2009

Singularity? Schmingularity? Are we becoming gods?

David Brin is guest blogging this week.

...greeted with hand-rubbing glee by fellows like Ray Kurzweil and the "extropians" who foresee transformation into higher, smarter, and more durable kinds of beings.

Needless to say, many people have ambivalent feelings about the Singularity. As I describe in the essay, “Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future", some fear the machines will stomp on their makers. Or else crush our pride by being kind to us, the way we might pat a dog on the head.

Others feel that humanity may get to come along, accompanying our creations through the wild ride toward godhead, as I illustrate in one of the few post-singularity science fiction stories, "Stones of Significance."

(At the same site see other short stories, plus the provocative "Do we really want immortality?")

Meanwhile, others urge that we reject the coming changes, or else claim that we'll have no choice. That this Singularity thing will turn out to be a failed dream, like all the other promises of transcendence that were sung about by previous generations of mystical romantics.

Indeed, one thing about all this fascinates me -- that personality generally overrides culture and logic and reason. More and more, we are learning this. Somebody who would have been a grouch 500 years ago is likely to be one today. The kind of person who would have been a raving transcendentalist in Roman days, foretelling a God-wrought ending time - either in flames or paradise - would today be among those who prophesy either world destruction or redemption... by means of science. The envisioned means change, but the glorious visions of doom or glory do not.

Oh, what is a pragmatic optimist to do? We are beset by exaggerators! When what we need is moderate, step-by-step action... adamant, radical, even militant moderation! Progressively pursuing all the good things without allowing our zealotry to blind us to the quicksand and minefields along the way. Simplistic dogmas are dumb, whether they are political or techno-transcendentalist. It is pragmatists who will be best suited to negotiate with the rising AI entities. And it will be those who emphasize decency, not dogma, who teach the new gods to be pleasant. To be people.

And that's a VERY brief commentary on perhaps the greatest issue of our time. Wish I had more time. But I'll be commenting further from time to time, at CONTRARY BRIN.

====

Oh, for some cool recent science fiction about the near future, see my stories "Shoresteading" and "The Smartest Mob."

=====

NEXT... George says: "A number of years ago, David Brin contacted me to bring me up to speed on his efforts to raise awareness about the active SETI approach, also known as METI (messages to extraterrestrial intelligences). Brin argues that human civilization is not ready to call attention to itself – at least not yet -- and that we should engage in a broader discussion before doing so.

Brin writes,
'Let there be no mistake. METI is a very different thing than passively sifting for signals from outer space. Carl Sagan, one of the greatest SETI supporters and a deep believer in the notion of altruistic alien civilizations, called such a move deeply unwise and immature....

'Sagan — along with early SETI pioneer Philip Morrison — recommended that the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand.'
"Brin invited me to join a closed discussion group where this issue is examined and debated. The purpose of the exercise is to not just think more deeply about this issue, but to also raise awareness and possibly prevent a catastrophe (alien invasion perhaps?). Essentially, Brin argues that METI needs to be strongly considered before any group or individual takes it upon themselves to shout out to the heavens. He is particularly concerned how some groups, including SETI, are dismissive of his concerns. His fear is that someone will unilaterally decide to start transmitting messages into the depths of space.

'I was unsure at first about whether or not I should join this group. As a contact pessimist I’m fairly certain that the fear about a METI approach is unwarranted -- not because ETI's are likely to be friendly, but because no one's listening. And even if they are listening, there's nothing we can do about it; any advanced ETI that's on a search-and-destroy mission would likely have the 'search' aspect figured out. I'm not sure how any civilization could hide in the Galaxy. Consequently, METI is somewhat of a non-issue in my opinion.

'That being said, however, I did reach the conclusion that there is a non-zero chance that we could run into trouble should we change our approach from listening to messaging. For example, resident berserkers could be waiting, for what ever reason, for this sort of change in our radio signals. Perhaps they are waiting for a sign that we've passed a certain developmental threshold.

'I think this argument is extremely weak and improbable, but it's not impossible; it should not be ruled out as a potential existential risk.

'Which leads me to the precautionary principle. Since no one is listening, there is no harm in not sending messages out into the cosmos. Again, if a friendly ETI wanted to do a meet-and-greet, they should have no trouble finding us. But because there is the slim chance that we may alert a local berserker (or something unknown), we should probably refrain from the METI approach for the time being."


Thoughts? Don't leap to conclusions! Read up: "Shouting at the Cosmos."

Mondolithic Studios: Global Super Organism [image]

This is a concept sketch from the guys at Mondolithic Studios. It was done for Focus Magazine who were looking for an image that represented the emergence of a true ‘global’ intelligence:
Global society can be defined by incorporating concepts from cybernetics, evolutionary theory, and complex adaptive systems and can therefore be seen as a network of self-producing components, and therefore as a living system or “superorganism”. A superorganism is a higher-order, “living” system, whose components are organisms themselves. (in this case, individual humans and their technology).
More.

Tuesday, March 24, 2009

Brin #2: Thoughts on the Singularity

David Brin is guest blogging this week.

Again, thanks George for inviting me to participate. Any of you who wish to pursue me with questions and issues can find me at my own blog, CONTRARY BRIN.

The commentators last time, alas, seemed smugly dismissive of a concept (uplift) that surely SOME of humanity will zealously pursue, in the next generation. Their blithe shrugs -- e.g. "why would anyone want to do this?" and "What's the benefit?" -- are genuinely good questions, but only if posed by people who actually try to answer them first!

Seriously, that is how you engage an issue. You paraphrase what you expect your opponents' BEST arguments might be, before knocking them down. In the case of Uplift, there are so many obvious reasons to try it -- such as the inherent human curiosity, gregariousness and hunger for diverse voices. A hunger expressed in science fiction, but rooted in the exogamous mating impulse and the ever-present yearning to acquire allies far beyond the boundary of the tribe.

If there aren't aliens, then building our own sounds cool. Anyway, how better to see our human assumptions questioned than by expanding our tribal circle to include new perspectives? Even if neodolphins and neochimps were only partly uplifted toward human thought modalities, they would inherently bring with them ways of viewing the world that are different from ours, and that might inform our art, our science and our philosophy, or even help us spot many of our false assumptions and mistakes.

Anyway, sapience is clearly HARD. Earth only achieved it once. (And if you hold with the hoary old mythology that dolphins already have it, can you offer a scintilla of proof? If they are our equals, how come we're the only ones trying?) Me? As I expressed in my novel EARTH - Mother Gaia would probably do well to have more than one caretaker species to serve as frontal lobes. Complexity can equal wisdom.

These are among many reasons TO do uplift. And I am ornery and contrary enough to perceive some flaws in them, myself! All of them are answerable. But the point is that smug dismissers of a concept ought to at least play fair and move their minds across the natural and obvious opposing arguments, paraphrasing and proving they are familiar enough with them, before using real logic to knock them down.

We deserve better thinking... certainly if we're going to be a species that deserves to do uplift.

=====

On to the next topic... George says:

The Technological Singularity describes a future nexus point when the capacities of an artificial intelligence (or a radically augmented human) exceed those of humans. It is called the “Singularity” because it is impossible to predict what will follow such an event. A Singularity could usher in an era of great wisdom, prosperity and happiness, or it could result in the end of the human species.

David Brin believes that we are likely en route to a Singularity, but that its exact nature cannot be known, nor is such an event inevitable. In his article, “Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future,” Brin posits four different possibilities for human civilization later this century:

1. Self-destruction
2. Positive Singularity
3. Negative Singularity
4. Retreat

Brin, in a personal email to me, recently wrote, “[My] singularity friends think I am an awful grouch, while my conservative friends think I am a godmaker freak.” Indeed, Brin has expressed skepticism at the idea of a meta-mind or a Teilhard de Chardin apotheosis, while on the other hand he hasn’t shied away from speculations about transcendent artificial intelligences who shuffle through the Singularity without a care for their human benefactors.


A fascinating -- and HUGE topic... and I'll let folks click over to that essay in order to get up to speed on the range of astounding futures that may be involved.

Tomorrow we can nibble at the edges of a singularity!

With cordial regards,

David Brin

Uplifting animals? Yes we should


Definitely feeling an anti-uplift vibe in the comments section and in personal emails; at the very least it seems people are a bit 'meh' about the whole thing.

Funny -- for my leftie, vegetarian, animal rights leaning transhumanist comrades this is somewhat of a no-brainer. Makes me wonder what kind of ideological underpinnings exist that can predetermine one's position on the matter...

But what's with the animal exclusionism?

Why should only human persons be uplifted to a postbiological condition? Assuming we get to a posthuman, post-Singularity state, does it really make sense to leave the natural world exactly as it is? I thought the whole point of this futurist exercise was to figure out ways to rework the entire ecosystem such that we can finally retire the autonomous process of natural selection and all the pointless suffering therein. Given the advent of postbiological space, what would be the point of continuing to allow the existence of biological creatures who have to wallow and struggle through the slime?

Moreover, I've never suggested that we augment dolphins and elephants so they become post-dolphins and post-elephants. I make it very clear in my paper (which it appears most people haven't bothered to read) that the uplift exercise is more radical than people think:
A future world in which humans co-exist with uplifted whales, elephants and apes certainly sounds bizarre. The idea of a United Nations in which there is a table for the dolphin delegate seems more fantasy than reality. Such a future, however, even when considering the presence of uplifted animals, may not turn out just quite the way we think it will.

Intelligence on the planet Earth is set to undergo a sea change. Post-Singularity minds will either be manifest as cybernetic organisms, or more likely, as uploaded beings. Given the robust nature of computational substrate, intelligence is set to expand and diversify in ways that we cannot yet grasp, suffice to say that postbiological beings will scarcely resemble our current incarnation.

In this sense, “postbiological” is a more appropriate term than “posthuman”. The suggestion that posthumans will live amongst post-apes and post-elephants misses the point that a convergence of intelligences awaits us in our future. Our biological heritage may only likely play a very minor part in our larger postbiological constitution – much like the reptilian part of our brain does today in terms of our larger neurological functioning.

And like the other sapient animals who share the planet with us, and with whom we can claim a common genetic lineage, we will one day look back in awe as to what was once our shared biological heritage.
I hope this clarifies things and sets a more expansive vision of what I have in mind when I say uplift. And what is meant by a post-Singularity ecosystem.

As for the morality of the whole thing and the issue of obligations, again I would direct readers to my paper. But in summary, uplift technologies represent a primary good in the Rawlsian sense. So it becomes an issue of social justice once all persons are included -- human or otherwise (and if you can't accept the fact that not all persons are humans, well then I'm surprised you find any value to my blog). Nonhuman persons have a right to these technologies and it is our obligation as the most capable and informed members of the larger social community to make them available.

And by using Rawls's notion of the original position, we can assume consent; as a thought experiment, if you had the choice of being born as a radically advanced postbiological entity or a bonobo in the jungle, you would undoubtedly choose the former.

Animal uplift is an important issue -- one that touches upon everything from animal welfare and social justice right through to our most fantastical futurist visions. It may be a highly philosophical and speculative line of inquiry today, but the day is fast coming when this will become a very relevant issue.

Singularity visions

Guest blogger David Brin is next scheduled to blog about the Singularity -- a future nexus point when the capacities of an artificial intelligence (or a radically augmented human) exceed those of humans. It is called the “Singularity” because it is impossible to predict what will follow. A Singularity could usher in an era of great wisdom, prosperity and happiness (not to mention the posthuman era), or it could result in the end of the human species.

David Brin believes that we are likely en route to a Singularity, but that its exact nature cannot be known. He also doesn't believe that such an event is inevitable. In his article, “Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future,” Brin posits four different possibilities for human civilization later this century:
  1. Self-destruction
  2. Positive Singularity
  3. Negative Singularity
  4. Retreat (i.e. neo-Luddism)
Brin, in a personal email to me, recently wrote, “[My] singularity friends think I am an awful grouch, while my conservative friends think I am a godmaker freak.” Indeed, Brin has expressed skepticism at the idea of a meta-mind or a Teilhard de Chardin apotheosis, while on the other hand he hasn’t shied away from speculations about transcendent artificial intelligences who shuffle through the Singularity without a care for their human benefactors.

Stay tuned for David's elaboration on these and other points.

Monday, March 23, 2009

Will we "uplift" animals to sapiency?

David Brin is guest blogging this week.

Greetings, oh developing sentient beings! Let me thank George Dvorsky for this opportunity to chatter with his blogizens and answer a few questions -- or else face some of the "reciprocal accountability" of which I am supposedly some kind of champion.

George selected a range of interesting topics, upon which I've arrogated opinions in the past. All are passionately interesting! Though I must keep each day's involvement here quite brief. Alas, life has become frenetic, with speeches and consulting work, my new inventions, and three active kids (the biggest project of all!). Because of this, I am forced to draw some lines, if only to save some time for writing!

So, for starters, those of you who don't know me can view a brief bio at-bottom...or else here. Also see my profile as a public-speaker/pundit. (I'll be appearing in DC and Phoenix, across the next few months.)

Let's begin.

George intro'd today's topic:
Biological uplift describes the act of biologically enhancing nonhuman animals and integrating them into human and/or posthuman society. There is no reason to believe that we won’t some day be able to do so; the same technologies that will someday work to augment the human species could also be applied to other animals. The big questions now have to do with whether or not we should embark on such a project and how we could do so in an ethical and responsible manner.

Recently on his blog, David Brin wrote, “[See] Developmental and ethical considerations for biologically uplifting nonhuman animals,” by George Dvorsky... opining that we humans will soon attempt what I described 30 years ago, when I coined “uplift” in several novels that explored the concept from many angles. George's fascinating paper might have benefited from more on the sfnal history of the idea. Before me, HG Wells, Cordwainer Smith, and Pierre Boulle depicted humans endowing animals with powers of intelligence and speech - though always in a context of abuse and involuntary servitude. Indeed, those cautionary tales may have helped ensure that it will be done openly and accountably, hence qualifying the tales as "self-preventing prophecies." Allowing me to be the first to ponder "what if we tried to do uplift ethically and well?"
All right. I am not a biologist. My training was in astrophysics and electrical engineering. But, as a science fiction author, I feel liberated to explore any topic, especially if I can gain access to real experts using pizza and beer! Hence, I got to know some people working in research on dolphins and apes. I was also, for a year, the managing editor of the Journal of the Laboratory on Human Cognition (UCSD). So (perhaps arrogantly) I felt free to speculate about humans modifying animals to make them intelligent partners in our civilization.

There are so many issues here.

1. Can we replicate - in other creatures or in AI - the stunning way that Homo sapiens outstripped the needs of mere hunter-gathering, to reach levels of mentation that can take us to other planets and invent symphonies and possibly destroy the world? That was one hell of a leap! In Earth I speculated about half a dozen quirky things that might explain that vast overshoot in ability. In my next novel Existence I speculate on a dozen more.

In truth, we just don't know. I frankly think it may be harder than it looks.

2. SHOULD we do such a thing, say, to dolphins or chimps? If someone tried to, they would be hounded and bombed by animal rights people. Even though - if the attempt were successful - the descendants of such apes or cetaceans would be glad it happened. Of course, there would be pain, along the way.

3. That pain and controversy was why I felt I could avoid the simplistic "idiot plot" that sucked in almost every other Uplift author, from Wells to Boulle. The notion that we would abuse or enslave such creatures has some deep metaphorical resonance -- and during a long transition they would not be our peers. But as a goal? A reason to create new beings? It really is kind of pathetic, as are the simplistic tales.

I wanted, instead, to explore what might happen if we took on such a challenge with the BEST of intentions! Wouldn't the new species have problems anyway? Problems that are much more subtle and interesting than mere oppression?

My own artistic fetish is always to show the New Thing being done openly, with all systems of accountability functioning and civilization and citizens fully engaged, aware and intelligently involved. The reason I do this is simple... because absolutely nobody else writing fiction or movies today EVER does that. Ever. At all. Hence, making that assumption always leads in refreshing and original directions.

4. Artistically, of course, it is wonderful to work with characters who come from an uplifted species. I get to stretch my imagination, and the reader's, exploring what sapient dolphins or chimps might feel and think, under the pressure of such development, tugged between both the ancient instincts of their forebears and the new template being imposed upon them by their "patrons."

And that will have to do. I welcome feedback & questions, but there's so little time. If you feel I've neglected you or if you have more to say, feel free to drop in at my own blog CONTRARY BRIN.

With cordial regards,

David Brin

David Brin’s bestselling novels, such as EARTH and KILN PEOPLE, have been translated into more than 20 languages. THE POSTMAN was loosely KevinCostnerized in 1997. A scientist and futurist, Brin speaks and consults widely about over-the-horizon social and technological trends. THE TRANSPARENT SOCIETY won the nonfiction Freedom of Speech Award of the American Library Association.

Video of dolphins blowing bubble rings [amazing]


I knew that dolphins engage in this kind of behavior, but this is the first time I've actually had a chance to see it. This is so beautiful -- and a powerful indication that dolphins are the highly intelligent and creative creatures we've always imagined them to be. This video nearly brought me to tears.

NS: Fears over 'designer' babies leave children suffering

A California fertility clinic recently withdrew its offer of 'designer' babies after facing a storm of criticism (yes, this is the same fertility clinic I blogged about back in February). An OpEd from New Scientist claims this is symptomatic of a deeper societal problem, namely the misplaced taboo against genetic manipulation:

Such fears are misplaced: IVF-PGD is little use for creating designer babies. You cannot select for traits the parents don't have, and the scope for choosing specific traits is very limited. What IVF-PGD is good for is ensuring children do not end up with disastrous genetic disorders.

Nearly 150 years after Darwin unveiled his theory of evolution, we have yet to grasp one of its most unsettling implications: having diseased children is as natural as having healthy ones. Every new life is a gamble, an experiment with novel gene combinations that could be a brilliant success or a tragic failure.

Thanks to technology, we are no longer entirely at the mercy of this callous process. Rather than regarding this ability with suspicion, we should be celebrating it and encouraging its use. Instead, we continue to allow children to be born with terrible diseases because of our collective ignorance and superstition.

Entire article.

A formative moment: Superman 3 and the 'robot scene'


I can't even begin to tell you how disturbing this was to me as an impressionable 13-year-old. It literally gave me nightmares and I had a hard time shaking it off. It was my first encounter with the suggestion that our minds and flesh could intertwine with our technologies.

This image, that of a person being forcibly turned into a controllable machine, has stuck with me ever since; it was, in retrospect, an undeniably potent formative experience. Even as a teenager, the thought of having synthetic components work in conjunction with and override our biological functions was all too plausible.

Perhaps it is this nightmarish vision that has drawn me to transhumanism, and with it a strong desire to see these technologies work in our favor rather than against us.

Kurzweil: When minds merge with machines

Scientific American has republished an article by Ray Kurzweil about the coming merger of minds with machines. Excerpt:

Sometime early in this century the intelligence of machines will exceed that of humans. Within a quarter of a century, machines will exhibit the full range of human intellect, emotions and skills, ranging from musical and other creative aptitudes to physical movement. They will claim to have feelings and, unlike today’s virtual personalities, will be very convincing when they tell us so. By around 2020 a $1,000 computer will at least match the processing power of the human brain. By 2029 the software for intelligence will have been largely mastered, and the average personal computer will be equivalent to 1,000 brains.

Once computers achieve a level of intelligence comparable to that of humans, they will necessarily soar past it. For example, if I learn French, I can’t readily download that learning to you. The reason is that for us, learning involves successions of stunningly complex patterns of interconnections among brain cells (neurons) and among the concentrations of biochemicals known as neurotransmitters that enable impulses to travel from neuron to neuron. We have no way of quickly downloading these patterns. But quick downloading will allow our nonbiological creations to share immediately what they learn with billions of other machines. Ultimately, nonbiological entities will master not only the sum total of their own knowledge but all of ours as well.

As this happens, there will no longer be a clear distinction between human and machine. We are already putting computers—neural implants—directly into people’s brains to counteract Parkinson's disease and tremors from multiple sclerosis. We have cochlear implants that restore hearing. A retinal implant is being developed in the U.S. that is intended to provide at least some visual perception for some blind individuals, basically by replacing certain visual-processing circuits of the brain. A team of scientists at Emory University implanted a chip in the brain of a paralyzed stroke victim that allowed him to use his brainpower to move a cursor across a computer screen.

In the 2020s neural implants will improve our sensory experiences, memory and thinking. By 2030, instead of just phoning a friend, you will be able to meet in, say, a virtual Mozambican game preserve that will seem compellingly real. You will be able to have any type of experience—business, social, sexual—with anyone, real or simulated, regardless of physical proximity.

Read the entire article.
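As a quick check on the growth rate implied by the excerpt: going from roughly one brain-equivalent per $1,000 computer around 2020 to roughly 1,000 brain-equivalents per PC by 2029 implies about ten doublings in nine years. The figures below are Kurzweil's claims, not mine; the sketch just works out the implied doubling time.

    import math

    # Implied doubling time from the two data points in the excerpt.
    start_year, start_brains = 2020, 1      # ~1 brain-equivalent per $1,000 PC (claimed)
    end_year, end_brains = 2029, 1000       # ~1,000 brain-equivalents per PC (claimed)

    doublings = math.log2(end_brains / start_brains)   # ~10 doublings
    years = end_year - start_year
    doubling_time_months = years / doublings * 12

    print(f"Implied doublings: {doublings:.1f}")
    print(f"Implied doubling time: {doubling_time_months:.1f} months")
    # Roughly one doubling every ~11 months -- faster than the classic
    # 18-24 month Moore's law cadence.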

Sunday, March 22, 2009

Monday's word of the day is: Uplift

As previously noted, David Brin will be guest blogging on Sentient Developments this week. The first topic that David will be addressing is one that is near and dear to both of our hearts: biological uplift. To get you primed for this discussion I can recommend a number of articles, books and resources.

First, check out the Wikipedia entry on biological uplift (although this entry could use a lot of work).

Second, there's my paper from a few years back, "All Together Now: Developmental and ethical considerations for biologically uplifting nonhuman animals." My basic argument is that we should strongly consider the inclusion of nonhuman animals into postbiological space. The more the merrier, I say.

Third, be sure to check out (or review) David's seminal work on the matter from a fictional perspective, namely his Uplift Series. Books in this collection include Sundiver, Startide Rising, The Uplift War, Brightness Reef, Infinity's Shore and Heaven's Reach.
It's also worth thinking about the proto-uplift classics, namely H.G. Wells's The Island of Doctor Moreau (1896) and Olaf Stapledon's Sirius (1944).

Lastly, check out some of the work done by Sue Savage-Rumbaugh and the Great Ape Trust. Just to be clear, Sue is not an advocate of biological uplift, but the work that she does integrating bonobos into non-traditional living environments and in comprehending their language and culture speaks directly to this issue; there's a very fine line between cultural and biological uplift. For starters, check out the article, "Sue Savage-Rumbaugh on the welfare of apes in captivity." Also be sure to check out the work of the Great Ape Trust.

And while we're on this topic: please support the work done by the Great Ape Project and advocate for the inclusion of great apes into the personhood spectrum.

Thursday, March 19, 2009

Sandberg on transitioning to the posthuman

From Anders Sandberg's blog, Andart:

I think the key idea of the posthuman in the sense I use it is simply this: we can change the human condition *a lot* in the near future. Not through some gradual natural change, and not necessarily through a deliberate decision to go this or that way. That forces us to take stock of the current human condition and seriously consider what we think we ought to keep and what could go. It also makes vivid the contingency of our current state: it is the relativisation not just of our culture, but our species.

The things that still make sense in the light of such relativisation are going to be very robust and important principles. But most of life is about things that could be different - we could have different societal arrangements, different structures of life, different motivational systems, different arts and entertainment etc. The big principles may set the boundaries and aims for whole societies and civilizations, but the stuff that makes them worthwhile to live in will be pretty arbitrary, cultural and mercurial. We need the universality of human rights (or, rather, person rights) and structures like open democratic societies to flourish, but the kinds of flourishing people are doing are going to be manifold, unexpected and often controversial.

More

David Brin guest blogging here next week

Science fiction writer, scientist and renowned futurist David Brin will be guest blogging here on Sentient Developments next week.

Brin is a best-selling author whose future-oriented novels include Earth and Hugo Award winners Startide Rising and The Uplift War (a part of the Uplift Series -- and yes, he coined the term).

He is also known as a leading commentator on modern technological trends. His non-fiction book, The Transparent Society: Will Technology Force Us To Choose Between Privacy And Freedom?, won the Freedom of Speech Award of the American Library Association. Brin consults and speaks for a wide variety of groups interested in the future, ranging from Defense Department agencies and the CIA to Procter & Gamble, Google and other major corporations. He has also been a participant in discussions at the Philanthropy Roundtable and other groups seeking innovative problem solving approaches.

There's a lot of simpatico between Brin's work and my own, so his contributions will be right at home here. David will be writing about biological uplift, the Singularity, Active SETI (messages to extraterrestrial intelligences), and how a transparent society might work to help us mitigate catastrophic risks.

You can follow David's blog at Contrary Brin. Be sure to check out his home page.

Wednesday, March 18, 2009

The Cajun Crawler


Who needs wheels when you have, uh, legs? Lots of legs.

Enter the Cajun Crawler -- a project completed for the Fall '08 semester at the University of Louisiana. The scooter was inspired by Theo Jansen's leg mechanism (see videos below). According to the developers, the Cajun Crawler's legs are made of standard 5052 aluminum and the joints all contain deep-groove ball bearings; smooth as silk.
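For the mechanically curious, here's a rough Python sketch of the core calculation behind a Jansen-style pin-jointed leg: every free joint sits at the intersection of two circles whose radii are the fixed link lengths, so tracing the mechanism through a crank rotation is just repeated circle-circle intersection. The pivot positions and link lengths below are made-up placeholders, not the Cajun Crawler's or Jansen's actual proportions.

import math

def circle_intersection(p0, r0, p1, r1, pick_upper=True):
    # One intersection point of two circles (centers p0, p1; radii r0, r1).
    # In a pin-jointed linkage, this is where a free joint must sit given the
    # positions of its two neighbouring joints and the two rigid link lengths.
    x0, y0 = p0
    x1, y1 = p1
    d = math.hypot(x1 - x0, y1 - y0)
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        raise ValueError("links cannot close in this configuration")
    a = (r0 ** 2 - r1 ** 2 + d ** 2) / (2 * d)  # distance from p0 along the center line
    h = math.sqrt(max(r0 ** 2 - a ** 2, 0.0))   # perpendicular offset to the joint
    xm = x0 + a * (x1 - x0) / d
    ym = y0 + a * (y1 - y0) / d
    s = 1.0 if pick_upper else -1.0
    return (xm - s * h * (y1 - y0) / d, ym + s * h * (x1 - x0) / d)

# Toy example: trace one joint of a crank-and-follower pair through a full turn.
crank_pivot = (0.0, 0.0)                    # fixed point the crank spins around
frame_pivot = (-3.8, -0.8)                  # second fixed frame point
crank_len, link_a, link_b = 1.5, 5.0, 4.1   # placeholder link lengths

for step in range(8):
    theta = 2 * math.pi * step / 8
    crank_tip = (crank_pivot[0] + crank_len * math.cos(theta),
                 crank_pivot[1] + crank_len * math.sin(theta))
    joint = circle_intersection(crank_tip, link_a, frame_pivot, link_b)
    print(f"crank {math.degrees(theta):6.1f} deg -> joint at ({joint[0]:.2f}, {joint[1]:.2f})")

Chain that same intersection step through each link in Jansen's design and you get the flat, gliding foot path that makes these walkers look so eerily organic.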

As noted, the device was inspired by Theo Jansen's kinetic sculptures:


Tuesday, March 17, 2009

I want: Vibram Five Fingers



The Vibram Five Fingers for barefooting sports. These would be fantastic for CrossFit.

Harmonic 313: When Machines Exceed Human Intelligence

I have to get this Harmonic 313 album just based on the title alone.

Transcription: Risks posed by political extremism

Below is a transcription of the talk I gave last year at the IEET's symposium on Building a Resilient Civilization. The title of my presentation was: "Democracy in danger: Catastrophic threats and the rise of political extremism."

If you don't want to read the entire transcription you can always read my summarized version, "Future risks and the challenge to democracy," or just watch the video.

Many thanks to Jeriaska for putting this together.

The world’s democracies are set to face their gravest challenge yet as viable, ongoing political systems. George Dvorsky, who serves on the Board of Directors of the Institute for Ethics and Emerging Technologies and Humanity+ and blogs at Sentient Developments, presents on how, given these high-stakes situations, democratic institutions may not be given the chance to prevent catastrophes or deal with actual crises.

Risks Posed by Political Extremism

[Slide 1]

We have had some great ideas today. Jamais, Mike, Martin and others have put together some concrete ways in which we can use the institutions that we have at our disposal and go about dealing with existential risks in a way where we can still live as civilized human beings and not have our lives diminished appreciably.

What I am going to be speaking about today though is another path that we could take. What if we do not set up a resilient civilization? Aside from the obvious result of there being a global-scale catastrophe or outright human extinction, there is the path down to political extremism, which might be a natural consequence of the emergence of existential risks in the first place. There is a double-edged sword here.

[Slide 2]

That is essentially the theme at the base of this discussion. Throughout the 20th century there were a lot of political perturbations and restructurings, largely driven by the maturation of the nation-state and industrial economies. These states had to figure out very quickly how to manage a civilization and redistribute wealth. You had a number of ideological forces coming into play to argue this exact point. It explains a lot of the tensions that did occur in the 20th century, most dramatically in the form of totalitarianism. Meanwhile, the democratic nations, which resisted this radicalism, were working to develop the welfare state and put Keynesian economics into practice. Not everyone had to fall into these radical patterns.

Looking ahead into the 21st century, the politics and restructuring of our political institutions will be driven by the demands of mitigating existential risks. In particular, managing the impacts of disruptive technologies and the threats posed by apocalyptic-scale weapons and ongoing environmental degradation are subjects we absolutely do have to talk about.

This restructuring is already happening. We are living in a post-9/11 world. That was an example of “superterrorism”: a rather devastating event that impacted not just those immediately involved but also our psychologies and societal sensibilities. We have seen what has happened in the seven years since then. A reaction is happening. Looking into the future, we can certainly anticipate that there will be more of this.

Even the term “existential risks” is slowly starting to seep its way into the popular vernacular. Some of you might have caught that during the second debate between John McCain and Barack Obama, McCain actually used the term, which caught me by surprise. He was not, of course, speaking of humanwide extinction. He was speaking specifically about the state of Israel and its current situation, as it is being confronted with what it perceives to be an existential risk in the form of a nuclear attack from a rather belligerent Iran. I expect this term not only to enter our parlance more frequently but to seep into public policy in a very real way, in terms of our institutions and the ways in which we react to these threats.

[Slide 3]

This is a three-part presentation. First: existential risks will change the political landscape. There is an ever-growing multiplicity of threats, and the next generation of threats is on the horizon; today, many of us have spoken about what exactly those are going to be. What is worse is that there is an increased chance of unchecked development and proliferation. Given the nature of information technology today and the access to information, there is this increased threat that anybody, given the right information and the right resources, can put these threats together. Further compounding that is an increased sense of motivation among some groups, whether at the individual level or the state level, to put these weapons into practice.

[Slide 4]

We are collectively speaking here about global catastrophic risks. Such an event does not result in humanwide extinction, but it is of such devastation that each person on this planet will be impacted in some way. When that happens, we will overnight be put into a reactive state. That will be an order of magnitude beyond anything that has happened since World War II, in terms of our entire structure having to be based around this reaction.

Another issue is that, as we are dealing with these catastrophes, politicians may lose faith in the kinds of remedies we are trying to articulate today. They may think that we are naive and that a rather heavier hand is required. They may lose faith in the ability of democracies to deal with these particular problems and look for more extreme measures and more draconian answers.

[Slide 5]

The future may unfortunately not be exactly what we had hoped for. At the end of the Cold War, you had this rather optimistic sense that things were finally going to change for the better. You had this feeling that Western liberal democracy was in the process of triumphing and that free market capitalism was about to envelop the world; you had all this talk about the end of history and a new world order. However, this has not been the case. The last fifteen years have been replete with violence and have arguably resulted in a more unstable world from a geopolitical perspective. We have hardly reached the sense of a new world order or an end of history.

[Slide 6]

In the 21st century, with the introduction of apocalyptic threats, it will not be politics as usual. These times are going to call for more drastic measures. The mere presence of existential risks will result in more political extremism. This is a self-reinforcing feedback loop: those who feel persecuted will have an added impetus to use weapons of mass destruction.

What kind of challenge does this pose to democracy? Democracy is still the exception and not the rule. Right now, according to some surveys, only 45 to 48% of the world can be classified as truly free, and that figure was down around 35% as recently as 1973. Speaking of cognitive biases today, we may actually be the victims of an ideological bias: we have this idea that democracy is here, and here to stay. This may not in fact be the case.

We may actually be living in a rather extraordinary time in human history, one in which the social and technological situation has reached an equilibrium that allows for the strong democracies and freedoms we currently enjoy. We may be entering a period of social disequilibrium in which democracies simply will not be able to withstand the pressure of existential risks. A big part of what I am speaking about today is the perceived need for extremism. There will be an unprecedented need for social control. That does not mean merely looking at people and wondering what they are doing, but getting them to do something or not to do something.

That control can come not only in anticipation of a disaster, getting individuals to work to prevent it from happening, but also in response to an actual catastrophic event on the scale of World War II, where suddenly everyone in society is mobilized to participate in disaster recovery.

[Slide 8]

The second part is defining and anticipating political extremism. What do we mean by political extremism? It is a relative term with no fixed political baseline. It can be used to describe the actions or ideologies of individuals or groups that are outside of a perceived establishment. The views and actions of perceived extremists are typically contrasted with what we would consider moderate opinion. It is also used to describe those groups who violate the sense of there being a common moral standard. Again, it does not have to be a group of individuals outside society, but could be members of your own government.

[Slide 9]

Extremists can direct their angst either internally, within their own group, or outside of the confines of the state, or both. They will often be accused of advocating violence against the will of society, and their actions are often considered beyond what is necessary. Some might consider the actions of the outgoing administration of the United States, such as the introduction of the Patriot Act and warrantless wiretapping, to be extreme measures beyond what is necessary. The term is almost always used as a pejorative. No one declares themselves to be an “extremist.”

[Slide 7]

What will give rise to various forms of extremism? When times are good, you are not going to have agitation. You are not going to have calls for more radical political action. When you have economic, environmental and civil strife, that is when things get churned up. There is an old revolutionary credo that the worse things are, the better; that is the only time when people are going to be willing to do something about their situation. Extremism arises at the state level when the ends are felt to justify the means, when there is seen to be just cause to implement policies that reduce our civil liberties.

[Slide 10]

Radicalism begets radicalism. When fascism emerged in 1930s Europe, arguably it did so as a reaction to Bolshevism and the threat of collectivism. You had radicalism establish itself in one spot, and others freaking out about it and deciding to embrace radicalism to counter it. Given the prescriptions for how we will address existential risks in the 21st century, you could see the same bipolar stratification happening with different forms of radicalism. I cannot speak to them specifically right now, but perhaps later when I go over the particular threats themselves. You could envision a radical progressive group countered by a radical luddite group, for example, as they gain political power.

[Slide 11]

There is the issue of future shock, as well. The idea that accelerating change will upset a lot of our psychological sensibilities is something that Émile Durkheim referred to as “anomie.” Because things are changing so quickly, the public does not really know what to grab onto. Their footing is lost, social norms are changing. Today, of course, we are feeling it through such things as gay marriage. Social changes are happening with greater rapidity than they have in the past. I look at things going on at Google with a certain degree of reverence because I simply cannot imagine how they are piecing things together.

How the Nazis took advantage of this in the 1930s was by appealing to people’s sensibilities and nostalgia. While on the one hand they tapped into what technology had to offer, on the other they offered people comfort in values they could relate to. Accelerating technological change can also give rise to a call for radical action. You also need a psychologically primed populace. A catastrophic event will create a populace that is hysterical, primed for a strong central authority figure to tell them what to do and how to do it. The regime will take advantage of that and scare the populace with threats in hopes of keeping them under control that way. This is a common political tactic. A psychologically primed populace will be both welcoming and supportive of a regime that comes in and takes away those civil liberties.

Beyond the conditions I went over on the previous slide, there are drivers that will shape 21st century politics. How are we going to restructure our politics such that we avoid widespread catastrophes and human extinction? We are going to need our political institutions to deal with the management of ongoing disasters. Right now what comes to mind are the environmental disasters that appear to be looming on the horizon, but one could also imagine a pandemic of some sort, or a nano-disaster that falls short of an existential threat. Meeting the demands of managing these disasters and dealing with disaster recovery is going to be cost-prohibitive, to say the least. As usual, the economy is going to be a huge issue in the 21st century.

There are some secondary drivers as well that may again lead to more instability, namely the emergence of disruptive technologies. These will have profound socio-economic effects that have been somewhat outside the bounds of this symposium. There is nanotechnology, for example, and what it will mean for the manufacturing sector. There is robotics and what it will mean for unemployment. Then there is the restructuring of society, on the scale of the previous industrial revolution, that will happen because of AI.

[Slide 12]

Another possibility, one that could be considered extreme, is do-nothingism: denial, underestimation and circumvention. Today we are seeing corporate interests engaged in obfuscation and disinformation; for their own selfish interests they will see the world burn. There is also political self-preservation: scared politicians who are simply afraid to do anything may only be interested in maintaining the status quo. It might also be an issue of human nature. We are often victims of our own psychologies, of denial or our inability to grok probabilities, and so on. We may fail to realize a threat is on the horizon.

There is also the possibility of isolationism. Countries that do not want to deal with this will not follow the path of meeting this threat. Also, there is backwardness, simply not comprehending the issue at hand. Look at how much of Africa has, over the past thirty years, largely been in denial of the AIDS epidemic. That is a massive disaster in its own right.

[Slide 13]

Moving on to assessing the various threats, I broke them down into authoritarianism, totalitarianism, paramilitary groups and radical social groups. What are the drivers for an iron-fisted government? I cannot stress enough the role of both positive and negative injunctions in making people act in a certain way. A state that distrusts and is fearful of its own citizens will pick quick, easy and lazy ways to deal with crisis situations, such as circumventing due process by tweaking the Constitution or putting it aside.

[Slides 14-16]

This type of regime can manifest itself through an existing democratic state like the United States or Canada; it is just a matter of putting the right tools into action to make it happen. The Nazis, for instance, were voted into power. It could also be a coup, a junta, or an occupying force establishing a police state or dictatorship. There was a coup d’état in Pakistan in 1999, with Musharraf taking control of a nuclear-capable country. What can an authoritarian state do to deal with the threat of a disaster? It can declare a state of emergency, suspend elections, dissolve the government, ban all criticism and protests, reduce privacy and mobility rights, and conduct illegal arrests and torture.

[Slide 17]

This would certainly be a threat in terms of the gross diminishment of our rights and civil liberties. It would provoke reactions internally and abroad, working to destabilize the situation even further. I think you could devote an entire symposium to the future of totalitarian threats. It is important to address, simply because it is an existential risk unto itself.

[Slide 18]

Why would an authoritarian state want to establish totalitarianism? It is the need for absolute social control, to mobilize the people and get them to think in the way that is in the state’s best interest through the imposition of an ideological imperative. This could be a religious imperative, or it could be a regime trying to get itself back on its feet after a global catastrophe. I think some of the same technologies that would work to enable totalitarianism in the 21st century would also work to undermine its instantiation; namely, communications technologies could be used to hack the system.

This could manifest itself through an existing regime. It might not be through the radical left or right as we know it; it could be the emergence of neo-totalitarianism under new conditions. The political tools would be the same as those of authoritarianism, but such a regime would also have at its disposal a monopoly on all political activity, on ideology, on the means of coercion and on the means of persuasion. All economic and professional activities would become subject to the state.