Wednesday, December 31, 2008
Forest fire fighting bot
This thing looks seriously badass: A forest fire "clear cut" robot.
Designed by Jordan Guelde and Daniel Shankland II for entertainment purposes (i.e. toys), the bot is intended to clear large areas of foliage from around a forest fire to help stop the flames from spreading. The device is equipped with an array of hubless motors and an integrated fuel system.
Via Yanko Design.
New Year's Eve sci-fi recommendation: Strange Days
If you're looking for a New Year's Eve themed movie to watch tonight I suggest you check out Strange Days.
It's a sleek, edgy and fascinating sci-fi film from 1995. While it's not a classic science fiction movie by any means, it is most definitely worth watching for its interesting and provocative transhumanist themes.
Directed by Kathryn Bigelow and produced and co-written by James Cameron, it stars Ralph Fiennes, Angela Bassett, Juliette Lewis, Tom Sizemore, Michael Wincott, and Vincent D'Onofrio. The film takes place on the eve of the millennium (December 31, 1999) in LA, and centers on the story of Lenny Nero (Ralph Fiennes), an ex-cop who peddles a kind of conscious-experience recorder and playback system. Called a "squid," it's a headpiece that transmits digital recordings of other people's thoughts, feelings, and memories directly into the wearer's brain. As Lenny describes it, "this is real life, pure and uncut, straight from the cerebral cortex."
Lenny deals "clips" (the recordings) as well as "squids" for this new and illegal entertainment system. Of course, sex and violence are the most popular themes, but Lenny refuses to deal in "blackjack" -- a slang term for snuff clips.
Reminiscent of 1983's Brainstorm, Strange Days is a film that deals not just with the potentially addictive, drug-like quality of such technologies, but with the ethical aspects as well.
Lenny, for example, can't quite get over his break-up with his former girlfriend. He happens to have clips of his experiences with her when they were together, so to help ease the pain, he escapes into the past by putting on a squid.
While the creation of snuff clips certainly attracts unwanted elements, the film also explores the potential benefits of this technology, namely the advantage of having completely realistic and shared out-of-body experiences. In one scene, for example, a character without legs is able to experience what it's like to run on a beach.
The film is also notable for its extended first-person POV shots, which required Bigelow's team to create entirely new, lightweight 35mm cameras. The opening scene, for example, is a dramatic no-cut sequence that is quite breathtaking, leaving you wondering how the hell they pulled some of the effects and stunts off.
Just one word of warning for the faint of heart: there's some pretty graphic violence and sexuality in this movie.
Currently reading: The Black Swan: The Impact of the Highly Improbable
My copy finally arrived in the mail yesterday: The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb. I'm looking forward to reading this one.
Taleb's 'Black Swan theory' refers to large-impact, hard-to-predict and rare events beyond the realm of normal expectations. Taleb regards many scientific discoveries as black swans, namely those that are undirected and unpredicted. He cites the rise of the Internet, the personal computer, World War I, and the September 11, 2001 attacks as examples of Black Swan events.
Chris Anderson, editor-in-chief of Wired magazine and the author of The Long Tail: Why the Future of Business Is Selling Less of More, offers this review:
Four hundred years ago, Francis Bacon warned that our minds are wired to deceive us. "Beware the fallacies into which undisciplined thinkers most easily fall--they are the real distorting prisms of human nature." Chief among them: "Assuming more order than exists in chaotic nature." Now consider the typical stock market report: "Today investors bid shares down out of concern over Iranian oil production." Sigh. We're still doing it.
Our brains are wired for narrative, not statistical uncertainty. And so we tell ourselves simple stories to explain complex things we don't--and, most importantly, can't--know. The truth is that we have no idea why stock markets go up or down on any given day, and whatever reason we give is sure to be grossly simplified, if not flat out wrong.
Nassim Nicholas Taleb first made this argument in Fooled by Randomness, an engaging look at the history and reasons for our predilection for self-deception when it comes to statistics. Now, in The Black Swan: the Impact of the Highly Improbable, he focuses on that most dismal of sciences, predicting the future. Forecasting is not just at the heart of Wall Street, but it’s something each of us does every time we make an insurance payment or strap on a seat belt.
The problem, Nassim explains, is that we place too much weight on the odds that past events will repeat (diligently trying to follow the path of the "millionaire next door," when unrepeatable chance is a better explanation). Instead, the really important events are rare and unpredictable. He calls them Black Swans, which is a reference to a 17th century philosophical thought experiment. In Europe all anyone had ever seen were white swans; indeed, "all swans are white" had long been used as the standard example of a scientific truth. So what was the chance of seeing a black one? Impossible to calculate, or at least it was until 1697, when explorers found Cygnus atratus in Australia.
Nassim argues that most of the really big events in our world are rare and unpredictable, and thus trying to extract generalizable stories to explain them may be emotionally satisfying, but it's practically useless. September 11th is one such example, and stock market crashes are another. Or, as he puts it, "History does not crawl, it jumps." Our assumptions grow out of the bell-curve predictability of what he calls "Mediocristan," while our world is really shaped by the wild powerlaw swings of "Extremistan."
In full disclosure, I'm a long admirer of Taleb's work and a few of my comments on drafts found their way into the book. I, too, look at the world through the powerlaw lens, and I too find that it reveals how many of our assumptions are wrong. But Taleb takes this to a new level with a delightful romp through history, economics, and the frailties of human nature.
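To make the Mediocristan/Extremistan distinction a bit more concrete, here's a minimal sketch of my own (not something from the book) comparing how much a single extreme observation can matter under a bell curve versus under a heavy-tailed power law; the particular distributions and parameters are illustrative assumptions only.
```python
# A minimal sketch (my own illustration, not from the book) contrasting
# Taleb's "Mediocristan" with "Extremistan": under a bell curve no single
# observation matters much, while under a heavy-tailed power law a single
# observation can dominate the total.
import random

random.seed(42)
N = 100_000

# Mediocristan: height-like data, normally distributed (mean 170, sd 10).
gaussian = [random.gauss(170, 10) for _ in range(N)]

# Extremistan: wealth-like data from a Pareto (power-law) distribution.
# Shape alpha = 1.1 is an illustrative assumption (its variance is infinite).
pareto = [random.paretovariate(1.1) for _ in range(N)]

for name, data in [("Mediocristan (Gaussian)", gaussian),
                   ("Extremistan (Pareto)", pareto)]:
    share_of_max = max(data) / sum(data)
    print(f"{name}: the single largest observation accounts for "
          f"{share_of_max:.4%} of the total")
```
In the Gaussian case the largest value is a negligible sliver of the total, while in the Pareto case a single draw can account for a noticeable share of everything -- which is roughly why bell-curve intuitions break down in Black Swan domains.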
Tuesday, December 30, 2008
Pleasure's perils: Why the 'sex chip' may not be such a good idea
Scientists have taken us one step closer to achieving permanent bliss.
Neuroscientists Morten Kringelbach and Tipu Aziz recently announced that they were able to stimulate the pleasure centers of the brain by implanting a chip that sends tiny shocks to the orbitofrontal cortex. This is the same area that is responsible for feelings of pleasure induced by such things as eating and sex.
Now before you put yourself on the waiting list for this device you may want to consider the implications. Sure, on-demand erotic bliss sounds all well and good -- but such an add-on would come at a considerable price. As experiments and real-life situations have demonstrated, there are limits to how much pleasure both humans and other animals are able to experience before extreme compulsiveness sets in. Simply put, our current psychologies aren't really capable of handling it.
For this and other reasons, the advent of the 'sex chip' -- or even the fabled orgasmatron -- would introduce a slew of ethical problems. Governments will more than likely classify these sorts of technologies as drugs and work to restrict access; a completely blissed out citizenry is hardly desirable in a corporatist system. Proponents will argue that it's an issue of cognitive liberty -- that people have a right to manipulate their own minds as they see fit and work to reduce suffering in themselves and others. And yet others will contend that there's a hedonistic imperative in effect with profound existential and spiritual implications for the species as a whole.
Suffice it to say, this will be a hotly contested topic in relatively short order.
Making pleasure
The ability to tweak the brain's pleasure center is nothing new.
Researchers James Olds and Peter Milner figured out a way to do it by accident in 1954 when they were studying the brain's reticular formation. During their experiments on rats, they discovered that electric shocks to the brain's septal area triggered the reward response. These responses were so potent that, when given the choice, the rats would rather starve themselves to death than give up the ability to flip their own reward switch; at its worst, the rats were obsessively flipping their switches at 5-second intervals.
In the following decades, neuroscientist Robert G. Heath began to experiment on larger mammals, including bulls and humans. He developed a device composed of electrodes and an implant tube (called a cannula) that could deliver precise doses of chemicals into the brain. Specifically, he injected acetylcholine into a patient's septal area, which caused "vigorous activity" to show up on the EEG. Patients undergoing this experiment described intense pleasure, including multiple orgasms lasting as long as thirty minutes.
In 1972, Heath attempted to "cure" a 24-year-old man's homosexuality by using the technique to recondition his sexual orientation. During a three-hour span the man, infamously known as subject "B-19," stimulated himself nearly 1,500 times, inducing feelings of "almost overwhelming euphoria and elation." At the end of the experiment B-19 had to be forcibly disconnected from the device. [It's worth noting that the experiment did not alter B-19's sexual orientation.]
More recently, as part of some early work on Deep Brain Stimulation (DBS) in 1986, a 48-year-old woman had a stimulating electrode (nVPL) implanted in her right central thalamus to treat chronic pain. After discovering that the stimulation could also produce erotic sensations, she began to compulsively self-stimulate, and -- given that she had control over the bursts -- eventually developed a severe addiction to the stimulator.
It got so bad, in fact, that she stimulated herself throughout the day, to the point where she began to neglect personal hygiene and family commitments. The patient even developed a chronic ulceration at the tip of the finger she used to adjust the amplitude and, interestingly, frequently tampered with the device in an effort to increase the stimulation. At one point she asked for limited access to the device, only to later demand that it be returned to her.
Over the course of two years, compulsive use of the stimulator became associated with frequent attacks of anxiety, depersonalization, periods of psychogenic polydipsia and complete inactivity. A similar case was recorded in 2005, when a Parkinson's patient developed an addiction to a DBS electrode that produced a 'morphine-like' sensation.
Too much of a good thing?
There's no doubt in my mind that an implantable 'sex chip' would result in a slew of these pathologies. Our capacity for pleasure in the natural state has been carefully calibrated by the forces of natural selection. Feelings of sexual stimulation only needed to be good enough to encourage reproduction -- but not so good that an animal would be obsessed to the point of self-neglect. Nature did not prepare our psychologies for these extreme out-of-bounds sensations.
Pleasure-inducing technological devices threaten to overturn our delicate psychological balance. We already know how drugs mess with the limits of human restraint, and it's often the psychological dependence caused by these stimulants that's very difficult to overcome. Once a person feels the extremes of pleasure it's very difficult to come back down -- and even more so when they have control over the inducement of that pleasure.
So, should these devices be banned?
Yes and no.
Like the current prohibition on both soft and hard drugs, there's a certain efficacy to a paternalistic imperative that works to protect citizens from themselves. Sex chip junkies wouldn't be unlike other kinds of junkies: highly addicted and dysfunctional persons would find themselves outside the social contract and completely dependent on the state.
But what about the pursuit of happiness and other freedoms? And our cognitive liberties? A strong case can be made that we all have a vested interest in the quality of our own minds and the nature of our subjective experiences. Ensuring access to these sorts of technologies may prove to be a very important part of the struggle for psychological autonomy.
This issue also brings to mind the hedonistic imperative. There's more to this debate than the immediate needs of our materialist condition and our Puritan predispositions. This is an issue with deep existential and spiritual implications. In a hostile universe with no meaning other than what we ascribe to it, who's to say that entering into a permanent state of bliss is somehow wrong or immoral? It could be said that maximizing the human capacity for pleasure is as valid a purpose as any other.
But as demonstrated above, self-stimulation has its pitfalls. It's not easy to come back to a regular baseline life after experiencing prolonged periods of bliss. As a result, I see the bliss-out option as something that makes more sense for persons in their later years. In fact, given the potential for radically extended lifespans, this may be a very reasonable option outside of voluntary death; once a person decides that they've had enough of the crazy game that is life they should be able to opt into a state of permanent bliss (the same could be said for those suffering from chronic pain or illnesses).
But by doing so, a person would effectively disengage from an active and purposeful life. And not only that: given a powerful enough pleasure device, persons would effectively cease to be persons, replaced instead by purely experiential agents. In a way it would be like a kind of death.
In the meantime, we need to be careful about what we wish for and take this talk about a 'sex chip' with a grain of salt. Sure, it makes for titillating headlines, but it's probably not something most of us need in our lives at this exact moment.
References:
- "Technology and the Pursuit of Happiness," Damn Interesting
- "Bionic 'sex chip' that stimulates pleasure centre in brain developed by scientists," The Daily Mail
- "Erotic self-stimulation and brain implants," MindHacks
Thursday, December 25, 2008
Go vegetarian in 3 easy steps
Here's a New Year's resolution idea: go veg.
You know you want to.
You're becoming increasingly concerned about your health, the environment and the welfare of farm animals. And let's face it, meat is tasting blander with each passing year. You've thought about all these things and it's leading you to an incontrovertible decision: it's time to go vegetarian.
But where to start?
Don't fret, it's easier than you might think -- even if you're a voracious carnivore like I used to be. And to help you get started I've broken the initial transition phase into three basic steps.
Step 1: Decide on the kind of vegetarian you want to be
Before you get started you're going to have to set up some ground rules for yourself and decide what food stays and what goes. There are no hard-and-fast rules about this, but you'll need to tap into your motivations for becoming a vegetarian to help you make a decision.
If it's animal welfare that you're concerned about, then you'll want to eliminate as many animal products and by-products as possible. You'll need to consider the harm being done to farm animals at each phase of the product cycle and assign moral consideration to different species (e.g. I place fish much lower on my scale than pigs). This can be a very subjective and personal thing, but it's something you're going to have to figure out.
Another option is eating more 'ethically.' For example, you can start to buy eggs laid by free-run chickens. Or, if environmentalism is your primary concern, you can start to buy from local food sources.
In terms of the kind of vegetarianism that's possible, here's a list of the most common types:
- Lacto-ovo vegetarianism: No meat, but you can eat dairy and eggs
- Lacto vegetarianism: No meat and no eggs, but you can eat dairy products
- Ovo vegetarianism: No meat and no dairy, but you can eat eggs
- Veganism: No meat or animal by-products of any kind -- not even honey
- Pescetarianism: You can eat fish, shellfish, and crustacea
- Pollotarianism: You can eat poultry and fowl
- Flexitarianism: A diet that is primarily vegetarian, but you can eat meat when the craving hits
Step 2: Do a trial run
Rather than declare that you're suddenly going to be a vegetarian for the rest of your life, I suggest that you break it down into a much smaller step and make it an experiment. By doing so you put less pressure on yourself to succeed and you can be much more relaxed about the whole thing. I became a vegetarian as a part of a 4-week trial run. Much to my surprise it turned out to be a breeze and I was very comfortable with the transition. It's been six years since my 'experiment' and I haven't looked back.
But you may have a different experience. You may go into serious meat withdrawal or find that your energy level is low. This is fine and normal. What's important is that you reflect on these points of friction and work to find solutions.
If your energy is low it may be an indicator that you're not eating the right foods (pick up some health and nutrition books to ensure that you're getting a balanced diet). It may also be a sign that your body is adjusting to a meat-free diet; these feelings will pass (for those of you who have done a cleanse, you'll know what I'm talking about -- you'll have an abundance of energy in short order).
And if you're craving meat badly, then you might want to think about putting more meat-like foods on the menu (hey, it's a comfort thing -- there's no problem with that).
But what you need to realize is that most of your initial hurdles will stem from knowledge gaps or psychological issues. By preparing to address these problems you're setting yourself up for success. And once your trial run is over you'll be in a better position to decide if vegetarianism is right for you.
Step 3: Stock your kitchen with the right foods
There's more -- much more -- to being a vegetarian than having carrots and celery in the fridge. You'll have to learn how to shop and cook like a vegetarian, which means you're going to have to change the complexion of your kitchen.
I highly recommend that you get some vegetarian cookbooks. You'll find excellent advice in these books about meal ideas (everything from appetizers through to entrees and desserts) and the kinds of food you should have stocked in the house. One of my favorites is The Clueless Vegetarian by Evelyn Raab.
And if you're a die-hard meat eater, don't despair. There are a number of fantastic meat-substitutes on the market these days, including simulated burgers, ground meat and bacon. There are also some excellent recipe ideas for meals that are very 'hearty' in the meat-eating sense. Be sure to check out 8 Meatless Dishes for Meat-n-Taters Lovers.
A word of caution, though -- it's imperative that you avoid a common trap of transitioning to a vegetarian diet: over-compensating with not-so-healthy substitutes. When I went veg my crutch was cheese. I've since scaled back and learned to substitute with much healthier and less fatty foods. I know of others who have gone overboard with chips and soda-pop. I assure you -- you will also have a crutch. Just make sure that you identify it and, if it's unhealthy, address it immediately.
Okay you're all set to go veg. Three easy steps that will at the very least get you started in the right direction.
Give it a shot and find out if being a vegetarian is right for you.
Wednesday, December 24, 2008
I ain't givin' up on sleep - A SentDev Classic
A common human 'limitation' that many transhumanists would like to overcome is that of sleep. I am not one of them.
Yes, there are days when I most certainly wish I had more time and energy to do all the things I want to do, but in my mind there are simply too many trade-offs involved -- trade-offs that are not worth it and possibly even dangerous. Moreover, there are emotional, psychological and aesthetic reasons for not wanting to eliminate sleep.
Before I get into these considerations it's worth noting that I may be in the minority here. Demand for stimulants and sleep-replacement drugs is skyrocketing. Take Modafinil, for example. This is truly a lifestyle drug for the 24/7 age. Sales are so good that Cephalon, the company that produces Modafinil, is already developing its successor, Armodafinil, and the experimental drug CEP-16795. Looking further into the future, there will be wakefulness promoters that can safely abolish sleep for several days at a stretch, and sleeping pills that will deliver what feels like 8 hours of sleep in half the time. This is an idea, it appears, whose time has come.
Modafinil is truly a remarkable drug. Users can get by on very little sleep -- as little as 4 to 5 hours per night. It has even been known to help people stay awake for as much as 48 consecutive hours.
Unlike other stimulants such as caffeine or amphetamines, Modafinil does not result in side effects like jitters, euphoria and crashing. Remarkably, users don't seem to have to pay back any sleep debt. It differs from other stimulants in that it offers the brain many of the same benefits that normal sleep does. Traditional stimulants tend to fake the effects of proper sleep, often with long-term consequences like sleep disorders and ongoing mental fatigue. Modafinil, on the other hand, tends to deliver a genuine feeling of alertness and wakefulness.
There have been very few complaints of side effects from users aside from some complaints of headaches. That said, there may be unseen problems down the road as Modafinil and other drugs start to become more widely used.
What's interesting and even a bit disturbing is that no one is really sure how it works -- although speculation exists that Cephalon is keeping the answer secret. What is known is that modafinil prevents nerve cells from reabsorbing dopamine, an excitatory neurotransmitter, once it is released into the brain -- but it does so without producing the addictive highs and painful crashes associated with most stimulants. It has been suggested that this is possible because modafinil also interferes with the reuptake of another neurotransmitter, noradrenalin.
Keeping people awake and alert is one thing, addressing the host of things sleep does for the brain is quite another. The sleep cycle is a complex process with multiple phases (e.g. "slow-wave" sleep versus shallower stage 2 sleep, REM phase, etc.). Each phase plays a particular role in brain restoration and regeneration. It will be some time yet before all aspects of the sleep architecture are cataloged, understood and converted into pill form. In the meantime, there may be many individuals who in their rush to eliminate sleep from their lives are putting their cognitive health at risk.
For example, scientists at the Max Planck Institute for Medical Research recently discovered that sleep helps consolidate memories. According to their findings, new information is transferred between the hippocampus, the short-term memory area, and the cerebral cortex during sleep. They concluded that it is the cerebral cortex that actively controls this transfer. Quite obviously, if pills like modafinil and other stimulants don't address something as vital as memory storage, people who completely avoid sleep will soon begin to exhibit serious problems.
I'm not suggesting that the sleep architecture is intractably complex. The general consensus amongst the developers is that it's not a question of if but when. Some day soon we will have the option to give up on sleep entirely and live 24-hour days.
Personally, I can understand the desire for these drugs on an as-needed basis. I most certainly could have used something like modafinil back in 2004 when I chaired the TransVision conference; I think I slept a total of only 10 hours during a 4-day stretch. It took me weeks to recover.
But as for eliminating sleep altogether, I'm not so sure I'm inclined to do that. I love going to bed and sleeping. I adore that sleepy, dreamy feeling in the early morning when the body is relaxed and I'm hitting the snooze button. I'm reminded of John Lennon's lyrics to "I'm Only Sleeping":
When I wake up early in the morning
Lift my head, I'm still yawning
When I'm in the middle of a dream
Stay in bed, float up stream
Please don't wake me, no
don't shake me
Leave me where I am
I'm only sleeping
Everybody seems to think I'm lazy
I don't mind, I think they're crazy
Running everywhere at such a speed
Till they find, there's no need
Keeping an eye on the world going by my window
Taking my time
Lying there and staring at the ceiling
Waiting for a sleepy feeling
Like Lennon, I am also very fond of dreaming. It's the only time that I can become (quite literally) someone else and dwell in utterly insane and surreal worlds. I wish some of my dreams could be made into movies.
There are also some emotional and social aspects of sleep to consider. There's nothing quite like making love to your partner and having them fall asleep in your arms. And how wonderful it is to snooze, cuddle and wake up next to someone (provided they didn't steal the sheets, of course).
Sure, sure -- I may sound overly sentimental about the whole thing and even a little Kassian in my seemingly bioLuddite tone. But in all seriousness, these sleepy Dali-like and Learyesque dreamlike states closely resemble my own expectations as to what a posthuman existence might be like. Given how unorthodox and unreal Second Life is becoming, I can't even begin to imagine what an open-ended digital existence might be like. And like the uploaded character in Egan's Diaspora who refuses to give up urinating and defecating for aesthetic reasons, I too would want to retain those biological vestiges that I believe have an intrinsic value.
Aside from these somewhat romantic notions, there are other day-to-day practicalities about not sleeping that should be considered.
I find that sleep provides an essential break from the routine of life. It offers not only a physical and emotional break, but an existential one as well. Sleep is like a temporary death you undergo each night, only to be reborn the next day (a very Buddhist notion). I find that the length of time spent sleeping is important as well. Despite being unconscious for an extended period, I can estimate with excellent accuracy the length of time I have been sleeping. I don't think 3 to 4 hours would cut it for me.
Moreover, there is always the risk that our corporatist society will change the rules of the game once sleep becomes optional. Working hours may be extended to unacceptable levels, and poor people will take the opportunity to work the full 24 hours just to make ends meet. The mind may not need sleep, but the physical body most certainly does.
I would certainly hope that, given the added time, people would instead focus their energies on leisure activities. Still, speaking from personal experience, the intensity of my leisure activities is starting to demand respites of its own.
Again, I'm not suggesting that everybody abandon the thought of giving up on sleep. I'm merely making the point that this is not for me. Be careful of what you wish for, as they say, but even more careful about what you may come to lose.
_________________________
References:
- Get ready for 24-hour living (New Scientist)
- To Sleep, Perchance to Process Memory (Wired)
- An end to sleep? (Futurismic)
- Learning During Sleep? (Max Planck Society)
- A real eye opener (The Age)
This article was originally published on December 15, 2006.
Monday, December 22, 2008
Future risks and the challenge to democracy
As we prepare for the emergence of the next generation of apocalyptic weapons, it needs to be acknowledged that the world's democracies are set to face their gravest challenge yet as viable and ongoing political options.
The continuing presence and increased accessibility of Weapons of Mass Destruction (WMDs) are poised to put an abrupt end to politics as usual. Technologies that threaten our very existence will greatly upset current sensibilities about social control and civil liberties. And as a consequence, those institutions that have worked for centuries to protect democratic and humanistic values will be put to the test – a test that may ultimately result in a significant weakening of democracy, if not its outright collapse.
The pending political situation is categorically different from the one that followed the Manhattan Project and the advent of nuclear weapons. While proliferation was a problem in the decades following The Bomb's development, the chances of those weapons getting into the hands of a so-called 'rogue state' or non-state actors were slim to none (unless you consider the former Soviet Union, Cuba, China and Pakistan to be rogue states). Moreover, as we move forward we will have more than just nuclear weapons to worry about; future WMDs include bioweapons (such as deliberately engineered pathogens), dirty bombs, weaponized nanotechnology, robotics, misused artificial intelligence, and so on.
What makes these WMDs different is the growing ease of acquisition and implementation by those who might actually use them. We live in an increasingly wired and globalized world where access to resources and information has never been easier. Compounding these problems is the rise and empowerment of non-traditional political forces, namely weak-states, non-state actors and disgruntled individuals. In the past, entire armadas were required to inflict catastrophic damage; today, all that’s required are small groups of motivated individuals.
And the motivations for using such weapons are set to escalate. Political extremism begets political extremism; government clamp-downs (both internal and external) will likely elicit radical counter-reactions. There is also the potential for global-scale arms races as new technologies appear on the horizon (molecular assembling nanotechnology being a likely candidate). Such arms races could not only increase international tensions, but also instigate espionage and preemptive strikes.
Given these high stakes situations, democratic institutions may not be given the chance to prevent catastrophes or deal with actual crises.
21st Century realities
Politics and conflict in the 20th Century were largely centered on differing opinions about the redistribution of wealth. It was a time of adjusting to the demands of the modern nation-state, large populations and mature industrial economies. Responses to these challenges included the totalitarian experiments and World War II -- and, for those nations that resisted the radical urge, the instantiation of Keynesian economics and the welfare state.
The coming decades will bear witness to similar sorts of political experimentation and restructuring, including a renewed devotion to extreme measures and radicalism. It is becoming increasingly clear that 21st Century politics will be focused around managing the impacts of disruptive technologies, addressing the threats posed by apocalyptic weapons and environmental degradation, and attending to global-scale catastrophes and crises as they occur.
This restructuring is already underway. We live in the post-9/11 world -- a world in which we have legitimate cause to be fearful of superterrorism and hyperterrorism. We will also have to reap what we sowed in regard to our environmental neglect. Consequently, our political leaders and institutions will increasingly be called upon to address the compounding problems of unchecked WMD proliferation, terrorism, civil unrest, pandemics, the environmental impacts of climate change (like super-storms, flooding, etc.), fleets of refugees, devastating food shortages, and so on. It will become very necessary for the world's militaries to anticipate these crises and adapt so that they can meet these demands.
More challenging, however, will be avoiding outright human extinction.
Indeed, the term 'existential risks' is beginning to take root in the vernacular. During the presidential debates, for example, John McCain used the expression to illustrate the severity of the Iranian nuclear threat against Israel. While McCain was referring to the threat to Israel's existence, the idea that humanity faces a genuine extinction risk has returned to the popular consciousness. Eventually these perceived risks will start to play a more prominent role in the political arena, both in terms of politicking and in the forging of policy itself.
So much for the End of History and the New World Order
When the Cold War ended it was generally thought that major wars had become obsolete and that a more peaceful and prosperous era had emerged. Some commentators, like the political scientist Francis Fukuyama, declared that Western liberal democracy and free-market capitalism had triumphed and that it would only be a matter of time before they spread to all regions of the planet. For Fukuyama, this equated to the 'end of history.'
It was also around this time that George H. W. Bush proclaimed the advent of a New World Order. With the collapse of European Communism and the end of bi-polar geopolitics it was hoped that nuclear disarmament would soon follow and with it a global community largely free of conflict.
Today, however, we see that these hopes were idealistic and naïve. There is still plenty of strife and violence in the international system. In fact, the current multi-polar geopolitical arrangement has proven to be far more unstable than the previous orientation, particularly because it has allowed economic, political and cultural globalization to flourish, and along with it, the rise of asymmetrical warfare and escalating motivations for rogue nations and non-state actors to exert terrible damage.
Despite the claims of Fukuyama and Bush, and despite our own collective sensibilities, we cannot take our democracies and civil liberties for granted. When appraising the condition of democracies we must realize that past successes and apparent trajectories are no guarantees of future gain. Indeed, democracy is still the exception around the world and not the rule.
Historically speaking, democracies are an abnormality. As recently as 1972 only 38% of the world’s population lived in countries that could be classified as free. Today, despite the end of the Cold War, this figure has only crept up to 46%. We may be the victims of an ideological bias in which we’ve assumed far too much about democracy’s potential, including its correlation with progress and its ability to thrive in drastically different social environments.
Catastrophic and existential risks will put democratic institutions in danger given an unprecedented need for social control, surveillance and compliance. Liberal democracies will likely regress to de facto authoritarianism under the intense strain; tools that will allow democratic governments to do so include invoking emergency measures, eliminating dissent and protest, censorship, suspending elections and constitutions, and trampling on civil liberties (illegal arrests, surveillance, limiting mobility, etc).
Looking further ahead, extreme threats may even rekindle the totalitarian urge; this option will appeal to those leaders looking to exert absolute control over their citizens. What’s particularly frightening is that future technologies will allow for a more intensive and invasive totalitarianism than was ever thought possible in the 20th Century – including ubiquitous surveillance (and the monitoring of so-called ‘thought crimes’), absolute control over information, and the redesign of humanity itself, namely using genetics and cybernetics to create a more traceable and controllable citizenry. Consequently, as a political mode that utterly undermines humanistic values and the preservation of the autonomous individual, totalitarianism represents an existential risk unto itself.
Democracy an historical convenience?
It is possible, of course, that democracies will rise to the challenge and work to create a more resilient civilization while keeping it free. Potential solutions have already been proposed, such as strengthening transnational governance, invoking an accountable participatory panopticon, and the relinquishment of nuclear weapons. It is through this type of foresight that we can begin to plan and restructure our systems in such a way that our civil liberties and freedoms will remain intact. Democracies (and human civilization) have, after all, survived the first test of our apocalyptic potential.
That said, existential and catastrophic risks may reveal a dark path that will be all too easy for reactionary and fearful leaders to venture upon. Politicians may distrust seemingly radical and risky solutions to such serious risks. Instead, tried-and-true measures, where the state exerts an iron fist and wages war against its own citizens, may appear more reasonable to panicked politicians.
We may be entering into a period of sociopolitical disequilibrium that will instigate the diminishment of democratic institutions and values. Sadly, we may look back some day and reflect on how democracy was an historical convenience.
Sunday, December 21, 2008
Jeffrey Kripal on Aldous Huxley and the 'neural Buddhists'
Writing in the Chronicle Review, Jeffrey Kripal argues that a kind of Huxley renaissance is under way. "It is worth returning to Huxley," writes Kripal, "not as he has been for us in the past — the author of the prophetic, dystopian Brave New World — but as he might be for us in the future."
Kripal sees a connection between Huxley's work and that of the burgeoning neural Buddhist movement. He writes:
But Huxley was suspicious of gurus and gods of any sort, and he finally aligned himself with a deep stream of unorthodox doctrine and practice that he found running through all the Asian religions, which, he proclaimed in Island (his last novel, published in 1962), was a "new conscious Wisdom ... prophetically glimpsed in Zen and Taoism and Tantra." That worldview — which Huxley also linked to ancient fertility cults, the study of sexuality in the modern West, and Darwinian biology — emerges from the refusal of all traditional dualisms; that is, it rejects any religious or moral system that separates the world and the divine, matter and mind, sex and spirit, purity and pollution (and that's rejecting a lot). Put more positively, Huxley's new Wisdom focuses on the embodied particularities of moment-to-moment experience, including sexual experience, as the place of "luminous bliss."
Science, particularly what would become neuroscience, was a key part of that mature vision. Very late in life, Huxley would drift further and further into an oddly prescient fusion of Tantric Buddhism and neurophysiology, a worldview captured in the "neurotheologian" of Island, identified there as someone "who thinks about people in terms, simultaneously, of the Clear Light of the Void and the vegetative nervous system." This Buddhist neurotheologian was in fact a fictional embodiment of Huxley's own philosophy, which we might frame as "the filter thesis." Following the philosophers Henri-Louis Bergson and C.D. Broad, Huxley consistently argued that consciousness was filtered and translated by the brain through incredibly complex neurophysiological, linguistic, psychological, and cultural processes, but not finally produced by it. We are not who we think we are. Or better, who we think we are is only a temporary mask (persona) that a greater Consciousness wears for a time and a season in order to "speak through" (per-sona). That old English bard had it just right, then: The world really is a stage.
Read the entire article.
Swarm bots in action
Check out how these robots work in concert to execute a goal, namely pulling a young girl across the room. Individually these bots are useless, but together they get the job done. This is an excellent illustration of how individual and identical agents, when designed and programmed effectively, can be used in a collaborative fashion.
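If you want a feel for the principle, here's a minimal sketch in Python -- my own toy model, not the actual control code for these bots, with made-up numbers for the forces involved. It just shows how identical, individually feeble agents can together cross a threshold that none of them could reach alone:

```python
# Toy model: many identical, individually weak agents drag a load that no
# single agent could budge. All numbers are arbitrary illustrative units.

FRICTION = 5.0       # force needed before the load moves at all
AGENT_FORCE = 0.4    # pull contributed by one bot
LOAD_MASS = 20.0

def drag_distance(n_agents: int, seconds: float = 10.0) -> float:
    """Distance the load moves when n identical agents pull together."""
    total_force = n_agents * AGENT_FORCE
    net_force = max(0.0, total_force - FRICTION)  # below the threshold, nothing happens
    acceleration = net_force / LOAD_MASS
    return 0.5 * acceleration * seconds ** 2      # simple kinematics from rest

if __name__ == "__main__":
    for n in (1, 5, 10, 20, 40):
        print(f"{n:2d} agents -> load moves {drag_distance(n):5.1f} units")
    # One bot accomplishes nothing; a swarm of 40 drags the load easily.
```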
Now imagine that these bots are self-replicating...
Saturday, December 20, 2008
Book: Ehrlich & Ehrlich: The Dominant Animal: Human Evolution and the Environment
Stanford's Paul R. Ehrlich and Anne H. Ehrlich make the case that the human species needs to adapt to the environment and change its "individual motives and values."
Book description:
In humanity’s more than 100,000 year history, we have evolved from vulnerable creatures clawing sustenance from Earth to a sophisticated global society manipulating every inch of it. In short, we have become the dominant animal. Why, then, are we creating a world that threatens our own species? What can we do to change the current trajectory toward more climate change, increased famine, and epidemic disease?
Renowned Stanford scientists Paul R. Ehrlich and Anne H. Ehrlich believe that intelligently addressing those questions depends on a clear understanding of how we evolved and how and why we’re changing the planet in ways that darken our descendants’ future. The Dominant Animal arms readers with that knowledge, tracing the interplay between environmental change and genetic and cultural evolution since the dawn of humanity. In lucid and engaging prose, they describe how Homo sapiens adapted to their surroundings, eventually developing the vibrant cultures, vast scientific knowledge, and technological wizardry we know today.
But the Ehrlichs also explore the flip side of this triumphant story of innovation and conquest. As we clear forests to raise crops and build cities, lace the continents with highways, and create chemicals never before seen in nature, we may be undermining our own supremacy. The threats of environmental damage are clear from the daily headlines, but the outcome is far from destined. Humanity can again adapt—if we learn from our evolutionary past.
Those lessons are crystallized in The Dominant Animal. Tackling the fundamental challenge of the human predicament, Paul and Anne Ehrlich offer a vivid and unique exploration of our origins, our evolution, and our future.
John A. Robertson: Procreation as a self-defining experience
"The lens of procreative liberty is essential because reproductive technologies are necessarily bound up with procreative choice. They are means to achieve or avoid the reproductive experiences that are central to personal conceptions of meaning and identity. To deny procreative choice is to deny or impose a crucial self-defining experience, thus denying persons respect and dignity at the most basic level." -- John A. Robertson
Friday, December 19, 2008
Summer Johnson: Who Cares if the Vatican Weighs in on Bioethics?
Great OpEd at the Bioethics Blog by Summer Johnson about the recent admonition from the Roman Catholic Church: "Who Cares if the Vatican Weighs in on Bioethics?":
No one. At least that's my view. Certainly not American Catholics who use birth control, IVF, pre-implantation genetic diagnosis, and a wide range of other reproductive technologies previously and even more so now get a Holy finger wagging from Rome.
Moreover, I don't know any American Catholics who oppose the commonplace forms of human enhancement we use today and even the borderline ones such as taking brain-boosting drugs or drugs to help us sleep at night.
And this concept that the conservative movement more generally keeps leveraging--human dignity--the centerpiece of Catholic bioethics? Perhaps it could be useful, if anyone knew what it meant.
Entire article.
Wednesday, December 17, 2008
Most people favor reproductive technologies -- but not sex selection
A recent poll conducted in 15 countries by the BBVA Foundation shows that citizens in the developed world are largely in support of assisted reproductive technologies. In particular, most people polled were very much in support of in vitro fertilization, a technique used to help couples with fertility problems (scoring over 7 points on an acceptance scale from 0 to 10). At the same time, however, there was strong disapproval for using the technique to choose a baby's gender, with scores consistently showing below 3 points.
In fact, preimplantation genetic diagnosis (PGD) -- a genetic test that can be carried out on the embryos obtained from artificial fertilization in order to select those to be implanted in the uterus of the future mother -- scored much higher in acceptance than sex selection. But that said, when it was considered as a technique to screen for gender, PGD was flatly rejected for that purpose.
What is it about sex selection that gives cause to such rejection?
For me this is a no-brainer. Couples in the developed world, where gender discrimination and biases are less prominent, should be allowed to use gender selection for family balancing purposes. I'm absolutely flabbergasted that this is still not a right in some countries, including Canada where couples and their doctors face the threat of large fines and jail terms.
Admittedly, not all countries are ready for sex selection; India and China certainly come to mind. But that's not our problem, nor is it an indication of how sex selection would be used here. The idea that sex selection would significantly skew the gender balance here in the developed world is terribly misguided and not based on any real evidence. Given the 2 children per couple tendency, it's highly likely that most couples would opt to have a boy and a girl.
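As a quick illustration of why I don't expect a skew, here's a toy Monte Carlo sketch in Python. The proportions are invented for the sake of argument, not drawn from the BBVA poll or any other data; the point is simply that family balancing itself leaves the sex ratio untouched, and the ratio only drifts to the extent that same-sex preferences are lopsided:

```python
import random

def share_of_boys(n_couples: int = 100_000,
                  balancers: float = 0.7,
                  want_boys: float = 0.05,
                  want_girls: float = 0.05) -> float:
    """Fraction of boys among the children of two-child couples who may select sex.

    All proportions are invented for illustration: 'balancers' pick one boy and
    one girl, 'want_boys'/'want_girls' pick two of the same sex, and the rest
    leave both births to chance.
    """
    boys = girls = 0
    for _ in range(n_couples):
        r = random.random()
        if r < balancers:                             # one of each
            boys += 1
            girls += 1
        elif r < balancers + want_boys:               # two boys
            boys += 2
        elif r < balancers + want_boys + want_girls:  # two girls
            girls += 2
        else:                                         # no selection: two coin flips
            b = sum(random.random() < 0.5 for _ in range(2))
            boys += b
            girls += 2 - b
    return boys / (boys + girls)

if __name__ == "__main__":
    print(f"share of boys: {share_of_boys():.3f}")
    # Stays at ~0.500 as long as the 'want boys' and 'want girls' minorities
    # roughly cancel out; only a lopsided same-sex preference moves it.
```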
Another argument against sex selection is that it is prejudicial by its very nature -- that the very presence of preference indicates that gender biases exist and will continue to be reinforced. While this is a more nuanced argument, it fails to take into account an undeniable aspect of the human condition: we are a gendered species and gender differences do in fact exist.
A consequence of these differences is the rise of preference. We are a subjective and pragmatic species; we favor situations in which we have better control over our lives and our choices. Moreover, is it so wrong for a couple to want to have a girl, for example? Why should they be faulted for that? Why is preference in this case considered so evil? What about other morphological or psychological traits? Is it acceptable to screen for those characteristics, but not gender itself?
Until we truly get over gender in the biological sense, we need to re-acquaint ourselves with the realities of our gendered condition; we need to speak to the unique needs of male and female psychologies and biologies. And as long as we have laws and social norms that protect women (and men) from discrimination we shouldn't have to worry too much about any kind of prejudicial thinking stemming from couples who simply want to choose a boy or a girl.
Cows are people, too - A SentDev Classic
This article was originally published on March 5, 2005.
As someone concerned with and supportive of non-anthropocentric personhood ethics, I've long insisted that any agent capable of subjective and emotional experience--no matter how subtle or simple--deserves moral consideration. Consequently, even the most "lowly" non-human animals IMO qualify for personhood status of varying degrees; it's been long known that many animals have their own personalities and emotional life. Moreover, most of them are capable of suffering, so as animal rights activist Peter Singer has bravely and cogently argued, we need to look out for their welfare.
Reinforcing this notion, recent studies have shown that farm animals do in fact exhibit human-like qualities in terms of their emotional life and in their relations with other animals. Cows in particular, who are often used as an example to showcase mindless docility in animals, do in fact have a complex internal psychological life in which they bear grudges, nurture friendships and become excited over intellectual challenges. They're also capable of feeling strong emotions such as happiness, pain, fear and even anxiety.
And very importantly, it's now suspected that they are even capable of worrying about the future--a psychological attribute that is often considered a critical threshold in personhood determination (i.e. a person should be capable of making plans and having intentions over time). If cows are worrying about the future, that means that i) they have a sense of self, ii) they are concerned about their welfare (which is intention), and iii) they can imagine themselves in the future (which it can be argued is a type of planning, in that they "plan" or expect themselves to exist in the future). Sounds like a person to me.
Many of these characteristics have been observed in other farm animals, including pigs, goats, and chickens. Consequently, animal rights advocates are urging that animal welfare laws be significantly rethought and reconsidered. Christine Nicol, professor of animal welfare at Bristol University, believes that because remarkable cognitive abilities and cultural innovations have been revealed, "[o]ur challenge is to teach others that every animal we intend to eat or use is a complex individual, and to adjust our farming culture accordingly." She argues that even chickens should be treated as individuals with needs and problems.
Other observations about farm animal behaviour include:
- cows within a herd form smaller friendship groups of between two and four animals with whom they spend most of their time, often grooming and licking each other
- cows will also dislike other cows and can bear grudges for months or years
- dairy cow herds have a complex and intense sex life
- cows become excited when they solve intellectual challenges; when solving problems, their heartbeats go up and some even jump into the air in excitement
- sheep can remember 50 ovine faces (even in profile) and they can recognise another sheep after as much as a year apart
Monday, December 15, 2008
RAND institute considers producing liquid fuels from coal
It looks like the U.S. government has RAND looking into the implications of using coal to produce fuel: Producing Liquid Fuels from Coal: Prospects and Policy Issues. This could be a big deal, especially considering the size of the industry it would likely create. A CTL industry would undoubtedly yield important energy security benefits -- most notably a lowering of world oil prices and a decrease in wealth transfers from oil users to oil producers. It would also make the U.S. significantly less dependent on foreign oil.
At the same time, however, the advent of a CTL industry would very likely instigate a host of environmental problems. And it would ultimately prove to be a costly and myopic diversion at a time when more innovative energy solutions are required.
RAND speculates that a domestic CTL industry could produce as much as three million barrels per day of transportation fuels by 2030. This would require 550 million tons of coal production annually. While the U.S. has considerable coal reserves, the amount of mining that would be required (most prominently in Wyoming and Montana) would be nothing short of intense. The residual impacts of mining would adversely change the landscape, the local ecology and water quality. In addition, the coal-to-liquid process requires extremely high levels of water consumption -- this at the dawn of a water shortage crisis.
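Taking RAND's two headline figures at face value, a little back-of-the-envelope arithmetic (my own, not RAND's) gives a sense of the scale involved:

```python
# Back-of-the-envelope arithmetic using only the two figures cited above.
barrels_per_day = 3_000_000         # projected CTL fuel output by 2030
coal_tons_per_year = 550_000_000    # coal required to feed that output

barrels_per_year = barrels_per_day * 365
print(f"{barrels_per_year:,} barrels of fuel per year")                                # ~1.1 billion
print(f"{barrels_per_year / coal_tons_per_year:.1f} barrels of fuel per ton of coal")  # ~2.0
print(f"{coal_tons_per_year / 365 / 1e6:.1f} million tons of coal mined per day")      # ~1.5
```

In other words, each ton of coal would yield roughly two barrels of fuel, and the industry would have to dig up something like a million and a half tons of coal every single day to keep the pumps running.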
More conceptually, however, the idea of producing liquid fuel from coal seems more reactionary than visionary. This move would not address the greenhouse gas emissions problem, and it would be yet another industry that ravages the Earth in search of a non-renewable resource.
While RAND's conclusion is somewhat tempered and cautionary, the institute does suggest that the U.S. engage in an "insurance policy" approach that promotes the early construction and operation of a limited number of commercial CTL plants.
Here's hoping that the billions of dollars required to create a fully robust CTL industry will instead be directed to something less environmentally damaging and more forward-looking.
Read RAND's analysis.
MDA's Multiple Kill Vehicle
Check out this incredible hover robot:
It's the product of the Missile Defense Agency and they call it the MKV-L (Multiple Kill Vehicle). The hover bot is meant to be used as a bundle of missile interceptors deployed by a larger carrier. Objectives of this particular test included having the MKV-L hover under its own power and prove its capability to recognize and track a surrogate target in a flight environment.
Terminator style hunter-killer, here we come.
Sandberg: It's safer to be smart
Neuroscientist Anders Sandberg chimes in on the cognitive enhancement debate and notes that intelligence has its privileges:
A fun reference I found last week is IQ in early adulthood and later risk of death by homicide: cohort study of 1 million men, which demonstrates that among Swedish males having intelligence above average reduces the risk of being murdered to 27% of the risk among the lower 11%. Why this is so is a bit unclear, but clearly intelligence is health promoting. It reduces injuries and bad driving too.
Just as an aside, high IQ has also been linked to vegetarianism.
Zuckerman's Society Without God
This book was released in October: Society without God: What the Least Religious Nations Can Tell Us About Contentment by Phil Zuckerman.
From Publishers Weekly:
Sociologist Zuckerman spent a year in Scandinavia seeking to understand how Denmark and Sweden became probably the least religious countries in the world, and possibly in the history of the world. While many people, especially Christian conservatives, argue that godless societies devolve into lawlessness and immorality, Denmark and Sweden enjoy strong economies, low crime rates, high standards of living and social equality. Zuckerman interviewed 150 Danes and Swedes, and extended transcripts from some of those interviews provide the book's most interesting and revealing moments. What emerges is a portrait of a people unconcerned and even incurious about questions of faith, God and life's meaning. Zuckerman ventures to answer why Scandinavians remain irreligious—e.g., the religious monopoly of state-subsidized churches, the preponderance of working women and the security of a stable society—but academics may find this discussion a tad thin. Zuckerman also fails to answer the question of contentment his subtitle speaks to. Still, for those interested in the burgeoning field of secular studies—or for those curious about a world much different from the devout U.S.—this book will offer some compelling reading.
From Sam Harris:
Most Americans are convinced that faith in God is the foundation of civil society. Society without God reveals this to be nothing more than a well-subscribed, and strangely American, delusion. Even atheists living in the United States will be astonished to discover how unencumbered by religion most Danes and Swedes currently are. This glimpse of an alternate, secular reality is at once humbling and profoundly inspiring - and it comes not a moment too soon.
More reviews can be found here and here.
Friday, December 12, 2008
Revisiting The Day the Earth Stood Still - A SentDev Classic
This review was originally published on January 29, 2007.
I sat down with my son recently to watch an old sci-fi classic, The Day the Earth Stood Still. This film is drenched in the 1950's weltanschauung, but it has truly withstood the test of time. I was amazed at how relevant this movie remains to this day nearly 60 years after its release.
Our current global situation is not too far removed from the realities of the 1950s. We continue to struggle for rational discourse and peace. The revealing sciences are yet again offering a glimpse into a future filled with great humanitarian possibilities. We remain wary of apocalyptic threats and the disturbing potential for a new set of extinction risks.
And not surprisingly, our messianic cravings still linger, whether they be for extraterrestrial salvation or the onset of a benign artificial superintelligence. The Day the Earth Stood Still is a wish-fulfillment movie if there ever was one.
Historical context
The 1950s were not a great time for the United States. Nerves were on edge as there seemed to be no end to international tensions and the madness of war. The Cold War had emerged and the stakes were never higher. The world had completely lost its innocence and was now living on borrowed time; the means for apocalyptic destruction were in hand.
With all this desperation and fear in the air, Hollywood was eager to oblige the collective consciousness. Audiences flocked to theaters for one of two reasons: to escape or to confront their fears head-on. A sampling of these films included "The Greatest Show on Earth" (1952), "An American in Paris" (1951), "The War of the Worlds" (1953) and "Invasion of the Body Snatchers" (1956).
But desperate times call for desperate hopes. Hollywood was also anxious to moralize and offer some optimism -- even if it was far-fetched optimism. Religion took a heavy blow after World War II, and many lost faith in a God who was apparently nowhere to be seen and didn't seem to care. If God wouldn't intervene in human affairs, then perhaps Hollywood could; the masses started to seek a different kind of deus ex machina.
Fantasy films in particular offered some interesting possibilities. Comic superheroes like Superman, Captain America and Batman would always come to the rescue. The Bat-Signal was proven to be more reliable than prayer.
Additionally, the newfound enthusiasm for science during the 1950s sparked an interest in science fiction. Combined with growing hopes for rocket ships and fears of alien invasion, these sentiments resulted in one of the greatest science fiction movies of all time, The Day the Earth Stood Still (1951) (hereafter referred to as TDTESS).
Substituting fear for reason
The story is exceedingly simple, yet provocative and poignant.
In the film, an extraterrestrial named Klaatu (played by Michael Rennie) arrives in Washington D.C. with an important message for the people of Earth. He insists that all national leaders be present for his address, but given the geopolitical stresses of the time such an arrangement is not possible. Frustrated, Klaatu approaches the scientific community who he believes will listen to reason. In the end, with a number of prominent scientists present, he offers humanity an ultimatum: Earth can either decide to abandon warfare and join other advanced nations -- a peace ensured by a massive deterrent force, the robot race Gort -- or else be considered a threat and subsequently destroyed.
Quite understandably, common sentiments during this era were characterized by pessimism and collective self-loathing. The rise and fall of the Nazi regime and the onset of the Cold War painted a very grim picture of humanity and its capacity for horror. This is the world that Klaatu found himself in, and we, the viewer, see it through his eyes; it is through an outsider's observations that we gain perspective.
Klaatu's unexpected arrival causes great fear in Washington. Not thirty seconds after he steps out of his ship, he is shot when his gift is mistaken for a weapon -- a precarious start to his mission and a sign of things to come. After his recovery in the hospital, Klaatu escapes in hopes of exploring the city. Residents become paranoid and are on the verge of hysteria. "I am fearful," says Klaatu, "when I see people substituting fear for reason." Earlier, during his meeting with the president's aide, he noted, "I'm impatient with stupidity. My people have learned to live without it."
Science, not faith
Unable to meet with political leaders, Klaatu seeks out a leading American scientist, Professor Barnhardt. This in itself is very telling -- a suggestion that political leaders are far too myopic and stubborn, detached from reality and mired in their petty squabbles. The world has started to look to a new kind of leadership -- a leadership of reason and intelligence. It is no coincidence that Barnhardt is made to look like Albert Einstein.
The shift to science also reflects the turning away from religion. "It isn't faith that makes good science," says Barnhardt, "it's curiosity." Barnhardt's words remind me of our current sociocultural reality where science and religion continue to clash. The resurgence of religion around the world has been met with much criticism, most notably by such outspoken scientists as Richard Dawkins and Daniel Dennett.
Somewhat surprisingly, the film lauds the benefits of science and technology a mere 6 years after Hiroshima and Nagasaki. In this sense, TDTESS can be interpreted as a film that does not buy into defeatism, instead suggesting that while science and technology may cause a lot of problems, it may also offer potential solutions.
Klaatu's technology is certainly amazing. His ship can travel 4,000 miles an hour, he has a cream that can heal gunshot wounds overnight, and incredible medical technology that seemingly brings a dead person back to life. As one medical physician noted, "He was very nice about it, but he made me feel like a third-class witch doctor."
The quest for security
In addition to these advanced technologies, Klaatu also brings with him incredible destructive force. In an awesome display of power, he shuts down all the electricity on Earth for half-an-hour. And of course, he has Gort -- the intimidating robotic presence who patiently lurks in the background.
Gort is the stick with which Klaatu can enforce his ultimatum. "There's no limit to what he could do," he says, "He could destroy the Earth." Klaatu stresses the importance of law and the need to enforce it. "There must be security for all, or no one is secure. This does not mean giving up any freedom, except the freedom to act irresponsibly."
Klaatu's plea for world security on film acts as a call for international co-operation in the real world. A number of observers of the day, Einstein included, believed that the advent of nuclear weapons necessitated the creation of more powerful global bodies and even world federalism. Today, with the threat of bioterrorism, ongoing nuclear proliferation, and the future potential for nanotech catastrophes, the call for increased global co-operation can once again be heard.
Driven by the rational desire for self-preservation, Klaatu's society has given the robots police-like powers. "In matters of aggression, we have given them absolute power over us. This power cannot be revoked," says Klaatu, "At the first signs of violence, they act automatically against the aggressor. The penalty for provoking their action is too terrible to risk." Klaatu denies that his people have achieved any kind of perfection, but instead the attainment of a system that works. "Your choice is simple," he says, "Join us and live in peace, or pursue your present course and face obliteration. We shall be waiting for your answer. The decision rests with you."
Interestingly, Gort's power is analogous to the nuclear bomb itself -- they are both ultimate deterrents. The implication brings to mind the so-called policy of Mutually Assured Destruction (MAD). To engage in nuclear war or to set off the Gort robots would be one and the same: a suicidal gesture. Did TDTESS suggest that the means to peace was already in hand?
The messianic urge
As Klaatu and Gort fly away in their spaceship, the viewer cannot help but feel that their stern message was akin to an admonition from God. Indeed, the theological overtones in TDTESS are undeniable. Klaatu, when hiding among humans, goes by the name Carpenter, an obvious reference to Jesus. He is the messiah who has come down from the heavens to impart his message and save the people of Earth.
In recent times this theme has been taken quite literally by a number of religious groups and cults, most notably the Raelian sect. Similarly, the craving for messianic guidance is being re-applied to a different source, namely artificial superintelligence. The rise of Singularitarianism is an overt plea for advanced intervention, the suggestion that humanity is not capable of saving itself and that it requires a higher, albeit non-divine, power.
An archetypal story
The Day the Earth Stood Still is a story for the ages. Along with its famous phrase, "Klaatu barada nikto," it has made an indelible mark in popular culture. At a deeper level it is a reflection of how societies deal with desperation, fear and hopelessness. It is an eye-opening snapshot into human nature and the different ways in which people react to stress and an uncertain future.
In this sense it is truly an archetypal story -- one that I'm sure will continue to be relevant in the years and decades to come.
Thursday, December 11, 2008
FP: LHC doom among 10 worst predictions of 2008
Foreign Policy has posted their list of the 10 worst predictions of 2008. Among them was the claim by Walter Wagner and others that the Large Hadron Collider would destroy the Earth:
“There is a real possibility of creating destructive theoretical anomalies such as miniature black holes, strangelets and deSitter space transitions. These events have the potential to fundamentally alter matter and destroy our planet.” —Walter Wagner, LHCDefense.org
Scientist Walter Wagner, the driving force behind Citizens Against the Large Hadron Collider (LHC), is making his bid to be the 21st century’s version of Chicken Little for his opposition to the world’s largest particle accelerator. Warning that the experiment might end humanity as we know it, he filed a lawsuit in Hawaii’s U.S. District Court against the European Organization for Nuclear Research (CERN), which built the LHC, demanding that researchers not turn the machine on until it was proved safe. The LHC was turned on in September, and it appears that we are still here.
Ah, yes -- tee hee, it "appears that we're still here." And such is the peril of predicting human extinction: no one will ever be there after the fact to pat you on the back and say, "Dude, you totally called that one." So-called Chicken Littles will always look silly by virtue of the fact that a post-doom state cannot be observed; we can only reflect and marvel at our ongoing survival -- no matter how improbable.
Now, I'm not suggesting that the LHC was or is a legitimate threat. I just want to make the point that we need to be careful about chastising those who warn of such dangers. Existential risks are real risks. And more are on the way.
As for Walter Wagner's prediction, for all we know he was absolutely right, and we are now branching out into a freakishly unlikely and implausible set of all possible Many Worlds. In other words, doom may have been more likely than not -- we just happen to be observing the small set of surviving universes.
Again, I think this highly unlikely; I'm just sayin'.
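If the anthropic reasoning seems slippery, a toy simulation makes the observer-selection point concrete. This is just an illustration of the survivorship argument -- it says nothing about actual LHC physics, and the doom probabilities are pulled out of thin air:

```python
import random

def poll_the_survivors(doom_probability: float, worlds: int = 100_000) -> None:
    """Simulate many 'worlds'; only the surviving ones are around to report anything."""
    survivors = sum(1 for _ in range(worlds) if random.random() >= doom_probability)
    if survivors == 0:
        print(f"doom probability {doom_probability:.3f}: nobody is left to report")
    else:
        # Every observer who can be polled observes survival, whatever the odds were.
        print(f"doom probability {doom_probability:.3f}: {survivors:,} of {worlds:,} "
              f"worlds report in, and every one of them says 'we're still here'")

if __name__ == "__main__":
    for p in (0.1, 0.9, 0.999):
        poll_the_survivors(p)
```

Whatever value you plug in for the doom probability, the surviving observers' report is the same -- which is exactly why "we're still here" tells us so little about how risky the gamble actually was.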