David Pearce is guest blogging this week
First, many thanks to George for inviting me to blog on Sentient Developments. I asked George what I should blog about, and he suggested I might start with The Hedonistic Imperative. But this topic might be more interesting to readers of Sentient Developments if I respond to critical questions or take up themes readers feel I've unjustly neglected - if so, please let me know.
Briefly, some background. In 1995 I wrote an online manifesto which advocates the use of biotechnology to abolish suffering in all sentient life. The Hedonistic Imperative predicts that the world's last unpleasant experience will be a precisely dateable event in the next thousand years or so - probably a "minor" pain in some obscure marine invertebrate. More speculatively, HI predicts that our descendants will be animated by genetically preprogrammed gradients of intelligent bliss - modes of well-being orders of magnitude richer than today's peak experiences.
I write from the perspective of what is uninspiringly known as negative utilitarianism, i.e. I'd argue that we have an overriding moral responsibility to abolish suffering. If my background had been a bit different, I'd probably just call myself a scientifically-minded Buddhist. True, Gautama Buddha didn't speak about biotechnology; but to Buddhists (and Jains) talk of engineering the well-being of all sentient life is less likely to invite an incredulous stare than it is in the West.
I should also add that credit for the first published scientifically literate blueprint for a world without suffering belongs IMO to Lewis Mancini. See "Riley-Day Syndrome, Brain Stimulation and the Genetic Engineering of a World Without Pain", Medical Hypotheses (1990) 31, 201-207. As far as I can tell, Mancini's original paper sank with barely a trace. However, it now lives online where it belongs; I've uploaded the text here: http://www.wireheading.com/painless.html.
[I confess my jaw dropped a couple of years ago when I stumbled across it.]
HI was originally written for an audience of analytic philosophers. The Abolitionist Project (2007) http://www.abolitionist.com/ and Superhappiness (2008) http://www.superhappiness.com/ are (I hope) more readable and up-to-date. I won't now go into the technical reasons for believing we can use biotech, robotics and nanotechnology to eradicate the molecular substrates of suffering and malaise from the biosphere. Given the exponential growth of computing power and biotechnology, the abolitionist project could in theory be completed in two or three centuries or less - though for sociological reasons so short a timescale is unlikely. So why should anyone think it's ever going to happen? All sorts of stuff is technically feasible in principle; but a lot of so-called futurology is just a mixture of disguised autobiography and wish-fulfillment fantasy. Is this any different?
Quite possibly not; but here are two reasons for guarded optimism.
Futurists spend a lot of time discussing the possibility of posthuman superintelligence. Whatever else superintelligence may be, we implicitly assume that it must at least be weakly related to what IQ tests measure - just completely off the scale. However, IQ tests ignore one important and extraordinarily cognitively demanding skill that non-autistic humans possess. At least part of what drove the evolution of our uniquely human intelligence was our superior "mind-reading" skills and enhanced capacity for empathetic understanding of other intentional systems. This capacity is biased, selective, and deeply flawed; but I'd argue its extension and enrichment are going to play a critical role in the development of intelligent life in the universe. By contrast, conventional IQ tests are "mind-blind"; they simply ignore social cognition. I expect our posthuman descendants will have a vastly richer capacity to understand the perspective of "what it is like to be" other sentient beings; and this recursively self-improving empathetic capacity will be a vital ingredient of mature superintelligence and posthuman ethics. Of course "super-empathy" doesn't by itself guarantee a utopian outcome. And I'm personally sceptical that digital computers with a classical von Neumann architecture will ever be sentient, let alone superintelligent. But a future (hypothetical) superhuman capacity for empathetic understanding does, I think, make a universal compassion for all sentient beings more likely.
Viewing the way we currently treat other sentient beings as a cognitive and not just a moral limitation is of course controversial. So secondly, let's fall back on a more cynical and conservative assumption. Assume, pessimistically, that what Bentham says of humans will be true of posthumans too: "Dream not that men will move their little finger to serve you, unless their advantage in so doing be obvious to them. Men never did so, and never will, while human nature is made of its present materials." Does this bleak analysis of (post)human nature rule out a world that supports the well-being of all sentience?
No, I don't think so. If it's broadly correct, what this limitation does mean is that morally serious actors today should strive to develop advanced technology that makes the expression of (weak) benevolence towards other sentient beings trivially easy - so easy that its expression involves less effort on the part of the morally apathetic than raising one's little finger. For example, whereas one way to combat the cruelty of factory farming is to use moral arguments to promote its abolition - as, in their very different ways, do PETA and Peter Singer - the other, complementary strategy is to promote technologies that will allow "us" all to lead a cruelty-free lifestyle at no personal cost. See, for instance, the nonprofit research organization New Harvest, which advances meat substitutes: http://www.new-harvest.org/.
Thirty years hence, if meat-eaters are presented with two equally tasty products - one "natural", from an intensively-reared factory-farmed animal that's been butchered for its flesh as now, the other labelled "cruelty-free" in the form of attractively branded vatfood - how many consumers are deliberately going to choose the cruel option if it doesn't taste better? I'm aware that this kind of optimism can sound naive. Yes, we can all be selfish; but I think relatively few people are malicious, and still fewer people are consistently malicious. So long as the slightest personal inconvenience to members of the master species can be avoided, I think we can extend the parallel of cruelty-free cultured meat to the eradication of suffering throughout the living world: ecosystem redesign, depot-contraception, rewriting the vertebrate genome, the lot. With sufficiently advanced technology, the creation of a living world without cruelty needn't be effortful or burdensome to the morally indifferent. Technology can make what is today impossibly difficult soon merely challenging, then relatively easy, and eventually trivial. And of course a lot of people do aspire to be more than merely weakly benevolent. Maybe we're "really" just signalling to potential mates our desirability as nurturing fathers [or whatever story evolutionary psychology tells us explains our altruistic desires]. But what matters is not our motivation or its ultimate cause, but the outcome.
A cruelty-free world is one thing; but many of us feel ambivalent about extreme happiness, let alone lifelong superhappiness of the kind promised by utopian neurobiology. One reason we may feel ambivalent is that we contemplate, for instance, the selfishness and drug-addled wits of the heroin addict; or the crazed lever-pressing of the rodent wirehead; or the impaired judgement of the euphorically manic. Intellectuals especially may resist the prospect of superhappiness, fearing that their intellectual acuity would be compromised. Beyond a certain point, must there be some kind of tradeoff between hedonic tone and intellectual performance?
Not necessarily. Here is just one way in which reprogramming our reward circuitry could actually serve as a tool for intelligence-amplification and cognitive enhancement. Recall Edison's much-quoted dictum: "Genius is one percent inspiration and ninety-nine percent perspiration." The relative percentages are disputable; but the contribution of sheer hard work and intellectual focus to productivity isn't in doubt. Now if you're a student, an academic or an intellectual, imagine if you could selectively amplify the subjective reward you derive from all and only the cerebral activities that you think you ought to enjoy doing most; and conversely, imagine if you could diminish or switch off altogether the reward from life's baser pleasures. What might you achieve intellectually if you could reprogram your reward circuitry so as to work at your highest aspirations for 14 hours a day? By way of contrast, using the Internet offers an uncomfortable insight into what one is really interested in. [Sadly, I lose track of the endless hours I've wasted online viewing complete fluff. I tell myself that I'm soon going to enjoy writing a 500-page scholarly tome, The Abolitionist Project. Alas, in practice it's more fun surfing the Net for trivia.] In any event, IMO the enemy of intelligence isn't bliss but indiscriminate, uniform bliss; and in the future I think superhappiness and superintelligence can be fused - seamlessly or otherwise.
Are there pitfalls here? Yes, lots. But they are technical problems with technical solutions.
Here's another example. One reason we may be ambivalent about extreme happiness is that we see how it can make people antisocial. One thinks of the heroin addict who neglects his family for the sake of his opioid habit. But what if safe, sustainable designer drugs or gene therapies were available that conferred an unlimited capacity for altruistic pleasure? It has only recently been discovered that the empathogenic hugdrug MDMA (Ecstasy) http://www.mdma.net/ triggers copious release of the "trust hormone" oxytocin: oxytocin seems to be the missing jigsaw piece in explaining MDMA's unique spectrum of action. So to take one scenario, what if mass oxytocin-therapy enabled us to be chronically kind, trusting and empathetic towards each other - the very opposite of the "selfish hedonism" of popular stereotype?
Moreover, this option isn't just a matter of personal lifestyle choice; I think the implications are more far-reaching. Thoughtful researchers are increasingly concerned about existential and global catastrophic risks in an era of biowarfare, nanotechnology and weapons of mass destruction. Britain's Astronomer Royal, Sir Martin Rees, puts the odds of human extinction this century at 50%. I suspect this figure is too high, but clearly the risk is not negligible. Anyhow, arguably the greatest underlying source of existential and global catastrophic risk lies in the Y chromosome: testosterone-driven males are responsible for the overwhelming bulk of the world's wars, aggression and reckless behaviour. Decommissioning the Y chromosome isn't currently an option; but the potential civilizing influence of pro-social drugs and gene therapies on dominant alpha males shouldn't be lightly dismissed as a risk-reduction strategy. In general, a world where intelligent agents are happier, more trusting and more trustworthy is potentially a much safer world - and a much more civilised one too.
Are there pitfalls to modifying human nature? Again yes, lots. But there are also profound risks in retaining the biological status quo.
David Pearce
dave@hedweb.com
http://www.hedweb.com/