Wednesday, April 29, 2009

Guest blogger David Pearce answers your questions (part 2)

David Pearce is guest blogging this week.

Here are two more replies in response to questions about my Abolitionist Project article from earlier this week.

Carl makes an important point: "Why think that affective gradients are necessary for motivation at all? Consider minds that operate with formal utility functions instead of reinforcement learning. Humans are often directly motivated to act independently of pleasure and pain."

Imagine if we could find a functionally adequate substitute for the signaling role of negative affect - a bland term that hides a multitude of horrors - and replace its nastiness with formal utility functions. Why must organic robots like us experience the awful textures of physical pain, depression and malaise, while our silicon robots function well without them? True, most people regard life's heartaches as a price worth paying for life's joys. We wouldn't want to become zombies.

But what if it were feasible to "zombify" the nasty side of life completely while amplifying all the good bits - perhaps so we become "cyborg buddhas"?

More radically, if the signaling role of affect proves dispensable altogether, it might be feasible computationally to offload everything mundane onto smart prostheses - and instead enjoy sublime states of bliss every moment of our lives, without any hedonic dips at all. I say more on this theme in my reply to "Wouldn't a permanent maximum of bliss be better?" I need scarcely add this is pure speculation.
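For readers who would like Carl's contrast made concrete, here is a minimal toy sketch in Python of an agent motivated by an explicit ("formal") utility function rather than by hedonic reward signals. The states, actions and utility values are invented purely for illustration; nothing here is meant as a serious model of mind.

```python
# Toy sketch of Carl's point: an agent can be "motivated" by an explicit
# utility function, with no reinforcement-style pleasure/pain signal
# modelled anywhere. States, actions and numbers are illustrative only.

def utility(state):
    """Explicit utility function the agent consults directly."""
    return {"damaged": -10.0, "idle": 0.0, "charging": 5.0}[state]

def transition(state, action):
    """Toy world dynamics (purely illustrative)."""
    outcomes = {
        ("idle", "touch_hazard"): "damaged",
        ("idle", "seek_charger"): "charging",
        ("idle", "wait"): "idle",
    }
    return outcomes.get((state, action), state)

def choose_action(state, actions):
    # The agent avoids the hazard simply because the hazardous outcome
    # scores lower - no aversive experience is represented at all.
    return max(actions, key=lambda a: utility(transition(state, a)))

if __name__ == "__main__":
    actions = ["touch_hazard", "seek_charger", "wait"]
    print(choose_action("idle", actions))  # -> "seek_charger"
```

The point of the sketch is only that the signaling role of negative affect is, in principle, functionally separable from its nasty phenomenal texture: the agent above steers away from damage without anything that hurts.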

Leafy asks me to comment on an "animal welfare state, and [...] how your views about the treatment of nonhuman animals (e.g., that animals need care and protection, not liberation, and when animal use or domination might be morally acceptable) differ from those of people such as Singer and Francione".

First, let's deal with an obvious question. Millions of human infants die needlessly and prematurely in the Third World each year. Shouldn't we devote all our energies to helping members of our own species first? To the extent humans suffer more than non-humans, I'd answer: yes - though rationalists should take extraordinary pains to guard against anthropocentric bias. Critically, there is no evidence that domestic, farm or wild mammals are any less sentient than human infants and toddlers. Absent such evidence, we should weigh their well-being impartially. A critic will respond here that human infants have moral priority because they have the potential to become full-grown adults - with the moral status we claim for ourselves. But we wouldn't judge a toddler with a terminal disease who will never grow up to deserve any less love and care than a healthy youngster. Likewise, the fact that a dog or a chimpanzee or a pig will never surpass the intellectual accomplishments of a three-year-old child is no reason to let them suffer more. Thus I think it's admirable that we spend a hundred thousand dollars trying to save the life of a 23-week-old, extremely premature baby; but it's incongruous that we butcher and eat billions of more sophisticated sentient beings each year. Actually, IMO words can't adequately convey the horror of what we're doing in factory farms and slaughterhouses. Self-protectively, I try to shut it out most of the time. After all, my intuitions reassure me, they're only animals; what's going on right now can't really be as bad as I believe it to be. Yet I'm also uncomfortably aware that this is moral and intellectual cowardice.

Is a comprehensive welfare system for non-human animals technically feasible? Yes. The implications of an exponential growth of computing power for the biosphere are exceedingly counterintuitive. See, for example, The Singularity Institute or Ray Kurzweil - though I'm slightly more cautious about timescales. In any event, by the end of the century we should have the computational resources to micromanage an entire planetary ecosystem. Whether we will actually use those computational resources systematically to promote the well-being of all sentient life within that kind of timeframe is another matter; presumably it's unlikely. However, we already - and without the benefit of quantum supercomputers - humanely employ, for example, depot-contraception rather than culling to control the population numbers of elephants in some overcrowded African national parks. Admittedly, ecosystem redesign is only in its infancy; and we've barely begun to use genetic engineering, let alone genomic rewrites. But if our value system so dictates, we could use nanobots to go to the furthest ends of the Earth and the deep oceans and eradicate the molecular signature of unpleasant experience wherever it is found. Likewise, we could do the same to the genetic code that spawns it. In any case, for better or worse, by mid-century large terrestrial mammals are unlikely to survive outside our "wildlife" reserves, simply by virtue of habitat destruction. How much suffering we permit in these reserves is up to us.
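As a back-of-envelope illustration of why exponential growth is so counterintuitive: assuming - my assumption, not a figure from this post - that usable computing power doubles every 18 months, the growth factor between now and 2100 is astronomical.

```python
# Toy arithmetic only: the 18-month doubling period is a Moore's-law-style
# assumption of mine, not a claim made in the post.
doubling_period_years = 1.5
years_remaining = 2100 - 2009          # from the date of this post
doublings = years_remaining / doubling_period_years
print(f"doublings by 2100: {doublings:.1f}")       # ~60.7
print(f"growth factor:     {2 ** doublings:.1e}")  # ~1.8e+18
```

Even if the true doubling period turns out to be two or three times longer, the end-of-century factor is still many orders of magnitude - which is the force of the claim about micromanaging an entire ecosystem.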

Gary Francione and Peter Singer? Despite their different perspectives, I admire them both. As an ethical utilitarian rather than a rights theorist, I'm probably closer to Peter Singer. But IMO a utilitarian ethic dictates that factory-farmed animals don't just need "liberating", they need to be cared for. Non-human animals in the wild simply aren't smart enough to look after themselves adequately in times of drought or famine or pestilence, for instance, any more than are human toddlers and infants, and any more than were adult members of Homo sapiens before the advent of modern scientific medicine, general anaesthesia and painkilling drugs. [Actually, until humanity conquers ageing and masters the technologies needed reliably to modulate mood and emotion, even our control over human suffering will remain woefully incomplete.]

At the risk of over-generalising, we have double standards: an implicit notion of "natural" versus "unnatural" suffering. One form of suffering is intuitively morally acceptable, albeit tragic; the other is intuitively morally wrong. Thus we reckon someone who lets their pet dog starve to death or die of thirst should be prosecuted for animal cruelty. But an equal intensity of suffering is re-enacted in Mother Nature every day on an epic scale. It's not (yet) anybody's "fault." But as our control over Nature increases, so does our complicity in the suffering of Darwinian life "red in tooth and claw". So IMO we will shortly be ethically obliged to "interfere" - that is, to intervene - and prevent that suffering, just as we now intervene to protect the weak, the sick and the vulnerable in human society.

But here comes the real psychological stumbling-block. One of the more counterintuitive implications of applying a compassionate utilitarian ethic in an era of biotechnology is our obligation to reprogram and/or phase out predators. In the future, I think a lot of thoughtful people will be relaxed about phasing out/reprogramming, say, snakes or sharks. But over the years, I've received a fair bit of hate-mail from cat-lovers who think that I want to kill their adorable pets. Naturally, I don't: I'd just like to see members of the cat family reprogrammed (or perhaps "uplifted") so they don't cause suffering to their prey. As it happens, I've only once witnessed a cat "playing" with a tormented mouse. It was quite horrific. Needless to say, the cat was no more morally culpable than a teenager playing violent videogames, despite the suffering it was inflicting. But I've not been able to enjoy watching a Tom and Jerry cartoon since. Of course the cat's victim was only a mouse. Its pain and terror were probably no worse than mine the last time I caught my fingers in the door. But IMO a sufficiently Godlike superintelligence won't tolerate even a pinprick's worth of pain in post-human paradise. And (demi)gods, at least, is what I predict we're going to become...

David Pearce
dave@hedweb.com
http://www.hedweb.com/
