Showing posts with label uploading. Show all posts

Tuesday, June 22, 2010
Kyle Munkittrick: From Gears to Genes: A Sea Change in Transhumanism

Kyle Munkittrick has penned a nice little rejoinder to Mark Gubrud's suggestion that transhumanism won’t work because mind uploading is impossible:

Only in the past decade have we started to realize that transhumanism won’t realize its dreams through mechanization and computerization. Though seminal authors on transhumanism, like Kurzweil, Moravec, Drexler, and More, focus on nanotechnology and cybernetics, those technologies haven’t seen real progress since the 1970s.
But genetics and biotech have. Starting in the 1950s with the Pill, vaccines, and antibiotics, our knowledge of medicine and biology radically improved throughout the second half of the twentieth century -- assisted reproduction technologies like IVF, genomic sequencing, stem cell research, organ transplantation, and neural mapping. Advances in biology and medicine are what are driving the transhumanist revolution. When someone like Mark Gubrud starts arguing transhumanism won’t work because we can’t upload our minds into robot bodies, one has to gawk for a moment in awe at the irrelevance of the argument. It’s like arguing we can’t ever cure cancer because cold fusion is impossible.
Transhumanism is the idea of guiding and improving human evolution with intention through the use of technologies and culture. If those technologies are not robotic and cybernetic but, instead, genetic and organic, then so be it. And that seems to be the way things are going.
Totally agree. I've also argued that uploading may not be possible, but that it's not a deal-breaker in our quest to live 'outside' our bodies.
Wednesday, February 4, 2009
Protopanpsychism and the consciousness conundrum, or why we shouldn't assume uploads - A SentDev Classic

Compounding the problem is the widespread tendency to use the terms intelligence and consciousness interchangeably. While related, these terms describe two very different phenomena. My calculator and computer are examples of intelligence; my ability to use language and deductive reasoning to help me write this article is another. But my ability to subjectively experience the phenomenon known as 'sweetness,' to sense the colour red, or to feel the passage of time -- these are endowments brought about by my conscious awareness.
There are arguably two major philosophical approaches to the issue of consciousness -- they are 'philosophical' because we still don't possess the requisite scientific vernacular to address its true underpinnings. These proto-scientific approaches are known as dualism and emergence.
The first and most traditional argument is the idea of vitalism or dualism. This perspective suggests that the essence of consciousness lies outside the brain, perhaps as some ethereal soul or spirit. Consequently, its proponents suggest that consciousness lies outside knowable science.
Cartesian dualism fits within this category of thinking – the notion that the only thing that can truly be known is the presence of personal subjectivity and that everything external to that may be a fabrication or hallucination (see Descartes’s Meditations on First Philosophy and his ‘Evil Demon’ argument). While existentially interesting (or is that disturbing?), Descartes’s argument violates Popperian notions of testability and smacks of Gnosticism and radical skepticism (these are fascinating topics in their own right that lie outside the scope of this discussion, and can include such conundrums as the brain-in-the-vat and simulation problems).
The other broad approach to the issue of consciousness is emergence theory, the idea that self-awareness and qualia can arise from complex computational dynamics in the brain. The critical assumption here is that mind’s architecture is largely computational, but that consciousness emerges through the concert of myriad neuronal interactions. In this sense, consciousness is an epiphenomenon or metaphenomenon of the brain's machinations.
This approach to cognition is clearly essential, but it is not sufficient. Indeed, the mind almost certainly utilizes its computational or functionalist aspects, most of which go completely unnoticed by the conscious agent at the top of the processing hierarchy. Today’s computers, which have inspired comparisons to the brain, crunch numbers but are in no way self-reflexive about their work; consequently, they can partly account for human intelligence, but make relatively poor models as approximations or metaphors for consciousness engines.
At the same time, it almost seems like a cop-out to suggest that increased complexity in such systems will result in consciousness, which is, qualitatively speaking, a horse of a different colour.
Now, I wouldn’t want to dismiss emergence theory outright. There’s something very satisfying about this idea, particularly considering how it might have come about through natural selection. It may very well turn out that emergence does in fact account for consciousness.
Dualism and emergence aside, there is a third, albeit controversial, perspective that should be considered: panprotopsychism. This is the notion that essential features or precursors of consciousness are fundamental components of reality which are accessed by brain processes. In philosophy, panpsychism is the view that all parts of matter involve mind. Neuroscientist Stuart Hameroff, a proponent of this view, argues that consciousness is related to a fundamental, irreducible component of physical reality, akin to phenomena like mass, spin or charge. According to this view, the basis of consciousness can be found in an additional fundamental force of nature not unlike gravity or electromagnetism -- something like an elementary (self-)sentience or awareness. As Hameroff notes, "these components just are."
Panpsychism has a long and varied history. Back in the time of the Greeks, philosophers like Democritus contended that a basic and fundamental form of consciousness was a quality of all matter – a position bound up with their 'atomism.' Later, Baruch Spinoza would argue along similar lines -- that atoms and their subatomic components have subjective, mental attributes.

Bertrand Russell put forth the idea of "neutral monism," which described a common underlying entity, neither physical nor mental, that gave rise to both. Bishop Berkeley suggested that consciousness creates reality and that consciousness is "all there is." Berkeley's famous dictum was "Esse est percipi" ("To be is to be perceived").
Theoretical physicist John A. Wheeler has suggested that information is fundamental to the physics of the universe, and David Chalmers has proposed a double-aspect theory in which information has both physical and experiential aspects.
While these ideas vary, they all explore the interplay between what we regard as reality and consciousness. Alfred North Whitehead in particular saw the universe as composed not of 'things' but of 'events.' In this sense reality is a kind of process in which consciousness emerges from temporal chains of occasions.
If this sounds somewhat analogous to what quantum mechanics tells us, you’re not far off the mark. A number of thinkers have picked up on Whitehead’s idea as it relates to quantum physics, including Abner Shimony and Roger Penrose. This has led to the development of what is known as quantum consciousness theory, which postulates that consciousness is indelibly tied to quantum processes – that the brain is essentially a quantum computer used by an observer to “decohere” quantum superpositions. Penrose and Stuart Hameroff have constructed a theory in which human consciousness is the result of quantum gravity effects in microtubules.
Penrose’s ideas have been met with much scorn, not least for his assertion that there are non-computational or non-algorithmic aspects to consciousness. This suggestion has led thinkers like Hameroff and Penrose to conclude that mature AI as it is typically presumed (i.e. AI that is also endowed with artificial consciousness) is a pipe dream.
If they’re right, however, this poses a significant problem for those who believe that uploading (or mind transfer) awaits humanity in the future -- the view that consciousness is not substrate dependent, and that a fully sapient agent can exist as an uploaded being in a supercomputer. Many transhumanist expectations, from radical life extension to Jupiter Brains, depend on this assumption.
But what if consciousness is in fact substrate specific and can only be experienced in the analog arena? What if there is no digital or algorithmic equivalent to consciousness like Penrose suggests? Having consciousness arise in a classical Von Neumann architecture may be as impossible as splitting the atom in a virtual environment by using ones and zeros.
As possible consolation, however, under the Penrose/Hameroff premise the brain is a quantum computer – a kind of device that, if quantum theorists like David Deutsch have their way, will eventually be built artificially as well. If a quantum computer comprised of biological matter could arise through autonomous evolutionary processes, then I would have to think that intelligences like our own will eventually figure it out. If this is the case, then it may be possible to engineer subjectivity outside of our grey matter. Quantum computers could also be useful for running simulations of quantum mechanics, an idea that goes back to Richard Feynman; he observed that there is no known efficient algorithm for simulating quantum systems on a classical computer and suggested studying the use of a quantum computer for this purpose. One has to wonder if the same logic applies to the potential for quantum computers to run consciousness simulations.
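Feynman's observation can be made concrete with a small sketch. A pure state of n qubits requires 2^n complex amplitudes to represent classically, so the memory cost doubles with every qubit added, while a quantum device needs only the n qubits themselves. The numbers below are a simple illustration of that scaling, not a claim about any particular machine:

```python
# Sketch: why classical machines struggle to simulate quantum systems.
# A dense n-qubit state vector holds 2**n complex amplitudes; memory
# doubles with each added qubit.

def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory for a dense n-qubit state vector (complex128 amplitudes)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")
```

Fifty qubits already demand roughly 16 pebibytes of RAM, which is precisely why Feynman suggested using quantum systems to simulate quantum systems.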
Given the extreme computational power and speed of quantum computers, I can’t even begin to fathom what a conscious agent would do within such an architecture.
All bets are off once a conscious superintelligence starts to engage in selective decoherence.
References:
Stuart Hameroff: "Consciousness, Whitehead and quantum computation in the brain: Panprotopsychism meets the physics of fundamental spacetime geometry"
John Holbo: "Fragments of Parallax"
Wikipedia
This article originally appeared on Sentient Developments on October 25, 2006.
Friday, January 30, 2009
Anissimov on the benefits of mind uploading

Mind uploading, sometimes called whole brain emulation, refers to the hypothetical transfer of a human mind to a substrate different from a biological brain, such as a detailed computer simulation of an individual human brain. Given the (likely) functionalist nature of the human brain, and given steady advances across a number of scientific disciplines, mind transfer may eventually become reality; this is not just idle fantasy.
And as Anissimov notes, even if this technology doesn't arrive for a hundred years, it's still something worth speculating about and working towards; the ramifications would be, quite obviously, profound for the human species.
Indeed, as Anissimov notes, there are at least 7 benefits to mind uploading:
- Massive economic growth
- Intelligence enhancements
- Greater subjective well-being
- Complete environmental recovery
- Escape from direct governance by the laws of physics
- Closer connections with other human beings
- Indefinite lifespans
From a utilitarian perspective, it practically blows everything else away besides global risk mitigation, as the number of new minds leading worthwhile lives that could be created using the technology would be astronomical. The number of digital minds we could create using the matter on Earth alone would likely be over a quadrillion, more than 10,000 people for every star in the 400 billion star Milky Way. We could make a “Galactic Civilization”, right here on Earth in the late 21st or 22nd century. I can scarcely imagine such a thing, but I can imagine that we’ll be guffawing heartily at how unambitious most human goals were in the year 2009.
Read the entire article.
Thursday, August 2, 2007
Martine's mindfiles

In our cybernetic and virtual world of the future, says Rothblatt, genes are not going to matter so much. Instead, we’ll be concerned about ‘bemes' -- a fundamental, transmissible, unit of beingness.
This will give rise to the transbeman person -- a being who claims to have the rights and obligations associated with being human, but is beyond accepted notions of legal personhood. Examples would include a computer claiming to be conscious; a person successfully reanimated from cryonic stasis; or the downloading of a ‘cyberconsciousness’ into a highly engineered ‘bionano’ body.
Operation: Mindfile
Rothblatt, an eccentric billionaire lawyer, author, and entrepreneur, made the case for "Cybernetic Biostasis" during TransVision 2007 and argued that bemes will eventually become the currency of the future – the stuff that will help prospective persons restore their memories and sense of identity. She believes that people should create digital ‘mindfiles’ that chronicle their lives; eventually, after death, persons could be revived by means of ‘mindware’ transfer when the requisite technology is powerful enough (namely the advent of artificial intelligence).
According to Rothblatt, bemes can be virtually anything that could later be used to restore a person’s history, identity and tendencies. Bemetic mindfiles could be comprised of old photos, blogs, transcripts, diaries, and so on; these artifacts could later be used to restore and re-define a person’s personality (including mannerisms, feelings, beliefs, attitudes and values). Most importantly, these files could restore a person's memory.
To this end, Rothblatt has created the websites Cyberev.org (short for cybernetic beingness revival) and Lifenaut.com. People are encouraged to use the sites to start chronicling their lives.
During her TV07 presentation Rothblatt admitted that piecing together odds and sods of data would not create a perfect copy of a person’s consciousness. She contended that most people only remember fragments of their past anyway. To Rothblatt, it’s the preservation of the person’s "essence" that’s important.
Memories are a strange thing
I find Rothblatt’s mindfile concept quite intriguing, but ultimately unsatisfactory. I’m not convinced that a person’s identity and sense of ongoing self can be re-instantiated in this way. At best we might get a twisted copy of ourselves with a haphazard sense of someone else’s past.
Memories are a tricky thing; they don’t exist in a vacuum. First, we have memories because we, as conscious observers, experience the events in real time. Based on the strength and uniqueness of the event our brain parses the experience and temporarily stores it into short term memory. From there it solidifies into our long-term memory where we build an association with the event. This association allows us to recall the event at will. We are able to access the memory because we a) experienced the event first hand, and we b) created a personal linkage to that event (what could also be referred to as a personal narrative).
In other words, you have to know that you have the memory in order to access it.
Sometimes we forget that we have a memory of an event, only to be reminded that it still exists in the brain, just waiting to be accessed. I love it when that happens. My first few thoughts are usually, “Why did I forget about that? Why did I not think about that for so long?” For whatever reason, the association or linkage to that piece of data was lost. The memory was still there, embedded in the mind, but it was simply not accessed enough, causing it to lie dormant.
As for Rothblatt’s concept, just because a mind is infused with memories doesn’t mean that all the associations will be there. The memories would likely be construed as a random mess of images, words and events. It would be unlikely that the person would be able to make any sense of it at all and frame a personal narrative around it.
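The association argument can be sketched as a toy model. This is purely illustrative (no claim to neuroscientific accuracy, and every name in it is invented): a store where recall works only through a cue forged by first-hand experience, so a trace injected from outside is present in the data but unreachable:

```python
# Toy model (illustrative only, not neuroscience): recall requires an
# associative cue; a trace injected without one is present but unreachable.

class AssociativeStore:
    def __init__(self):
        self.traces = []        # raw episodic traces ("the data")
        self.links = {}         # cue -> trace index (the personal narrative)

    def experience(self, cue, event):
        """First-hand experience stores the trace AND forges a link."""
        self.traces.append(event)
        self.links[cue] = len(self.traces) - 1

    def inject(self, event):
        """A mindfile-style transfer: the trace arrives with no link."""
        self.traces.append(event)

    def recall(self, cue):
        idx = self.links.get(cue)
        return None if idx is None else self.traces[idx]

mind = AssociativeStore()
mind.experience("first bike", "learned to ride at age six")
mind.inject("moved to Toronto in 1998")   # uploaded, never lived

print(mind.recall("first bike"))   # accessible: linked by experience
print(mind.recall("Toronto"))      # None: trace exists, but no cue reaches it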
Consciousness, identity, and an ongoing sense of self
Far too many people at the WTA’s TransVision conference batted around the word “consciousness” with complete disregard for definitions and a concrete understanding of what it truly is. Consciousness all too often gets conflated with other aspects of the mind, including memory and other cognitive tasks that comprise the mechanistic or computational aspects of the brain.
Consciousness is not something you can piece together and instantiate with cultural artifacts. Nor can a continuity of consciousness be restored in this manner. How consciousness arises and persists is a question that still perplexes even the best philosophers and neuroscientists.
Here’s a thought experiment: let’s suppose that you traded memories with your best friend – nothing else, just the memories. You’ve still got your body and all the grey matter in your brain that rightfully belongs to you, except your memories. Does this mean that you and your friend have traded consciousnesses? Does it mean that you’ve uploaded yourself into your friend's brain and vice-versa?
The answer is no to both questions! You would still be you in the sense that you’re still observing reality, but you’d be convinced that you are now your friend. A sense of identity (sense being the key word -- a kind of illusion) may have been transferred, but not the conscious lens that each of us has with which we observe and experience the world.
No link to cryonic reanimation
Later, when Alcor’s Tanya Jones was answering questions after her cryonics presentation, a member of the audience asked her if Alcor would consider using the mindfile concept to help in the process of reanimating frozen patients.
Jones answered very clearly: no.
Elaborating, she said that Alcor has considered using mindfiles to help newly revived persons re-connect with their past life. In this sense, the mindfiles would be a glorified shoebox filled with an individual's personal effects.
This makes sense. Assuming that a person’s brain was properly preserved, they should have no trouble accessing their memories. If all goes well, the person should feel like they had a long and hard nap. A very, very hard nap. Their memories, along with the all-important personal narrative, associations and ongoing identity, should be readily accessible.
The mindfile as restorative medicine
Rothblatt’s mindfile concept may have limitations when it comes to uploading or restoring a consciousness, but it is far from useless. It holds real short-term potential as a means of restorative medicine.
Alzheimer’s patients may have their memories re-invigorated and stimulated in the manner that Rothblatt describes. Mindfiles could also be used to improve the human capacity for memory, which can be extraordinarily weak.
Looking ahead, there's also the possibility that mindfiles could be used as a supplement to naturally stored memories. They could be uploaded into the mind and used in tandem with other recollections to add breadth and depth to memory, much like photographs or home videos do today.
So, you may wish to visit Dr. Rothblatt's website after all. Start working on that mindfile!
Tuesday, July 31, 2007
Anders Sandberg wants to emulate your brain

Soooo, how the hell do we do it?
This is the issue that Swedish neuroscientist Anders Sandberg tackled for his talk at TransVision 2007. Uploading, or what Sandberg refers to as ‘whole brain emulation,’ has become a distinct possibility arising from the feasibility of the functionalist paradigm and steady advances in computer science. Sandberg says we need a strategic plan to get going.
Levels of understanding
To start, Sandberg made two points about the kind of understanding that is required. First, we do not need to understand the function of a device to build it from parts, and second, we do not need to understand the function of the brain to emulate it. That said, Sandberg admitted that we still need to understand the brain's lower level functions in order for us to be able to emulate them.
The known unknown
Sandberg also outlined the various levels of necessary detail; we can already start to parse through the “known unknown.” He asked, “what level of description is necessary to capture enough of a particular brain to mimic its function?”
He described several tiers that will require vastly more detail:
• Computational model
• Brain region connectivity
• Analog network population model
• Spiking neural network
• Electrophysiology
• Metabolome
• Proteome
• Etc. (and all the way down to the quantum level)
Sandberg believes that the ability to scan an existing brain will be necessary. What will also be required is the proper scanning resolution. Once we can peer down to the sufficient detail, we should be able to construct a brain model; we will then be required to infer structure and low-level function.
Once this is done we can think about running a brain emulation. Requirements here will include a computational neuroscience model and the requisite computer hardware. Sandberg noted that body and environment simulations may be added to the emulation; the brain emulator, body simulator and environment simulator would be daisy-chained to each other to create the sufficient interactive link. The developers will also have to devise a way to validate their observations and results.
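The daisy-chained architecture Sandberg describes can be sketched in a few lines. All class names, signal names, and dynamics below are hypothetical stand-ins of mine; the point is only the loop structure -- environment stimulates body, body feeds brain, motor output flows back:

```python
# Minimal sketch of the daisy chain (all names and dynamics are invented):
# each tick the environment drives the body's senses, the body feeds the
# brain emulator, and motor commands flow back into the environment.

class BrainEmulator:
    def step(self, sensory):
        # Stand-in for the neural model: emit a trivial motor command.
        return {"motor": sum(sensory.values())}

class BodySimulator:
    def sense(self, world):
        return {"light": world["light"], "touch": world["touch"]}
    def act(self, motor_cmd, world):
        world["touch"] += motor_cmd["motor"] * 0.1  # acting changes the world
        return world

class EnvironmentSimulator:
    def __init__(self):
        self.world = {"light": 1.0, "touch": 0.0}
    def update(self):
        self.world["light"] *= 0.99                 # toy environment dynamics
        return self.world

env, body, brain = EnvironmentSimulator(), BodySimulator(), BrainEmulator()
for _ in range(100):                                # the interactive loop
    world = env.update()
    sensory = body.sense(world)
    motor = brain.step(sensory)
    env.world = body.act(motor, world)
print(round(env.world["light"], 3))
```

However crude, the loop shows why validation is hard: the emulated brain's behaviour can only be judged through the body and environment models it is chained to.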
Neural simulations
Neural simulations are nothing new. Hodgkin and Huxley began working on these sorts of problems way back in 1952. The trick is to perfectly simulate neurons, neuron parts, synapses and chemical pathways. According to Sandberg, we are approaching 1-to-1 simulations of certain systems, including the lamprey spinal cord and lobster ganglia.
Compartment models are also being developed with minuscule time and space resolutions. The current record is 22 million six-compartment neurons, 11 billion synapses, and a simulation length of one second of real time. Sandberg cited advances made possible by IBM’s Blue Gene.
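To give a sense of how tractable single-neuron simulation has become, the original Hodgkin-Huxley model now runs instantly on any laptop. Here is a minimal single-compartment sketch using the standard 1952 squid-axon parameters and simple forward-Euler integration (the spike-counting logic is my own simplification):

```python
import math

# Minimal Hodgkin-Huxley (1952) single-compartment neuron, standard
# squid-axon parameters, forward-Euler integration. Units: mV, ms, uA/cm^2.

C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, t_max=100.0, dt=0.01):
    """Return the number of spikes fired under constant input current."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32       # resting state
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
        I_K  = g_K * n**4 * (V - E_K)         # potassium current
        I_L  = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        if V > 0 and not above:               # upward crossing of 0 mV
            spikes, above = spikes + 1, True
        if V < -20:
            above = False
    return spikes

print(simulate())   # repetitive firing under sustained 10 uA/cm^2 input
```

One neuron is easy; the emulation problem is that a human brain has on the order of a hundred billion of them, densely connected.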
Complications and Exotica
Sandberg also provided a laundry list of possible ‘complications and exotica’:
• dynamical state
• spinal cord
• volume transmission
• glial cells
• synaptic adaptation
• body chemical environment
• neurogenesis
• ephaptic effects
• quantum computation
• analog computation
• randomness
Reverse engineering is all fine and well, suggested Sandberg, but how much function can be deduced from morphology (for example)?
Scanning
In regards to scanning, we'll need to determine the kind of resolution and data needed. Sandberg argued that nondestructive scanning is unlikely; MRI has come closest thus far, but cannot achieve resolutions finer than about 7.7 micrometers. More realistically, destructive scanning will likely be used; Sandberg noted such procedures as fixation and ‘slice and scan.’
Once scanning is complete the postprocessing can begin. Developers at this stage will be left wondering about the nature of the neurons and how they are all connected.
Given advances in computation, Sandberg predicted that whole brain emulation may arrive sometime between 2020 and 2060. As for environment and body simulation, we’ll have to wait until we have 100 teraflops at our disposal. We’ll also need a scanning resolution of 5x5x50 nm to do meaningful work.
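A quick back-of-envelope calculation shows what that 5x5x50 nm figure implies. The brain volume and one-byte-per-voxel assumptions are mine, chosen only to get an order of magnitude:

```python
# Back-of-envelope (assumptions mine): raw data volume of scanning a
# ~1.4-litre human brain at the 5 x 5 x 50 nm voxel resolution above.

BRAIN_VOLUME_M3 = 1.4e-3                  # ~1.4 litres
VOXEL_M3 = 5e-9 * 5e-9 * 50e-9            # one 5 x 5 x 50 nm voxel

voxels = BRAIN_VOLUME_M3 / VOXEL_M3
bytes_total = voxels * 1                  # optimistic: 1 byte per voxel
print(f"voxels:     {voxels:.2e}")        # on the order of 1e21
print(f"zettabytes: {bytes_total / 1e21:.1f}")
```

Roughly a zettabyte of raw data for a single brain, before any postprocessing -- which puts Sandberg's emphasis on scanning and storage drivers in perspective.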
Conclusions
Sandberg made mention of funding and the difficulty of finding scan targets. He named some subfields that lack drivers, namely basic neuroscience, electrophysiology, and (so far) large scale scanning. He did see synergies arising from the ongoing development and industrialization of neuroscience, robotics and the various –omics studies.
As for the order of development, Sandberg suggested either 1) scanning and/or simulation, then 2) computer power, and then 3) the gradual emergence of emulation; or, alternatively, 1) computer power first, then 2) simulation, then 3) scanning, followed by 4) the rapid emergence of emulation.
Any volunteers for slice and scan?
Thursday, February 8, 2007
The perils of a digital life

Virtual reality environments and MMORPGs are giving us the first clue that this may be a problem. Take Second Life, for example, which is already experiencing a number of strange anomalies and issues. In the past year SL users have had to deal with CopyBot, CampBots, SheepBots, grey goo, and alt instances.
Each of these is a headache unto itself, and a possible harbinger of more severe problems to come.
Virtual nuisances
CopyBot was originally created as a debugging tool by the SL development team and was intended for functions like import/export and backing up data. But as is so often the case with technology, it was soon twisted to an entirely different purpose. Some opportunistic Second Lifers used CopyBot to duplicate items that were marked 'no copy' by the creator or owner, thus violating intellectual property rights. To date, attempts to counter CopyBot have included anti-CopyBot spamming defeaters, which have in turn given rise to anti-anti-CopyBot defeaters. Call it an algorithmic arms race.
While this hints at post-scarcity and open source, it is still unclear how unbridled duplication will offer users the incentive to create original artifacts for the SL environment.
CampBots and SheepBots aren't nearly as contentious, but are equally annoying. These are essentially SpamBots working under the guise of an avatar.
And back in October of 2006 users experienced a grey goo scare when a "griefer" (a person who disrupts video games) attacked Second Life with self-replicating "grey goo" that melted down the SL servers. The griefers used malign scripts that caused objects to spontaneously self-replicate. According to the transcript of the SL blog:
4:15pm PST: We are still in the process of investigating the grid-wide griefing attacks, as such we have momentarily disabled scripts and “money transfers to objects” as well on the entire grid. We apologize for this and thank for your patience. As soon as I have more information, I will pass it along.
4:35pm PST: As part of our effort to counter the recent grey goo attacks, we’re currently doing a rolling restart of the grid to help clean it out, this means each region will be restarted over the course of the next few hours. Thanks again for your patience.
4:55pm PST: There was a slight delay to our rolling restart while we continued our investigation. The rolling restart should begin soon, if you are currently in-world you will get a warning before your region is restarted - allowing you to teleport to another region. We hope to have logins open again very soon. Thanks again for everyone’s patience during this issue.
More recently SL users have had to contend with so-called alt instances, which launch ultra-fast bots that scoop up valuable land; automated bots work with much greater efficiency than humans. Alt instances are additional avatars controlled by the same user, created to capitalize on the First Land privileges that are extended to newbies. It is estimated that users have on average 1.25 avatars, indicating that there may be as many as 500,000 in-world alts.
These bots have created a huge digital land scarcity, as Second Life has been overwhelmed by the groundswell of new residents. Users have asked that the bots be made illegal, and Linden Labs has agreed to look into it.
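The arithmetic behind the self-replication attacks described above is worth making explicit. With invented numbers (the region budget below is a rough stand-in, not an official Second Life limit), a single doubling script exhausts any finite object budget in logarithmically few ticks:

```python
# Toy illustration (numbers invented): a self-replicating script that
# doubles each simulation tick overwhelms a region's object budget in
# logarithmically few steps -- which is why the attack melted the grid.

def ticks_to_overwhelm(capacity: int, start: int = 1) -> int:
    """Ticks until a doubling population of objects reaches capacity."""
    objects, ticks = start, 0
    while objects < capacity:
        objects *= 2          # each object spawns a copy of itself
        ticks += 1
    return ticks

print(ticks_to_overwhelm(15_000))   # a region-scale object budget
print(ticks_to_overwhelm(10**9))    # even a billion-object grid
```

Fourteen ticks for a region, thirty for a billion-object grid: exponential replication turns any fixed defense budget into a rounding error, in virtual worlds as in nanotech nightmares.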
Our analog, digital and future worlds
As I look at these examples I can't help but think that virtual reality environments are offering a glimpse into our future -- both in the analog and digital arenas. Second Life in particular is a mirror of not just our own society, but of future society itself. In real life we are dealing with the widespread copying of copyrighted material, issues of open source, out of control spam, the threat/promise of automation, molecular fabrication, and of course, the grim possibility of runaway nanotech.
Moreover, an uploaded society would conceivably face more problems in digital substrate than in the cozy confines of the analog world. We can't 'hack' into the code of the Universe (at least not yet). As a consequence our existence is still very much constrained by the laws of physics, access to resources, and the limits of our information systems (i.e. our accumulated body of knowledge). That said, we do a fairly decent job of soft-hacking into the Universe, which is very much the modus operandi of an intelligent species.
But the soft-hacking that we're doing is becoming more and more sophisticated -- something that could lead to over-complexity. We're creating far too many dangerous variables that require constant monitoring and control.
As for the digital realm, it is already complex by default. But like the analog world it too has constraints, though slightly different. Virtual worlds have to deal with limitations imposed by computational power, algorithmic technology and access to information. Aside from that, the sky's the limit. Such computational diversity could lead to complexity an order of magnitude above analog life.
Hackers and criminals would seek to infiltrate and exploit everything under the virtual sun, including conscious minds. Conscious agents would have to compete with automatons. Bots of unimaginable ilk would run rampant. There would be problems of swarming, self-replication and distributed attacks. And even more disturbingly, nothing would be truly secure and the very authenticity of existence would constantly be put into question.
Perhaps there are solutions to these problems, but I'm inclined to doubt it. Natural selection is unkind to overspecialized species. Further, we have no working model of evolution in digital substrate (aside from some primitive simulations).
This is one case where I certainly hope to be proven wrong.