Tuesday, July 31, 2007

Anders Sandberg wants to emulate your brain

Transhumanists have long speculated about the possibility of uploading a brain into a computer. In fact, a big part of the supposed posthuman future depends on it.

Soooo, how the hell do we do it?

This is the issue that Swedish neuroscientist Anders Sandberg tackled in his talk at TransVision 2007. Uploading, or what Sandberg refers to as ‘whole brain emulation,’ has come to look like a distinct possibility, given the plausibility of the functionalist paradigm and steady advances in computer science. Sandberg says we need a strategic plan to get going.

Levels of understanding

To start, Sandberg made two points about the kind of understanding that is required: first, we do not need to understand the function of a device to build it from parts, and second, we do not need to understand the function of the whole brain to emulate it. That said, Sandberg admitted that we still need to understand the brain's lower-level functions in order to emulate them.

The known unknown

Sandberg also outlined the levels of detail that may be necessary, the “known unknown” we can already start to parse. He asked, “what level of description is necessary to capture enough of a particular brain to mimic its function?”

He described several tiers, each requiring vastly more detail than the last:
• Computational model
• Brain region connectivity
• Analog network population model
• Spiking neural network
• Electrophysiology
• Metabolome
• Proteome
• Etc. (and all the way down to the quantum level)

Requirements

Sandberg believes the ability to scan an existing brain will be necessary, and at the proper resolution. Once we can peer down to a sufficient level of detail, we should be able to construct a brain model by inferring structure and low-level function from the scan.

Once this is done we can think about running a brain emulation. Requirements here will include a computational neuroscience model and the requisite computer hardware. Sandberg noted that body and environment simulations may be added to the emulation; the brain emulator, body simulator and environment simulator would be daisy-chained together to close the interactive loop. The developers will also have to devise a way to validate their observations and results.
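
To make that daisy-chaining concrete, here's a minimal Python sketch of how such a loop might be wired up. Every class and method name below is a hypothetical placeholder of mine, not anything Sandberg specified; the point is simply the shape of the data flow.

```python
# Hypothetical sketch of the brain/body/environment loop described above.
# None of these classes come from a real library; they only illustrate
# how the three simulators could be daisy-chained, tick by tick.

class BrainEmulator:
    def step(self, senses):
        # Advance the emulated brain one tick; return motor commands.
        return {"arm_torque": 0.0}              # placeholder output

class BodySimulator:
    def step(self, motor, world):
        # Apply motor commands to the simulated body inside the world;
        # return the body's state and the sensory input for the brain.
        body_state = {"arm_angle": 0.0}         # placeholder physics
        senses = {"proprioception": body_state["arm_angle"],
                  "vision": world.get("light", 0.0)}
        return body_state, senses

class EnvironmentSimulator:
    def step(self, body_state):
        # Update the virtual environment in response to the body.
        return {"light": 1.0}                   # placeholder world state

def run(ticks):
    brain, body, env = BrainEmulator(), BodySimulator(), EnvironmentSimulator()
    senses, world = {}, {}
    for _ in range(ticks):
        motor = brain.step(senses)                    # brain -> motor commands
        body_state, senses = body.step(motor, world)  # body -> new senses
        world = env.step(body_state)                  # world reacts to body

run(100)
```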

Neural simulations

Neural simulations are nothing new. Hodgkin and Huxley began working on these sorts of problems way back in 1952. The trick is to faithfully simulate neurons, neuron parts, synapses and chemical pathways. According to Sandberg, we are approaching one-to-one fidelity for certain systems, including the lamprey spinal cord and lobster ganglia.
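
For the curious, the original Hodgkin-Huxley equations are compact enough to integrate in a few dozen lines. The sketch below uses the conventional textbook squid-axon parameters (standard values from the literature, not figures from the talk) and a simple forward-Euler step; injecting a constant current produces the familiar train of action potentials.

```python
import math

# Standard Hodgkin-Huxley (1952) squid-axon constants.
# Units: mV, ms, mS/cm^2, uF/cm^2, uA/cm^2.
C_m = 1.0
g_Na, g_K, g_L = 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent rate functions for the gating variables m, h, n.
a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
b_m = lambda V: 4.0 * math.exp(-(V + 65.0) / 18.0)
a_h = lambda V: 0.07 * math.exp(-(V + 65.0) / 20.0)
b_h = lambda V: 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
b_n = lambda V: 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    # Forward-Euler integration of one HH neuron under constant current.
    V, m, h, n = -65.0, 0.05, 0.6, 0.32     # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)  # sodium current
        I_K = g_K * n**4 * (V - E_K)         # potassium current
        I_L = g_L * (V - E_L)                # leak current
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
print(f"upward zero-crossings (approx. spikes) in 50 ms: {spikes}")
```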

Compartment models are also being developed with minuscule time and space resolutions. The current record is 22 million six-compartment neurons and 11 billion synapses, simulated for one second of real time. Sandberg cited advances enabled by IBM’s Blue Gene.
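
Some back-of-envelope arithmetic shows why Blue Gene-class hardware is needed at this scale. The per-compartment and per-synapse figures below are illustrative assumptions of mine, not numbers from the talk:

```python
# Rough sizing for the record simulation Sandberg cited:
# 22 million neurons x 6 compartments each, 11 billion synapses.
# Bytes-per-item and FLOPs-per-update are my assumptions, chosen
# only to show the order of magnitude.

neurons = 22e6
compartments = neurons * 6        # 1.32e8 compartments
synapses = 11e9

state_vars_per_compartment = 4    # e.g. V plus a few gating variables
bytes_per_var = 8                 # double precision
bytes_per_synapse = 16            # weight, delay, target index, ...

state_bytes = compartments * state_vars_per_compartment * bytes_per_var
syn_bytes = synapses * bytes_per_synapse
print(f"compartment state: {state_bytes / 1e9:.1f} GB")   # ~4.2 GB
print(f"synapse table:     {syn_bytes / 1e9:.1f} GB")      # ~176 GB

# Compute load: assume ~100 FLOPs per compartment update at a 0.1 ms step.
dt_ms = 0.1
steps_per_second = 1000 / dt_ms
flops = compartments * 100 * steps_per_second
print(f"~{flops / 1e12:.0f} teraflops for 1 s of brain time in 1 s")
```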

Complications and Exotica

Sandberg also provided a laundry list of possible ‘complications and exotica’:
• dynamical state
• spinal cord
• volume transmission
• glial cells
• synaptic adaptation
• body chemical environment
• neurogenesis
• ephaptic effects
• quantum computation
• analog computation
• randomness

Reverse engineering is all well and good, Sandberg suggested, but how much function can actually be deduced from morphology (for example)?

Scanning

As for scanning, we'll need to determine the kind of resolution and data needed. Sandberg argued that nondestructive scanning is unlikely to suffice; MRI has come closest so far, but its resolution bottoms out at about 7.7 micrometers. More realistically, destructive scanning will be used; Sandberg noted such procedures as fixation and ‘slice and scan.’

Once scanning is complete, postprocessing can begin. Developers at this stage will have to work out what kinds of neurons they're looking at and how they are all connected.

Given advances in computation, Sandberg predicted that whole brain emulation may arrive sometime between 2020 and 2060. As for environment and body simulation, we’ll have to wait until we have 100 teraflops at our disposal. We’ll also need a scanning resolution of 5x5x50 nm to do meaningful work.
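
To get a feel for what a 5x5x50 nm scan implies, assume a brain volume of roughly 1.4 liters (a common textbook figure, my assumption here) and one byte per voxel; the raw data volume alone is staggering:

```python
# Raw data volume of imaging a whole brain at 5 x 5 x 50 nm.
# Brain volume (~1.4 L) and 1 byte/voxel are illustrative assumptions.

brain_volume_m3 = 1.4e-3            # ~1.4 liters
voxel_nm3 = 5 * 5 * 50              # 1250 nm^3 per voxel
nm3_per_m3 = 1e27                   # (1e9 nm per m) cubed

voxels = brain_volume_m3 * nm3_per_m3 / voxel_nm3
print(f"voxels: {voxels:.2e}")      # ~1.1e21
print(f"at 1 byte/voxel: ~{voxels / 1e21:.1f} zettabytes")
```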

Conclusions

Sandberg made mention of funding and the difficulty of finding scan targets. He named subfields that so far lack drivers: basic neuroscience, electrophysiology, and large-scale scanning. He did see synergies arising from the ongoing development and industrialization of neuroscience, robotics and the various –omics studies.

As for the order of development, Sandberg suggested either 1) scanning and/or simulation, then 2) computer power, and then 3) the gradual emergence of emulation; or alternatively 1) computer power first, then 2) simulation, then 3) scanning, followed by 4) the rapid emergence of emulation.

Any volunteers for slice and scan?
