Showing posts with label neuroscience. Show all posts

Saturday, August 21, 2010

David Chalmers: Consciousness is not substrate dependent

A popular argument against uploads and whole brain emulation is that consciousness is somehow rooted in the physical, biological realm. Back in 1995, philosopher David Chalmers addressed this problem in his seminal paper, "Absent Qualia, Fading Qualia, Dancing Qualia." His abstract reads,
It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing "over and above" the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? What sort of physical properties enter into the laws' antecedents, for instance; consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.
Chalmers sets up a series of arguments and thought experiments which point to the conclusion that functional organization suffices for conscious experience, what he calls nonreductive functionalism. He argues that conscious experience is determined by functional organization without necessarily being reducible to functional organization. This bodes well for the AI and whole brain emulation camp.

Chalmers concludes:
In any case, the conclusion is a strong one. It tells us that systems that duplicate our functional organization will be conscious even if they are made of silicon, constructed out of water-pipes, or instantiated in an entire population. The arguments in this paper can thus be seen as offering support to some of the ambitions of artificial intelligence. The arguments also make progress in constraining the principles in virtue of which consciousness depends on the physical. If successful, they show that biochemical and other non-organizational properties are at best indirectly relevant to the instantiation of experience, relevant only insofar as they play a role in determining functional organization.

Of course, the principle of organizational invariance is not the last word in constructing a theory of conscious experience. There are many unanswered questions: we would like to know just what sort of organization gives rise to experience, and what sort of experience we should expect a given organization to give rise to. Further, the principle is not cast at the right level to be a truly fundamental theory of consciousness; eventually, we would like to construct a fundamental theory that has the principle as a consequence. In the meantime, the principle acts as a strong constraint on an ultimate theory.
Entire paper.

Making brains: Reverse engineering the human brain to achieve AI

The ongoing debate between PZ Myers and Ray Kurzweil about reverse engineering the human brain is fairly representative of the same debate that's been going on in futurist circles for quite some time now. And as the Myers/Kurzweil conversation attests, there is little consensus on the best way for us to achieve human-equivalent AI.

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil's upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently by the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.

While I believe that reverse engineering the human brain is the right approach, I admit that it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't exist yet. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

But we have to start somewhere, and we have to start with a plan.

Rules-based AI versus whole brain emulation

Now, some computer theorists maintain that the rules-based approach to AI will get us there first. Ben Goertzel is one such theorist. I had a chance to debate this with him at the recent H+ Summit at Harvard. His basic argument is that the WBE approach over-complexifies the issue. "We didn't have to reverse engineer the bird to learn how to fly," he told me. Essentially, Goertzel is confident that the hard-coding of artificial general intelligence (AGI) is a more elegant and direct approach; it'll simply be a matter of identifying and developing the requisite algorithms sufficient for the emergence of the traits we're looking for in an AGI—things like learning and adaptation. As for the WBE approach, Goertzel thinks it's overkill and overly time consuming. But he did concede to me that he thinks the approach is sound in principle.

This approach aside, like Kurzweil, Bostrom, Sandberg and a growing number of other thinkers, I am drawn to the WBE camp. The idea of reverse engineering the human brain makes sense to me. Unlike the rules-based approach, WBE works off a tried-and-true working model; we're not having to re-invent the wheel. Natural selection, through excruciatingly tedious trial-and-error, was able to create the human brain—and all without a preconceived design. There's no reason to believe that we can't figure out how this was done; if the brain could come about through autonomous processes, then it can most certainly come about through the diligent work of intelligent researchers.

Emulation, simulation and cognitive functionalism

Emulation refers to a 1-to-1 model where all relevant properties of a system exist. This doesn't mean recreating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the recreation of all its properties in an alternative substrate, namely a computer system.

Moreover, emulation is not simulation. We're not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it's a complete 1:1 emulation that we're after.

Now, given that we're looking to model the human brain in a digital substrate, we have to work according to a rather fundamental assumption: computational functionalism. This goes back to the Church-Turing thesis, which holds that every effectively computable function can be computed by a Turing machine; Turing further showed that a universal Turing machine can emulate any other Turing machine. So if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine. Like a computer.
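To make the universality point concrete, here's a minimal sketch in Python (my own illustration, not anything from Chalmers or Kurzweil): one fixed simulator function that can run any Turing machine supplied to it as data. The example machine, which simply flips the bits of its input, is an arbitrary choice.

```python
# A minimal Turing machine simulator. The point: this one fixed program can
# run *any* machine described as data -- the sense in which a universal
# Turing machine emulates any other Turing machine.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """rules: {(state, symbol): (new_state, new_symbol, move)}, move in {-1, 0, +1}."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine (arbitrary): invert a binary string, halting at the blank.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(flip_bits, "0110"))  # -> 1001
```

Swap in a different rule table and the same simulator runs a different machine; that substrate-independence of "program as data" is what computational functionalism leans on.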

So, if you believe that there's something mystical or vital about human cognition you should probably stop reading now.

Or, if you believe that there's something inherently physical about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly—keeping in mind that any informational process is computational, including those brought about by chemical reactions. Moreover, intelligence, which is what we're after here, is something that's intrinsically non-physical to begin with.

The roadmap to whole brain emulation

A number of critics point out that we'll never emulate a human brain on account of the chaos and complexity inherent in such a system. On this point I'll disagree. As Bostrom and Sandberg have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What is meant by low-level at this point is an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has argued, the brain contains massive redundancy; it may not be as complicated as we currently think.
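As an illustration of what a "local update rule" looks like in practice, here's a toy leaky integrate-and-fire neuron in Python. The model is standard textbook fare and the parameters are arbitrary; I'm not suggesting this is the level of detail WBE will actually require, only showing how a unit's next state can depend solely on its current state and its inputs.

```python
# Toy "local update rule": a leaky integrate-and-fire neuron. Each step,
# the membrane potential decays, input is added, and a spike is emitted
# (with a reset) if the threshold is crossed. All parameters are arbitrary.

def lif_step(v, input_current, leak=0.9, threshold=1.0, reset=0.0):
    """One discrete-time update of the membrane potential v."""
    v = leak * v + input_current
    if v >= threshold:
        return reset, True   # spiked; potential resets
    return v, False

# Drive the neuron with a constant input and watch it spike periodically.
v, spikes = 0.0, []
for t in range(20):
    v, spiked = lif_step(v, 0.3)
    if spiked:
        spikes.append(t)
print(spikes)  # -> [3, 7, 11, 15, 19]
```

Emulation at this level means knowing the update rule and the wiring, not understanding what the resulting spike patterns "mean" at the whole-system level.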

In order to gain this "low-level functional understanding" of the human brain we will need to employ a series of interdisciplinary approaches (most of which are currently underway). Specifically, we're going to require advances in:
  • Computer science: We have to improve the hardware component; we're going to need machines with the processing power required to host a human brain; we're also going to need to improve the software component so that we can create algorithmic correlates to specific brain function.
  • Microscopy and scanning technologies: We need to better study and map the brain at the physical level; brain slicing techniques will allow us to visibly study cognitive action down to the molecular scale; specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.
  • Neurosciences: We need further advances in the neurosciences so that we can better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (currently a very grey area).
  • Genetics: We need to get better at reading our DNA for clues about how the brain is constructed. While I agree that our DNA will not tell us how to build a fully functional brain, it will tell us how to start the process of brain-building from scratch.
Essentially, WBE requires three main capabilities: (1) the ability to physically scan brains in order to acquire the necessary information, (2) the ability to interpret the scanned data to build a software model, and (3) the ability to simulate this very large model.

Time-frames

Inevitably the question as to 'when' crops up. Personally, I couldn't care less. I'm more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil's prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we're likely still heading down some blind alleys.

My own feeling is that we'll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I'm pulling this figure out of my butt as I really have no idea. It's more a feeling than a scientifically-backed estimate.

Lastly, it's worth noting that, given the capacity to recreate a human brain in a digital substrate, we won't be too far off from creating considerably greater-than-human intelligence. Computer theorist Eliezer Yudkowsky has claimed that, because of the brain's particular architecture, we may be able to accelerate its processing speed by a factor of a million relatively easily. Consequently, predictions as to when we may hit the Singularity will likely coincide with the advent of a fully emulated human brain.

Myers still thinks Kurzweil does not understand the brain

The blog war between PZ Myers and Ray Kurzweil continues. Myers has now retorted to Kurzweil's retort:
...you can't measure the number of transistors in an Intel CPU and then announce, "A-ha! We now understand what a small amount of information is actually required to create all those operating systems and computer games and Microsoft Word, and it is much, much smaller than everyone is assuming." Put it in those terms, and the Kurzweil fanboys would laugh at him; put it in terms of something they don't understand at all, like the development and function of the brain, and they're willing to go along with the pretense that the genome tells us that the whole organism is simpler than they thought.

I presume they understand that if you program a perfect Intel emulator, you don't suddenly get Halo: Reach for free, as an emergent property of the system. You can buy the code and add it to the system, sure, but in this case, we can't run down to GameStop and buy a DVD with the human OS in it and install it on our artificial brain. You're going to have to do the hard work of figuring out how that works and reverse engineering it, as well. And understanding how the processor works is necessary to do that, but not sufficient.
Myers concludes,
In short, here's Kurzweil's claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand it's basic principles of operation completely within a few decades. My counterargument, which he hasn't addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we've only scratched the surface in the last several decades of research, 3) "exponential" is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we'll be 'reverse engineering' the human brain. He's now at least clearly stating that decoding the genome does not generate the necessary information — it's just an argument that the brain isn't as complex as we thought, which I've already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don't see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
Link.

Friday, August 20, 2010

Kurzweil responds to PZ Myers

Ray Kurzweil has retorted to PZ Myers's claim that he does not understand the brain:
For starters, I said that we would be able to reverse-engineer the brain sufficiently to understand its basic principles of operation within two decades, not one decade, as Myers reports.

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

I mentioned the genome in a completely different context. I presented a number of arguments as to why the design of the brain is not as complex as some theorists have advocated. This is to respond to the notion that it would require trillions of lines of code to create a comparable system. The argument from the amount of information in the genome is one of several such arguments. It is not a proposed strategy for accomplishing reverse-engineering. It is an argument from information theory, which Myers obviously does not understand.
Be sure to read the entire response.

Tuesday, August 17, 2010

Myers: Kurzweil is a "pseudo-scientific dingbat" who "does not understand" the brain

Biologist and skeptic PZ Myers has ripped into Ray Kurzweil for his recent claim that the human brain will be completely modeled by 2020 (Note: Not that it's particularly important, but Kurzweil did say it'll take two decades at the recent Singularity Summit, not one). In a rather sweeping and insulting article titled, "Ray Kurzweil does not understand the brain," Myers takes the position that the genome cannot possibly serve as an effective blueprint in our efforts to reverse engineer the human brain.

Regarding Kurzweil's claim that the design of the brain is in the genome, Myers writes,
Kurzweil knows nothing about how the brain works. It's [sic] design is not encoded in the genome: what's in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. He has to simulate all of development from his codebase in order to generate a brain simulator, and he isn't even aware of the magnitude of that problem.

We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!
Myers continues:
To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it's the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism. He doesn't even comprehend the nature of the problem, and here he is pontificating on magic solutions completely free of facts and reason.
Okay, while I agree that Kurzweil's timeline is ridiculously optimistic (I'm thinking we'll achieve a modeled human brain sometime between 2075 and 2100), Myers's claim that Kurzweil "knows nothing" about the brain is as incorrect as it is disingenuous. Say what you will about Kurzweil, but the man does his homework. While I wouldn't claim that he does seminal work in the neurosciences, I will say that his effort to describe the brain in computational functionalist terms is important. The way he has described the brain's redundancy and massively repeating arrays is as fascinating as it is revealing.

Moreover, Myers's claim that the human genome cannot inform our efforts at reverse engineering the brain is equally unfair and ridiculous. While I agree that the genome is not the brain, it undeniably contains the information required to construct a brain from scratch. This is irrefutable and Myers can stamp his feet in protest all he wants. We may be unable to properly read this data as yet, or even execute the exact programming required to set the process in motion, but that doesn't mean the problem is intractable. It's still early days. In addition, we have an existing model, the brain, to constantly juxtapose against the data embedded in our DNA (e.g. cognitive mapping).

Again, it just seems excruciatingly intuitive and obvious to think that our best efforts at emulating an entire brain will be informed to a considerable extent by pre-existing data, namely our own DNA and its millions upon millions of years of evolutionary success.

Oh, and Myers: Let's lose the ad hominem.

Saturday, August 14, 2010

IBM maps Macaque brain network

We're another step closer to reverse-engineering the human brain: IBM scientists have created the most comprehensive map of a brain's network to date. Their map, called "The Mandala of the Mind," portrays the long-distance network of the Macaque monkey brain, spanning the cortex, thalamus, and basal ganglia, showing 6,602 long-distance connections between 383 brain regions.

The Proceedings of the National Academy of Sciences (PNAS) published a landmark paper entitled “Network architecture of the long-distance pathways in the macaque brain” (an open-access paper) by Dharmendra S. Modha (IBM Almaden) and Raghavendra Singh (IBM Research-India) with major implications for reverse-engineering the brain and developing a network of cognitive-computing chips.
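For a rough sense of scale, treat the map as a directed graph. This back-of-envelope calculation is mine, using only the two figures quoted above (6,602 long-distance connections among 383 regions):

```python
# Back-of-envelope on the reported macaque network: what fraction of the
# possible region-to-region links are actually present, and how many
# long-distance connections an average region has.

regions = 383
connections = 6602

possible_directed_links = regions * (regions - 1)   # ordered pairs of regions
density = connections / possible_directed_links
avg_degree = connections / regions                   # connections per region

print(f"{possible_directed_links} possible links, "
      f"density ~ {density:.3f}, "
      f"~ {avg_degree:.1f} connections per region")
```

In other words, only a few percent of the possible long-distance pathways exist, which is part of why mapping the network's actual architecture matters for both neuroscience and cognitive computing.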

Dr. Modha writes:
We have successfully uncovered and mapped the most comprehensive long-distance network of the Macaque monkey brain, which is essential for understanding the brain’s behavior, complexity, dynamics and computation. We can now gain unprecedented insight into how information travels and is processed across the brain. We have collated a comprehensive, consistent, concise, coherent, and colossal network spanning the entire brain and grounded in anatomical tracing studies that is a stepping stone to both fundamental and applied research in neuroscience and cognitive computing.
Link.

Thursday, July 15, 2010

Gelernter's 'dream logic' and the quest for artificial intelligence

Internet pioneer David Gelernter explores the ethereal fuzziness of cognition in his Edge.org article, "Dream-logic, the internet and artificial consciousness." He's right about the imperfect and dream-like nature of cognition and conscious thought; AI theorists should certainly take notice.

But Gelernter starts to go off the rails toward the conclusion of the essay. His claim that an artificial consciousness would be nothing more than a zombie mind is unconvincing, as is his contention that emotional capacities are a necessary component of the cognitive spectrum. There is no reason to believe, from a functionalist perspective, that the neural correlates of consciousness cannot take root in an alternative, non-biological medium. And there are examples of fully conscious human beings without the ability to experience emotions.

Gelernter, like a lot of AI theorists, needs to brush up on his neuroscience.

At any rate, here's an excerpt from the article; you can judge the efficacy of his arguments for yourself:
As far as we know, there is no way to achieve consciousness on a computer or any collection of computers. However — and this is the interesting (or dangerous) part — the cognitive spectrum, once we understand its operation and fill in the details, is a guide to the construction of simulated or artificial thought. We can build software models of Consciousness and Memory, and then set them in rhythmic motion.

The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence."

But first there are formidable technical problems. For example: there can be no cognitive spectrum without emotion. Emotion becomes an increasingly important bridge between thoughts as focus drops and re-experiencing replaces recall. Computers have always seemed like good models of the human brain; in some very broad sense, both the digital computer and the brain are information processors. But emotions are produced by brain and body working together. When you feel happy, your body feels a certain way; your mind notices; and the resonance between body and mind produces an emotion. "I say again, that the body makes the mind" (John Donne).

The natural correspondence between computer and brain doesn't hold between computer and body. Yet artificial thought will require a software model of the body, in order to produce a good model of emotion, which is necessary to artificial thought. In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves. (The solution will probably take the form of software that is "trained" to imitate the emotional responses of a particular human subject.)

One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say "that makes me happy," but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being.

And then what?

Monday, July 12, 2010

Wisdom: From Philosophy to Neuroscience by Stephen S. Hall [book]

Stephen S. Hall's new book, Wisdom: From Philosophy to Neuroscience, looks interesting.

Promotional blurbage:
A compelling investigation into one of our most coveted and cherished ideals, and the efforts of modern science to penetrate the mysterious nature of this timeless virtue.

We all recognize wisdom, but defining it is more elusive. In this fascinating journey from philosophy to science, Stephen S. Hall gives us a dramatic history of wisdom, from its sudden emergence in four different locations (Greece, China, Israel, and India) in the fifth century B.C. to its modern manifestations in education, politics, and the workplace. We learn how wisdom became the provenance of philosophy and religion through its embodiment in individuals such as Buddha, Confucius, and Jesus; how it has consistently been a catalyst for social change; and how revelatory work in the last fifty years by psychologists, economists, and neuroscientists has begun to shed light on the biology of cognitive traits long associated with wisdom—and, in doing so, begun to suggest how we might cultivate it.

Hall explores the neural mechanisms for wise decision making; the conflict between the emotional and cognitive parts of the brain; the development of compassion, humility, and empathy; the effect of adversity and the impact of early-life stress on the development of wisdom; and how we can learn to optimize our future choices and future selves.

Hall’s bracing exploration of the science of wisdom allows us to see this ancient virtue with fresh eyes, yet also makes clear that despite modern science’s most powerful efforts, wisdom continues to elude easy understanding.
Hall's book is part of a larger trend that, along with happiness studies, is starting to enter (or is that re-enter?) mainstream academic and clinical realms of inquiry.

A. C. Grayling has penned an insightful and critical review of Hall's book:
First, though, one must point to another and quite general difficulty with contemporary research in the social and neurosciences, namely, a pervasive mistake about the nature of mind. Minds are not brains. Please note that I do not intend anything non-materialistic by this remark; minds are not some ethereal spiritual stuff a la Descartes. What I mean is that while each of us has his own brain, the mind that each of us has is the product of more than that brain; it is in important part the result of the social interaction with other brains. As essentially social animals, humans are nodes in complex networks from which their mental lives derive most of their content. A single mind is, accordingly, the result of interaction between many brains, and this is not something that shows up on a fMRI scan. The historical, social, educational, and philosophical dimensions of the constitution of individual character and sensibility are vastly more than the electrochemistry of brain matter by itself. Neuroscience is an exciting and fascinating endeavour which is teaching us a great deal about brains and the way some aspects of mind are instantiated in them, but by definition it cannot (and I don't for a moment suppose that it claims to) teach us even most of what we would like to know about minds and mental life.

I think the Yale psychologist Paul Bloom put his finger on the nub of the issue in the March 25th number of Nature where he comments on neuropsychological investigation into the related matter of morality. Neuroscience is pushing us in the direction of saying that our moral sentiments are hard-wired, rooted in basic reactions of disgust and pleasure. Bloom questions this by the simple expedient of reminding us that morality changes. He points out that "contemporary readers of Nature, for example, have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s, and different intuitions about the morality of practices such as slavery, child labour and the abuse of animals for public entertainment. Rational deliberation and debate have played a large part in this development." As Bloom notes, widening circles of contacts with other people and societies through a globalizing world plays a part in this, but it is not the whole story: for example, we give our money and blood to help strangers on the other side of the world. "What is missing, I believe," says Bloom, and I agree with him, "is an understanding of the role of deliberate persuasion."

Contemporary psychology, and especially neuropsychology, ignores this huge dimension of the debate not through inattention but because it falls well outside its scope. This is another facet of the point that mind is a social entity, of which it does not too far strain sense to say that any individual mind is the product of a community of brains.

Saturday, October 3, 2009

SS09: Randal Koene "The Time is Now We Need Whole Brain Emulation"

Benjamin Peterson is covering the Singularity Summit for Sentient Developments.

"Is the brain enough? Is the mind enough?"

Don't bother wasting time engineering biospheres for humans to live in space! Just figure out how to move what's in our skulls into a different substrate.

Physicality of the mind ... information must be preserved. Move the mind off the meat substrate. Randal is impressing upon his audience the importance of shedding the fleshbag.


Theodore Berger building hippocampus replacement.


"In-vivo techniques, neural recording, neural interfacing:
• Scary/risky procedures
• chronic implantation
• power supply
• scale and bandwidth .."

Different technologies could provide less-invasive (?) modalities: qdots.

Multiscale scanning requirements, need to record neuronal activity at different levels of activity (voxel, group, spike, analog, spatial, molecular)

The human connectome. Automated tape-collecting lathe ultramicrotome ... o.O


Question from the audience regarding continuity. "If you make a copy the original dies ... what about the problem of continuity?"

SS09: Anders Sandberg "Whole Brain Emulation"

Benjamin Peterson is covering the Singularity Summit for Sentient Developments.

George covered neuroscientist Anders Sandberg's WBE (whole brain emulation) presentation during Convergence08. Anders is talking about the feasibility and paths to WBE.

"Whole Brain Emulation: feasibility, timescales and key challenges. Humans as existence proof for intelligent systems (so it can be done.) Why whole brain emulation? Exercise in forecasting, well defined problem, little understanding of intelligence needed, if it occurs it will likely be big (philo, scientific, economic, existential implications)


In order to emulate a brain we're going to need greater scanning capability:

• need enough resolution - rough consensus 5x5x50 nm resolution scanning

• need enough information

• need enough volume

• likely destructive" :[
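A quick back-of-envelope on what scanning at the quoted 5x5x50 nm resolution implies. The ~1.4-litre brain volume and the one-byte-per-voxel figure are my assumptions for the estimate, not numbers from the talk:

```python
# Rough scale of whole-brain scanning at 5 x 5 x 50 nm voxel resolution.
# Assumed: human brain volume ~1.4 litres, 1 byte stored per voxel.

voxel_m3 = 5e-9 * 5e-9 * 50e-9   # one voxel's volume in cubic metres
brain_m3 = 1.4e-3                 # ~1.4 litres, expressed in cubic metres
voxels = brain_m3 / voxel_m3

bytes_per_voxel = 1               # assumed
zettabytes = voxels * bytes_per_voxel / 1e21

print(f"~ {voxels:.2e} voxels, ~ {zettabytes:.1f} ZB at 1 byte/voxel")
```

On the order of 10^21 voxels, i.e. roughly a zettabyte of raw data even at one byte per voxel, which helps explain why scanning volume and bandwidth appear on the list of key challenges.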


Anders distinguishes WBE from mind-uploading ... maybe the wiki is misinformed?


"The step from mouse to man is 20 years in terms of brain emulation. First scan or simulate then computer power gradual emergence of emulation."


Lights up, Anders fields questions from the audience.

Monday, May 11, 2009

IEET's Susan Schneider on transhumanism: Will enhancement destroy the "real you"?


Dr. Susan Schneider, IEET fellow, assistant professor of philosophy and an affiliated faculty member with Penn's Center for Cognitive Neuroscience and the Institute for Research in Cognitive Science, speaks at a UPenn Media Seminar on Neuroscience and Society on philosophical controversies surrounding cognitive enhancement.

In this video, Schneider wonders if radical enhancement, particularly cognitive enhancement that gives rise to superintelligence, will result in the destruction of the original person in favor of something categorically different. Schneider also discusses uploading and the continuity of experience -- including the apparent problem of destructive cloning/copying.

Via Institute for Ethics and Emerging Technologies.

Saturday, November 22, 2008

Deep brain stimulation induces vivid memories

Earlier this year doctors in Toronto reported a strange incident involving a morbidly obese man who was undergoing deep brain stimulation (DBS).

DBS involves implanting electrodes into the brain to treat conditions like Parkinson's disease. In this particular case, the electrodes were implanted into a 50-year-old's hypothalamus (an area in the limbic system) in hopes of granting him better control over his appetite.

But a strange thing happened during the procedure.

When the electrodes were stimulated by electrical impulses the man began to experience feelings of deja vu. As the procedure continued, and as surgeons increased the intensity of the electrodes, the patient experienced an influx of memories and feelings of temporal uncertainty.

At one point the patient thought he was in a park with friends. He felt younger and thought that he was 20 years old again. Even his girlfriend at the time was there. According to the patient, he viewed the scene as an observer and experienced the scene in colour. As the surgeons increased the intensity of the stimulation the details became more and more vivid.

Two months later the surgeons repeated the procedure, and the same thing happened again.

This incident, which came as a complete surprise to the surgeons, has given medical researchers cause for hope that DBS may be used to boost memories and better treat neurodegenerative disorders like Alzheimer's.

"We hopefully have found a circuit in the brain which can be modulated by stimulation, and which might provide benefit to patients with memory disorders," said Andres Lozano of Toronto Western Hospital.

In addition, this incident reaffirms a suspicion I've had about the brain and its ability to store memories. I've often thought that the brain does an excellent job recording and storing memories, but that our recall mechanisms are disturbingly weak and highly selective. Our long-term access to memories frequently degrades and distorts (some of our more painful memories, for example, become exaggerated, while others are suppressed outright).

What this incident with DBS suggests is that our memories are beautifully preserved in our brains. We just lack the recall linkages and cognitive mechanisms to bring those memories back in any kind of detail. Our memories are accessed as fleeting bits of information instead of linear experiences.

Maybe there is a way to tease out the finer details. Looking to the future, perhaps we can use DBS or some other techniques to re-experience our memories in exquisite detail.

I'll get the popcorn.

Citation and photo credit: BBC

Friday, March 14, 2008

Brain scientist analyzes her own stroke

Neuroanatomist Jill Bolte Taylor had an opportunity few brain scientists would wish for: One morning, she realized she was having a massive stroke. As it happened -- as she felt her brain functions slip away one by one, speech, movement, understanding -- she studied and remembered every moment. This is a powerful story about how our brains define us and connect us to the world and to one another.

Saturday, December 1, 2007

Buddhism vs Transhumanism? (more)

From the "Buddhism vs Transhumanism?" comments section Casey writes:
Can you amplify your statement about Buddhism being concerned with "the optimization of subjective experience?"

It seems to me that subjectivity, or the idea that there is a discrete "you" to futz with, is the first thing to be transcended through unconditioned acceptance.

Take away the film, the projector and what do you have? The bulb, which is analogous to the necessarily mysterious, unconditioned mind.

Buddhism is fundamentally against "add-ons" to the individual sphere, as mind is already junked up with the projections of ego as is. The practice, as I understand it, is more about stripping away.

That said, I'm curious about scientific improvements to the biological species, as well as the possible transference of consciousness to a non-bio realm. But for now, I'll continue plodding down the Path.
Indeed, while Buddhists would deny the existence of the self, there is no denying the fact that we observe (what appears to be) reality and are deeply entrenched in the condition that is life. Escape into monastic existence is not in the cards for most of us, and Buddhism is sympathetic to this.

Having a transhumanistically optimized mind is one thing (i.e. augmented intelligence and memory); having an optimized consciousness is quite another. How we interpret the world and how we internalize moment-to-moment processes (particularly as they are driven by our emotions) is where I think Buddhist discourse is particularly helpful and can work to inform the transhumanist mission.

Working to develop the ideal conditioned mind is the central goal of intrapersonal Buddhist practice, and to this point in history meditation has been the key method in achieving this. Might there be other ways? Imagine a future mod that could immediately rewire a mind to be as disciplined and aware as those of practicing monks.

Sign me up.

Today, a number of Buddhists use the latest in neuroscience to study the make-up of conditioned minds in order to gain an understanding of the neurochemical and cognitive processes behind such functions as happiness and mental acuity. This will not just help to improve meditative and mindfulness practices, but will also aid in the development of the so-called contemplative sciences and advanced neurotechnological interventions.

As for improvements, I do not believe there is anything within Buddhist discourse that forbids human enhancement. Intention is what matters. If we enhance to keep up with social pressures, then that is a problem. If, on the other hand, we work to alleviate human suffering and foster meaningful lives, then I believe modification is in tune with Buddhist values.

The space of all conscious life is likely to be hugely vast, and Buddhists naturally understand the importance of respecting different kinds of sentient life.

On this topic, check out: Contemplative Science: Where Buddhism and Neuroscience Converge by B. Alan Wallace and The Universe in a Single Atom: The Convergence of Science and Spirituality by the Dalai Lama.

Tuesday, July 31, 2007

Anders Sandberg wants to emulate your brain

Transhumanists have long speculated about the possibility of uploading a brain into a computer. In fact, a big part of the supposed posthuman future depends on it.

Soooo, how the hell do we do it?

This is the issue that Swedish neuroscientist Anders Sandberg tackled for his talk at TransVision 2007. Uploading, or what Sandberg refers to as ‘whole brain emulation,’ has become a distinct possibility arising from the feasibility of the functionalist paradigm and steady advances in computer science. Sandberg says we need a strategic plan to get going.

Levels of understanding

To start, Sandberg made two points about the kind of understanding that is required. First, we do not need to understand the function of a device to build it from parts, and second, we do not need to understand the function of the brain to emulate it. That said, Sandberg admitted that we still need to understand the brain's lower level functions in order for us to be able to emulate them.

The known unknown

Sandberg also outlined the various levels of necessary detail; we can already start to parse through the “known unknown.” He asked, “what level of description is necessary to capture enough of a particular brain to mimic its function?”

He described several tiers, each requiring vastly more detail than the last:
• Computational model
• Brain region connectivity
• Analog network population model
• Spiking neural network
• Electrophysiology
• Metabolome
• Proteome
• Etc. (and all the way down to the quantum level)
Requirements

Sandberg believes that the ability to scan an existing brain will be necessary. What will also be required is the proper scanning resolution. Once we can peer down to the sufficient detail, we should be able to construct a brain model; we will then be required to infer structure and low-level function.

Once this is done we can think about running a brain emulation. Requirements here will include a computational neuroscience model and the requisite computer hardware. Sandberg noted that body and environment simulations may be added to the emulation; the brain emulator, body simulator and environment simulator would be daisy-chained to each other to create the sufficient interactive link. The developers will also have to devise a way to validate their observations and results.
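The daisy-chained arrangement Sandberg describes can be sketched as a simple update loop in which the brain emulator, body simulator and environment simulator exchange signals once per timestep. The class names, signal dictionaries and numbers below are entirely hypothetical stand-ins, not any real WBE architecture:

```python
# Minimal sketch of the daisy-chained loop: brain emulator -> body simulator
# -> environment simulator -> back to the brain. All names and values here
# are illustrative assumptions, not a real API.

class BrainEmulator:
    def step(self, sensory_input):
        # Stand-in neural model: constant drive plus feedback from the senses.
        return {"motor": 0.5 + 0.1 * sum(sensory_input.values())}

class BodySimulator:
    def step(self, motor_commands, environment_state):
        # Turn motor commands and world state into sensory signals.
        return {"touch": environment_state["contact"],
                "proprioception": motor_commands["motor"]}

class EnvironmentSimulator:
    def step(self, body_state):
        # Update the simulated world in response to the body's movement.
        return {"contact": 1.0 if body_state["proprioception"] > 0 else 0.0}

def run(steps=3):
    brain, body, env = BrainEmulator(), BodySimulator(), EnvironmentSimulator()
    env_state = {"contact": 0.0}
    sensory = {"touch": 0.0, "proprioception": 0.0}
    for _ in range(steps):
        motor = brain.step(sensory)            # brain -> body
        sensory = body.step(motor, env_state)  # body -> brain
        env_state = env.step(sensory)          # environment closes the loop
    return sensory

print(run())
```

The point of the design is that each component only sees the interface of its neighbours, so a validated body or environment simulator could in principle be swapped in independently of the brain emulator itself.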

Neural simulations

Neural simulations are nothing new. Hodgkin and Huxley began working on these sorts of problems way back in 1952. The trick is to perfectly simulate neurons, neuron parts, synapses and chemical pathways. According to Sandberg, we are approaching 1:1 simulations for certain systems, including the lamprey spinal cord and lobster ganglia.

Compartment models are also being developed with minuscule time and space resolutions. The current record is 22 million six-compartment neurons, 11 billion synapses, and a simulation length of one second of real time. Sandberg cited advances made possible by IBM’s Blue Gene.
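To give a flavour of what "simulating a neuron" means at the simplest end of the spectrum, here is a toy leaky integrate-and-fire model, far cruder than the Hodgkin-Huxley equations or the multi-compartment models Sandberg describes. The parameter values are purely illustrative, not physiological.

```python
# Toy leaky integrate-and-fire neuron (a minimal sketch; parameter values
# are made up for illustration, not drawn from real electrophysiology).

def simulate_lif(i_ext=1.5, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + i_ext) / tau,
    emitting a spike and resetting V whenever it crosses threshold."""
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (-(v - v_rest) + i_ext) / tau  # leak plus constant drive
        if v >= v_thresh:
            spikes.append(round(t, 1))  # record the spike time (ms)
            v = v_reset                 # reset after the spike
        t += dt
    return spikes

spike_times = simulate_lif()
print(f"{len(spike_times)} spikes, first at t={spike_times[0]} ms")
```

A whole brain emulation at the "spiking neural network" tier would be on the order of a hundred billion of these (with far richer dynamics and synaptic coupling), which is why the compartment-model records above are measured in mere millions of neurons.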

Complications and Exotica

Sandberg also provided a laundry list of possible ‘complications and exotica’:
• dynamical state
• spinal cord
• volume transmission
• glial cells
• synaptic adaptation
• body chemical environment
• neurogenesis
• ephaptic effects
• quantum computation
• analog computation
• randomness
Reverse engineering is all well and good, suggested Sandberg, but how much function can be deduced from morphology (for example)?

Scanning


In regards to scanning, we'll need to determine the kind of resolution and data needed. Sandberg argued that nondestructive scanning is unlikely to suffice; MRI has come closest thus far, but its resolution is limited to about 7.7 micrometers. More realistically, destructive scanning will likely be used; Sandberg noted such procedures as fixation and ‘slice and scan.’

Once scanning is complete the postprocessing can begin. Developers at this stage will be left wondering about the nature of the neurons and how they are all connected.

Given advances in computation, Sandberg predicted that whole brain emulation may arrive sometime between 2020 and 2060. As for environment and body simulation, we’ll have to wait until we have 100 teraflops at our disposal. We’ll also need a scanning resolution of 5x5x50 nm to do meaningful work.
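A back-of-envelope calculation shows why that scanning resolution is so daunting. Assuming roughly 1.4 liters of brain tissue and one byte of data per voxel (both illustrative round figures, not from Sandberg's talk):

```python
# Rough data-volume estimate for scanning a whole brain at 5 x 5 x 50 nm,
# the resolution cited above. All figures are back-of-envelope assumptions.

brain_volume_m3 = 1.4e-3          # ~1.4 liters, a typical adult brain
voxel_m3 = 5e-9 * 5e-9 * 50e-9    # volume of one 5 x 5 x 50 nm voxel

voxels = brain_volume_m3 / voxel_m3
raw_bytes = voxels                # assume 1 byte per voxel, pre-compression

print(f"voxels: {voxels:.2e}")
print(f"raw data: {raw_bytes / 1e18:.0f} exabytes")
```

That works out to on the order of 10^21 voxels, i.e. roughly a zettabyte of raw data before any compression or structural abstraction, which is why the postprocessing step matters as much as the scanning itself.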

Conclusions

Sandberg made mention of funding and the difficulty of finding scan targets. He named some subfields that lack drivers, namely basic neuroscience, electrophysiology, and large-scale scanning (so far). He did see synergies arising from the ongoing development and industrialization of neuroscience, robotics and all the various –omics studies.

As for the order of development, Sandberg suggested 1) scanning and/or simulation, then 2) computer power, and then 3) the gradual emergence of emulation. Alternatively, 1) computer power first, then 2) simulation, and finally 3) scanning, followed by 4) the rapid emergence of emulation.

Any volunteers for slice and scan?

Monday, July 30, 2007

When Dvorsky met Minsky

Of all the celebrities and bigwigs I looked forward to meeting at TransVision 2007 there was only one person who I was truly nervous about running into – a person who gave me that 'I’m going to squeal like a little girl when I see him’ kind of feeling.

That individual was pioneering artificial intelligence researcher Marvin Minsky.

A friend cautioned me by claiming that he was a difficult man and not very approachable. I dismissed the warning and patiently waited for an opportunity to start a conversation with him.

I eventually got my chance. I was with two other friends when the three of us bumped into Minsky in the reception area of the conference hall. Without hesitation I approached and introduced myself. After we shook hands I told him how much I appreciated his work and how much of an honour it was for me to finally meet him. He nodded his head and didn’t say a word.

I was surprised by how old he looked. Minsky is now 80 years old and has been working on artificial intelligence and the mind since the 1950s. Despite his age he recently published a book, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Minsky just keeps on going.

Working to move the conversation along, I told him that while I was conducting research for my presentation I discovered that he was a presenter at the seminal SETI conference in 1971 in Byurakan. Minsky made waves at that conference by having the audacity to suggest that advanced extraterrestrial civilizations would likely be comprised of machine minds. It was a controversial suggestion, one that has only come into acceptance in more recent times. I asked Minsky for a first-hand account of how his idea was received back in 1971.

He stood there, just blankly looking at me, and didn’t say a single word. We all waited in silence for what seemed an eternity. I got the distinct impression that he was thoroughly disinterested in our little group.

Being a sucker for punishment I decided to move the conversation along. I unabashedly gave him the 10 second executive summary of my TV07 presentation, where I make some claims about the limitations of extraterrestrial civilizations and how this might account for the Great Silence and the problem that is the Fermi Paradox.

This finally got Minsky going. He had attended a SETI conference two weeks prior and was impressed with what he heard there. Minsky suggested that the reason we don’t see any signs of obvious megascale engineering or cosmological re-tuning by advanced ETI’s is that they have no sense of urgency to embark upon such projects. He argued that advanced intelligences won’t engage in these sorts of Universe changing exercises until the very late stages of the cosmos.

Jeez, I thought to myself, I hadn't considered that.

Leave it to Marvin Minsky to give me some serious food for thought a mere two hours before I was to give my talk. I was suddenly worried that this consideration would pierce a glaring hole in my argument.

After another minute of idle chit-chat I excused myself from Minsky's company and found a little corner where I could have my little micro-panic and contemplate his little theory.

The more I thought about it, however, the more unsatisfied I became with his answer; virtually everyone has a rather smug solution to the Fermi Paradox, and Marvin Minsky is no exception. Specifically, I was concerned with how such a theory could be exclusive to all civilizations. It seemed implausible to believe that not even one renegade civilization would take it upon itself to change the rules of the cosmos if it had the capacity to do so.

Moreover, given the power to reshape the Universe, a strong case could be made that a meta-ethical imperative exists to turn the madness that is existence into something profoundly more meaningful and safer. As Slavoj Žižek once said, existence is a catastrophe of the highest order. Timothy Leary described the Universe as an "ocean of chaos."

Waiting until the last minute to create a cosmological paradise (assuming such a thing is even possible) would seem to be both exceptionally risky and irresponsible -- not just to the members of a civilization capable of such feats, but to the larger universal community itself.

Phew. That's right, that's the answer. Ha, take that, Minsky!

So, after rationalizing a counter-argument to Minsky's suggestion, I was able to calm down and prepare myself for my presentation and deal with any follow-up questions that could be thrown my way.

And that's how I met Marvin Minsky.

Sure, he's not the most personable man I've ever met, but I got the sense that he's at a time in his life where a) he feels he owes nothing to anyone and b) he'd rather engage with people who can contribute to his life's work and his ongoing struggle to solve the problem that is human cognition. And he's still as sharp as they come.

It was truly an honour.