Showing posts with label brain reverse-engineering. Show all posts

Saturday, August 21, 2010

Making brains: Reverse engineering the human brain to achieve AI

The ongoing debate between PZ Myers and Ray Kurzweil about reverse engineering the human brain is fairly representative of the same debate that's been going on in futurist circles for quite some time now. And as the Myers/Kurzweil conversation attests, there is little consensus on the best way for us to achieve human-equivalent AI.

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil's upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.

While I believe that reverse engineering the human brain is the right approach, I admit that it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't exist yet. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

But we have to start somewhere, and we have to start with a plan.

Rules-based AI versus whole brain emulation

Now, some computer theorists maintain that the rules-based approach to AI will get us there first. Ben Goertzel is one such theorist. I had a chance to debate this with him at the recent H+ Summit at Harvard. His basic argument is that the WBE approach over-complexifies the issue. "We didn't have to reverse engineer the bird to learn how to fly," he told me. Essentially, Goertzel is confident that the hard-coding of artificial general intelligence (AGI) is a more elegant and direct approach; it'll simply be a matter of identifying and developing the requisite algorithms sufficient for the emergence of the traits we're looking for in an AGI—things like learning and adaptation. As for the WBE approach, Goertzel thinks it's overkill and overly time consuming. But he did concede to me that he thinks the approach is sound in principle.

This approach aside, like Kurzweil, Bostrom, Sandberg and a growing number of other thinkers, I am drawn to the WBE camp. The idea of reverse engineering the human brain makes sense to me. Unlike the rules-based approach, WBE works off a tried-and-true working model; we don't have to re-invent the wheel. Natural selection, through excruciatingly tedious trial-and-error, was able to create the human brain—and all without a preconceived design. There's no reason to believe that we can't figure out how this was done; if the brain could come about through autonomous processes, then it can most certainly come about through the diligent work of intelligent researchers.

Emulation, simulation and cognitive functionalism

Emulation refers to a 1-to-1 model where all relevant properties of a system exist. This doesn't mean recreating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the recreation of all its properties in an alternative substrate, namely a computer system.

Moreover, emulation is not simulation. We're not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it's a complete 1:1 emulation that we're after.

Now, given that we're looking to model the human brain in a digital substrate, we have to work according to a rather fundamental assumption: computational functionalism. This goes back to the Church-Turing thesis, which holds that every effectively computable function can be computed by a Turing machine. And if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine. Like a computer.
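The Church-Turing point above is abstract, so here is a toy illustration of it: a tiny Turing machine, written as plain Python, that increments a binary number. The rule table and tape contents are arbitrary choices made for this sketch; the point is only that any such step-by-step rule table can be run on an ordinary computer, which is the sense in which a computer could, in principle, run whatever update rules a brain implements.

```python
# A minimal Turing machine: a rule table of the form
# {(state, symbol): (symbol_to_write, move_direction, next_state)}
# executed one step at a time on a tape. The specific machine below
# increments a binary number; it is an illustrative example only.

def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """Run the machine until it halts; return the final tape contents."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules for binary increment: scan to the right end, then add with carry.
rules = {
    ("start", "0"): ("0", "R", "start"),  # move right over the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # past the end; turn back
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),   # 0 + carry = 1, done
    ("carry", "_"): ("1", "L", "halt"),   # overflow into a new digit
}

print(run_turing_machine("1011", rules))  # 1011 + 1 = 1100
```

Nothing about this machine is brain-like; it simply makes concrete the claim that "every physically computable function can be computed by a Turing machine" rests on machinery this mundane.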

So, if you believe that there's something mystical or vital about human cognition, you should probably stop reading now.

Or, if you believe that there's something inherently physical about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly—keeping in mind that any informational process is computational, including those brought about by chemical reactions. Moreover, intelligence, which is what we're after here, is arguably an informational rather than a strictly physical property to begin with.

The roadmap to whole brain emulation

A number of critics point out that we'll never emulate a human brain on account of the chaos and complexity inherent in such a system. On this point I disagree. As Bostrom and Sandberg have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What is meant by low-level at this point is an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has argued, the brain contains masterful arrays of redundancy; it may not be as complicated as we currently think.
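The idea of "local update rules that change brain states from moment to moment" can be made concrete with a textbook toy model: a leaky integrate-and-fire neuron. Each time step needs only the neuron's current state and its input; no global understanding of the larger system is required to run the update. All parameters below are illustrative, not biologically calibrated.

```python
# A leaky integrate-and-fire neuron: at each step the membrane
# potential decays ("leaks"), accumulates input, and fires a spike
# when it crosses threshold. This is the simplest example of a
# brain state evolving under a purely local update rule.

def simulate_lif(inputs, threshold=1.0, leak=0.9, gain=0.1):
    """Return the spike train (0s and 1s) produced by a stream of inputs."""
    v = 0.0          # membrane potential: the neuron's "state"
    spikes = []
    for current in inputs:
        v = leak * v + gain * current  # the local update rule
        if v >= threshold:             # threshold crossing -> spike
            spikes.append(1)
            v = 0.0                    # reset after firing
        else:
            spikes.append(0)
    return spikes

# Constant drive eventually pushes the potential over threshold,
# producing a regular spike train.
train = simulate_lif([2.0] * 30)
print(sum(train), "spikes in 30 steps")
```

A real emulation would involve billions of far richer units, but the structure of the problem is the same: know each unit's state and its update rule, and the global behavior follows without anyone having to understand it in advance.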

In order to gain this "low-level functional understanding" of the human brain we will need to employ a series of interdisciplinary approaches (most of which are currently underway). Specifically, we're going to require advances in:
  • Computer science: We have to improve the hardware component; we're going to need machines with the processing power required to host a human brain; we're also going to need to improve the software component so that we can create algorithmic correlates to specific brain functions.
  • Microscopy and scanning technologies: We need to better study and map the brain at the physical level; brain slicing techniques will allow us to visibly study cognitive action down to the molecular scale; specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.
  • Neurosciences: We need further advances in the neurosciences so that we may better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (currently a very grey area).
  • Genetics: We need to get better at reading our DNA for clues about how the brain is constructed. While I agree that our DNA will not tell us how to build a fully functional brain, it will tell us how to start the process of brain-building from scratch.
Essentially, WBE requires three main capabilities: (1) the ability to physically scan brains in order to acquire the necessary information, (2) the ability to interpret the scanned data to build a software model, and (3) the ability to simulate this very large model.
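The three capabilities above form a pipeline: scan, interpret, simulate. The sketch below shows only how the stages would hand off to one another; every function, data structure, and number in it is a hypothetical placeholder, corresponding to no real scanning hardware or brain model.

```python
# A stub of the scan -> interpret -> simulate pipeline. Each stage's
# output is the next stage's input; all content is illustrative.

def scan_brain(resolution_nm):
    """Stage 1: acquire raw structural data (stubbed as a tiny graph)."""
    # Stand-in for microscopy/scanning output: units and connections.
    return {"neurons": ["n1", "n2", "n3"],
            "synapses": [("n1", "n2"), ("n2", "n3")],
            "resolution_nm": resolution_nm}

def build_model(scan_data):
    """Stage 2: interpret scan data into a runnable software model."""
    # Turn the connection list into an adjacency map with toy weights.
    model = {n: {} for n in scan_data["neurons"]}
    for pre, post in scan_data["synapses"]:
        model[pre][post] = 1.0  # placeholder synaptic weight
    return model

def simulate(model, stimulus, steps):
    """Stage 3: run the model forward with a simple activation-spread rule."""
    active = set(stimulus)
    history = [sorted(active)]
    for _ in range(steps):
        active = {post for pre in active for post in model.get(pre, {})}
        history.append(sorted(active))
    return history

model = build_model(scan_brain(resolution_nm=5))
print(simulate(model, stimulus=["n1"], steps=2))
```

The design point worth noticing is that the stages are separable: scanning can improve without touching the simulator, and vice versa, which is why the interdisciplinary advances listed above can proceed in parallel.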

Time-frames

Inevitably the question as to 'when' crops up. Personally, I couldn't care less. I'm more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil's prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we're still likely heading down some blind alleys.

My own feeling is that we'll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I'm pulling this figure out of my butt as I really have no idea. It's more a feeling than a scientifically-backed estimate.

Lastly, it's worth noting that, given the capacity to recreate a human brain in digital substrate, we won't be too far off from creating considerably greater than human intelligence. AI theorist Eliezer Yudkowsky has claimed that, because of the brain's particular architecture, we may be able to accelerate its processing speed by a factor of a million relatively easily. Consequently, predictions as to when we may hit the Singularity will likely coincide with the advent of a fully emulated human brain.
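The back-of-envelope arithmetic behind claims like Yudkowsky's is worth making explicit: biological neurons fire at most a few hundred times per second, while silicon switches billions of times per second. The figures below are rough, commonly cited orders of magnitude, not measurements.

```python
# Rough speed comparison between neural and silicon "clock rates".
# Both numbers are order-of-magnitude estimates for illustration.

neuron_max_rate_hz = 200     # upper-end biological firing rate
transistor_rate_hz = 2e9     # a modest modern processor clock rate

raw_speed_ratio = transistor_rate_hz / neuron_max_rate_hz
print(f"raw switching-speed ratio: {raw_speed_ratio:.0e}")  # 1e+07

# A claimed million-fold speedup would still leave a budget of
# several machine operations per emulated neural event.
claimed_speedup = 1e6
overhead_budget = raw_speed_ratio / claimed_speedup
print(f"per-event overhead budget: {overhead_budget:.0f}x")  # 10x
```

On these (very crude) numbers, a million-fold speedup is not an exotic claim so much as a statement that emulation overhead per neural event can be kept to a small constant.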

Tuesday, August 17, 2010

Myers: Kurzweil is a "pseudo-scientific dingbat" who "does not understand" the brain

Biologist and skeptic PZ Myers has ripped into Ray Kurzweil for his recent claim that the human brain will be completely modeled by 2020 (Note: Not that it's particularly important, but Kurzweil did say it'll take two decades at the recent Singularity Summit, not one). In a rather sweeping and insulting article titled, "Ray Kurzweil does not understand the brain," Myers takes the position that the genome cannot possibly serve as an effective blueprint in our efforts to reverse engineer the human brain.

In regard to his claim that the design of the brain is in the genome, he writes:
Kurzweil knows nothing about how the brain works. It's [sic] design is not encoded in the genome: what's in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. He has to simulate all of development from his codebase in order to generate a brain simulator, and he isn't even aware of the magnitude of that problem.

We cannot derive the brain from the protein sequences underlying it; the sequences are insufficient, as well, because the nature of their expression is dependent on the environment and the history of a few hundred billion cells, each plugging along interdependently. We haven't even solved the sequence-to-protein-folding problem, which is an essential first step to executing Kurzweil's clueless algorithm. And we have absolutely no way to calculate in principle all the possible interactions and functions of a single protein with the tens of thousands of other proteins in the cell!
Myers continues:
To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it's the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism. He doesn't even comprehend the nature of the problem, and here he is pontificating on magic solutions completely free of facts and reason.
Okay, while I agree that Kurzweil's timeline is ridiculously optimistic (I'm thinking we'll achieve a modeled human brain sometime between 2075 and 2100), Myers's claim that Kurzweil "knows nothing" about the brain is as incorrect as it is disingenuous. Say what you will about Kurzweil, but the man does his homework. While I wouldn't make the claim that he does seminal work in the neurosciences, I will say that his efforts at describing the brain in computationally functionalist terms are important. The way he has described the brain's redundancy and massively repeating arrays is as fascinating as it is revealing.

Moreover, Myers's claim that the human genome cannot inform our efforts at reverse engineering the brain is equally unfair and ridiculous. While I agree that the genome is not the brain, it undeniably contains the information required to construct a brain from scratch. This is irrefutable and Myers can stamp his feet in protest all he wants. We may be unable to properly read this data as yet, or even execute the exact programming required to set the process in motion, but that doesn't mean the problem is intractable. It's still early days. In addition, we have an existing model, the brain, to constantly juxtapose against the data embedded in our DNA (e.g. cognitive mapping).

Again, it just seems excruciatingly intuitive and obvious to think that our best efforts at emulating an entire brain will be informed to a considerable extent by pre-existing data, namely our own DNA and its millions upon millions of years of evolutionary success.
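Myers's data-versus-program distinction and the point above about DNA informing brain-building are not actually in tension, and a classic toy model shows why: a Lindenmayer (L-)system, where a compact rule set "unfolds" through a developmental process into a structure far larger than the encoding itself. The rules below are arbitrary and have nothing to do with real genetics; the point is only that the final structure is written down nowhere in the rules, yet the rules still fully inform its construction.

```python
# An L-system: repeatedly rewrite each symbol according to a small
# rule table. A two-rule "genome" produces exponentially growing
# structure, illustrating compact data + developmental process.

def develop(axiom, rules, generations):
    """Rewrite every symbol of the string once per generation."""
    state = axiom
    for _ in range(generations):
        state = "".join(rules.get(ch, ch) for ch in state)
    return state

# Two rewrite rules -- a "genome" a dozen characters long...
rules = {"A": "AB", "B": "A"}

# ...that unfolds into a structure thousands of symbols long.
for gen in range(6):
    print(gen, develop("A", rules, gen))
print("length after 20 generations:", len(develop("A", rules, 20)))
```

The lengths here grow along the Fibonacci sequence, so by generation 20 the structure is four orders of magnitude larger than its encoding: the "genome" doesn't contain the structure, but reading it is still the right way to learn how the structure gets built.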

Oh, and Myers: Let's lose the ad hominem.

Saturday, August 14, 2010

IBM maps Macaque brain network

We're another step closer to reverse-engineering the human brain: IBM scientists have created the most comprehensive map of a brain’s network. The image above, called "The Mandala of the Mind," portrays the long-distance network of the Macaque monkey brain, spanning the cortex, thalamus, and basal ganglia, showing 6,602 long-distance connections between 383 brain regions.
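Some quick arithmetic on the figures reported above (383 regions, 6,602 long-distance connections) gives a feel for how sparse this network is. Treating the map as a directed graph over region pairs is my own simplifying assumption for this calculation; the paper's own methodology may differ.

```python
# Density of the reported macaque long-distance network, treating it
# as a directed graph over brain regions. Numbers are from the post
# above; the directed-graph treatment is a simplifying assumption.

regions = 383
connections = 6602

possible_directed_edges = regions * (regions - 1)
density = connections / possible_directed_edges
avg_out_degree = connections / regions

print(f"possible directed edges: {possible_directed_edges}")   # 146306
print(f"network density: {density:.1%}")                       # ~4.5%
print(f"avg connections per region: {avg_out_degree:.1f}")     # ~17.2
```

Only about one in twenty possible region-to-region pathways is present, which is part of what makes a map like this tractable to collate at all.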

The Proceedings of the National Academy of Sciences (PNAS) published a landmark paper entitled “Network architecture of the long-distance pathways in the macaque brain” (an open-access paper) by Dharmendra S. Modha (IBM Almaden) and Raghavendra Singh (IBM Research-India) with major implications for reverse-engineering the brain and developing a network of cognitive-computing chips.

Dr. Modha writes:
We have successfully uncovered and mapped the most comprehensive long-distance network of the Macaque monkey brain, which is essential for understanding the brain’s behavior, complexity, dynamics and computation. We can now gain unprecedented insight into how information travels and is processed across the brain. We have collated a comprehensive, consistent, concise, coherent, and colossal network spanning the entire brain and grounded in anatomical tracing studies that is a stepping stone to both fundamental and applied research in neuroscience and cognitive computing.
Link.