Friday, September 22, 2006

Fighting back against mind hacks

For those of you who haven't seen the Ghost in the Shell movies, what the hell are you doing wasting time here? Get out to your local video store, rent them, watch them, and then come back.

Okay, for those of you who have seen the movies, you’ll know that a major issue presented in both GitS films is the potential problem of mind hacking (or ghost hacking, as it's called there). In this future, which is very much in tune with the projections of transhumanists, cyborg minds are seamlessly interlinked with the Internet. These brains are mostly cybernetic in makeup, with some organic components remaining. This is a world in which the computational functionality of the brain is exploited and made capable of interfacing with other computers and the Web; individuals can access the Internet with their thoughts and without wires. Information is completely on demand in this future world, and individuals are techlepathic.

Unfortunately, this computational universality introduces a whole new set of problems, namely the issue of security. Cyborgs leave themselves vulnerable to mind attacks. In the first GitS movie, individuals have their ghosts hacked and modified. In one case, a man’s memories were altered so severely that a hacker could essentially control his actions like a puppet. Like Rachael in Blade Runner, this character completely believed the false set of memories that had been covertly implanted in his mind.

In the second GitS movie, Batou has his visual field tapped into and is fed hallucinations in real time. Convinced that there is an armed man up to no good in a variety store, Batou engages him in a gun fight. When the hallucination finally stops, he realizes that no one was really there and that he's completely shot up the place on his own. Batou struggles with this unnerving realization for the remainder of the film, as he can no longer be sure of the authenticity of his memories or his moment-by-moment subjective experience.

This is social engineering orders of magnitude more sophisticated than anything we are used to today. The cyborgs in GitS do try to fight back, however, through the use of so-called ‘proactive firewalls.’ Any would-be hacker runs the risk of having a counterattack of some sort unleashed upon him. Unfortunately for the cyborgs, there’s no reliable, fail-safe method to defend against such attacks. Constant vigilance is the only way to protect oneself.

Needless to say, the prospect of having your mind violated in this way makes the whole transhumanist experiment a hell of a lot less appealing. The idea that someone could violate your mind and destroy your authentic self is frightening to say the least – not to mention the nightmarish potential of having your actions controlled remotely. That said, while the ‘proactive firewall’ as portrayed in GitS is somewhat of a trope, there is a real possibility that similar countermeasures could eventually be developed.

I recently contacted neuroscientist and transhumanist Anders Sandberg to get some insight on the matter. Sandberg has given this issue considerable thought and believes that a future as portrayed in GitS is a distinct possibility. He writes,
Once you connect your brain to computer hardware, hacking becomes potentially possible. The interface will have to send and receive information from the brain, interpreted by a computer or signal processor. If you control it, you can send whatever signals you like. Of course modulo hardware safeguards that prevent high voltages etc, but arbitrary information to the linked areas seems likely. A simple neural interface would at least access visual and auditory cortex, and likely language and motor cortex. That is enough to enable some pretty interesting/nasty hacks, like priming epilepsy.

Sandberg believes that fighting back may be possible and points to current examples. He notes that firewalls already exist today that “strike back” at portscans, so it may be possible to create one for neural interfaces as well. The problem, says Sandberg, is that counterhacking with a script may not be very reliable.
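To make the portscan analogy concrete, here's a toy sketch of the detection half of a “strike back” firewall: a monitor that counts how many distinct ports each source address probes and flags likely scanners. Everything here — class names, thresholds, the signal format — is an invented illustration, not any real firewall's design or API.

```python
from collections import defaultdict

class PortscanMonitor:
    """Flag source addresses that probe many distinct ports in a row.

    A real 'strike back' firewall would take some action against a
    flagged source; this sketch only does the detection step.
    """

    def __init__(self, threshold=10):
        self.threshold = threshold               # distinct ports before flagging
        self.ports_seen = defaultdict(set)       # source IP -> probed ports
        self.flagged = set()

    def observe(self, src_ip, dst_port):
        """Record one probe; return True if src_ip now looks like a scanner."""
        self.ports_seen[src_ip].add(dst_port)
        if len(self.ports_seen[src_ip]) >= self.threshold:
            self.flagged.add(src_ip)
        return src_ip in self.flagged

# One host rapidly probing six ports trips a threshold of five.
monitor = PortscanMonitor(threshold=5)
for port in range(20, 26):
    hostile = monitor.observe("203.0.113.7", port)
print(hostile)  # True
```

The unreliability Sandberg worries about shows up even here: a scripted counterattack triggered by this kind of heuristic could easily fire on spoofed addresses or innocent traffic.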

Sandberg has given some thought to how an effective neural proactive firewall could actually work. “An ordinary firewall is a good first step,” he says, “put something between the Internet and the computer actually sending neural signals.” He also suggests that the interface be made such that it is non-programmable from the outside computers and only programmable using some physical interface. The interface should also have some safety cut-offs.

He notes that the computer doing the real signal processing, before anything is sent to the interface, should run suitably safe software and log what it does. That said, Sandberg acknowledges that avoiding programmability altogether improves safety enormously but reduces flexibility and capability equally.
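The layered defence Sandberg outlines — a gate between the outside world and the interface, hard safety cut-offs, and logging — can be sketched in a few lines. To be clear, every name, limit, and signal format below is a hypothetical assumption for illustration; no real neural-interface API is being described.

```python
import logging

MAX_AMPLITUDE = 1.0  # hypothetical hard cut-off on signal amplitude

class NeuralSignalGate:
    """Sits between the outside computer and the (imagined) neural
    interface: forwards only frames that pass the safety cut-offs,
    and logs every decision so tampering leaves a trail."""

    def __init__(self):
        self.log = logging.getLogger("neural-gate")
        self.forwarded = []
        self.rejected = []

    def submit(self, frame):
        """frame: a list of floats from the outside computer.
        Returns True if the frame was forwarded, False if rejected."""
        if any(abs(x) > MAX_AMPLITUDE for x in frame):
            self.rejected.append(frame)
            self.log.warning("rejected out-of-range frame")
            return False
        self.forwarded.append(frame)
        self.log.info("forwarded frame of %d samples", len(frame))
        return True

gate = NeuralSignalGate()
print(gate.submit([0.2, -0.5, 0.9]))  # within limits -> True
print(gate.submit([0.2, 3.0]))        # exceeds the cut-off -> False
```

The design choice mirrors Sandberg's point about programmability: because the cut-off lives in the gate rather than in software the outside computer can rewrite, a hacker who compromises the upstream machine still can't push arbitrary signals through — at the cost of the gate being inflexible.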

Ultimately, however, it’s an open question whether the cybernetic minds of the future can truly be protected. It will be very interesting to see how this issue plays out in the coming decades and how it will affect our pending cybernetic future.

________________
Related reading:

Future terror: neurohacking
