Monday, August 24, 2009

Elaine Morgan at TED: The Aquatic Ape theory lives!

Wow, this is cool: Elaine Morgan has given a TED talk about the aquatic ape theory.

I remember reading her book in the early 90s and being completely blown away by it. Since then I've strongly suspected that the path of human evolution must have taken a temporary detour through the water. But frustratingly, this theory has never taken off -- though Morgan claims that it now has some heavy-hitting supporters, including David Attenborough and Daniel Dennett.

In this TED Talk, Elaine Morgan, who is now in her 80s, provides an excellent overview of the hypothesis and shows just how passionate she is about the subject matter.

Sunday, August 23, 2009

Imitating nature

A pair of biomimicry-related videos:


ECCE the anthropomimetic robot: A robot with all the inner structures and mechanisms of a human (including bones, joints, muscles, and tendons), giving it a greater potential for human-like action and interaction in the world.



TED Talks: Janine Benyus: Biomimicry in action: Janine Benyus has a message for inventors: When solving a design problem, look to nature first. There you'll find inspired designs for making things waterproof, aerodynamic, solar-powered and more. Here she reveals dozens of new products that take their cue from nature with spectacular results.

High-speed robotic hand

Ishikawa Komuro Lab's high-speed robot hand performing impressive acts of dexterity and skillful manipulation. More.

Sunday, August 16, 2009

The Real Way to Feel Safe with Artificial Intelligence

Cross-posted at http://davidbrin.blogspot.com/... anyone is welcome to join the discussion there....


=====

Sorry to have posted so little, of late.  We have been ensnared by a huge and complex Eagle Scout Project here... plus another kid making Black Belt, and yet another at Screenwriting camp... then the first one showing me endless online photos of "cars it would be cool to buy..."

And so, clearing my deck of topics to rant about, I'd like to quickly post this rumination on giving rights to artificial intelligences.  Bruce Sterling has lately raised this perennial issue, as did Mike Treder in an excellent piece suggesting that our initial attitudes toward such creatures may color the entire outcome of a purported "technological singularity."


The Real Reason to Ensure AI Rights

No issue is of greater importance than ensuring that our new, quasi-intelligent creations are raised properly.  To oversimplify terribly: Hollywood visions of future machine intelligence range from TERMINATOR-like madness to the admirable traits portrayed in movies like AI or BICENTENNIAL MAN.

I've spoken elsewhere of one great irony -- that there is nothing new about this endeavor.  That every human generation embarks upon a similar exercise -- creating new entities that start out less intelligent and virtually helpless, but gradually transform into beings that are stronger, more capable, and sometimes more brilliant than their parents can imagine.

The difference between this older style of parenthood and the New Creation is not only that we are attempting to do all of the design de novo, with very little help from nature or evolution, but also that the pace is speeding up. It may even accelerate, once semi-intelligent computers assist in fashioning new and better successors.  

Humanity is used to the older method, in which each next generation reliably includes many who rise up, better than their ancestors... while many others sink lower, even into depravity.  It all sort of balanced out (amid great pain), but henceforth we cannot afford such haphazard ratios,  from either our traditional-organic heirs or their cybernetic creche-mates.

I agree that our near-future politics and social norms will powerfully affect what kind of "singularity" transformation we'll get -- ranging from the dismal fears of Bill Joy and Ted Kaczynski to the fizzing fantasies of Ray Kurzweil.  But first, let me say it's not the surface politics of our useless, almost-meaningless so-called Left-vs-Right axis. Nor will it be primarily a matter of allocation of taxed resources. Except for investments in science and education and infrastructure, those are not where the main action will be.  They will not determine the difference between "good" and "bad" transcendence.  Between THE MATRIX  and, say, FOUNDATION'S TRIUMPH.

No, what I figure will be the determining issue is this.  Shall we maintain momentum and fealty to the underlying concepts of the Western Enlightenment? Concepts that run even deeper than democracy or the principle of equal rights, because they form the underlying, pragmatic basis for our entire renaissance.


Going With What Has Already Worked

These are, I believe, the pillars of our civilization -- the reasons that we have accomplished so much more than any other, and why we may even succeed in doing it right, when we create Neo-Humanity.

1.  We acknowledge that individual human beings  -- and also, presumably, the expected caste of neo-humans -- are inherently flawed in their subjectively biased views of the world.  

In other words...  we are all delusional! Even the very best of us.  Even (despite all their protestations to the contrary) all leaders.  And even (especially) those of you out there who believe that you have it all sussed.

This is crucial. Six thousand years of history show this to be the one towering fact of human nature.  Our combination of delusion and denial is the core predicament that stymied our creative, problem-solving abilities, delaying the great flowering that we're now part of.

These dismal traits still erupt everywhere, in all of us.  Moreover, it is especially important to assume that delusion and denial will arise, inevitably, in the new intelligent entities that we're about to create.  If we are wise parents, we will teach them to say what all good scientists are schooled to say, repeatedly: "I might be mistaken."  But that, alone, is not enough.

2.  There is a solution to this curse, but it is not at all the one that was recommended by Plato, or by any of the other great sages of the past.

Oh, they knew all about the delusion problem, of course.  See Plato's "allegory of the cave," or the sayings of Buddha, or any of a myriad other sage critiques of fallible human subjectivity.  These savants were correct to point at the core problem... only then, each of them claimed that it could be solved by following their exact prescription for Right Thinking. And followers bought in, reciting or following the incantations and flattering themselves that they had a path that freed them of error.

Painfully, at great cost, we have learned that there is no such prescription. Alack, the net sum of "wisdom" that those prophets all offered only wound up fostering even more delusion.  It turns out that nothing -- no method or palliative applied by a single human mind, upon itself -- will ever accomplish the objective.  

Oh, sure, logic and reason and sound habits of scientifically-informed self-doubt can help a lot.  They may cut the error rate in half, or even by a factor of a hundred!  Nevertheless, you and I are still delusional twits.  We always will be!  It is inherent.  Live with it.  Our ancestors had to live with the consequences of this inherent human curse.

Ah, but things turned out not to be hopeless, after all!  For, eventually, the Enlightenment offered a completely different way to deal with this perennial dilemma.  We (and presumably our neo-human creations) can be forced to notice, acknowledge, and sometimes even correct our favorite delusions, through one trick that lies at the heart of every Enlightenment innovation -- the process called Reciprocal Accountability (RA).

In order to overcome denial and delusion, the Enlightenment tried something unprecedented -- doing without the gurus and sages and kings and priests.  Instead, it nurtured competitive systems in markets, democracy, science and courts, through which back and forth criticism is encouraged to flow, detecting many errors and allowing many innovations to improve.  Oh, competition isn't everything! Cooperation and generosity and ideals are clearly important parts of the process, too. But ingrained reciprocality of criticism -- inescapable by any leader -- is the core innovation.

3.  These systems -- including the "checks and balances" exemplified in the U.S. Constitution -- help to prevent the sole-sourcing of power, not only by old-fashioned human tyrants, but also the kind of oppression that we all fear might happen if the Singularity were to run away, controlled by just one or a few mega-machine-minds: the nightmare scenarios portrayed in THE MATRIX, TERMINATOR, or the Asimov universe.


The Way to Ensure AI is Both Sane and Wise

How can we ever feel safe, in a near future dominated by powerful artificial intelligences that far outstrip our own? What force or power could possibly keep such a being, or beings, accountable?  

Um, by now, isn't it obvious?

The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board.  

This outcome -- almost never portrayed in fiction -- would strike us as inherently more likely to be safe and successful.  After all, isn't that today's situation?  The vast majority of citizens do not understand arcane matters of science or policy or finance.  They watch the wrangling among alphas and are reassured to see them applying accountability upon each other... a reassurance that was betrayed by recent attempts to draw clouds of secrecy across all of our deliberative processes.

Sure, it is profoundly imperfect, and fickle citizens can be swayed by mogul-controlled media to apply their votes in unwise directions.  We sigh and shake our heads... as future AI Leaders will moan in near-despair over organic-human sovereignty.  But, if they are truly wise, they'll continue this compact.  Because the most far-seeing among them will recognize that "I might be wrong" is still the greatest thing that any mind can say.  And that reciprocal criticism is even better.

Alas, even those who want to keep our values strong, heading into the Singularity Age, seldom parse the issue down to this fundamental level.  They talk -- for example -- about giving AI "rights" in purely moral terms... or perhaps as a way to placate them and prevent them from rebelling and squashing us.

But the real reason to do this is far more pragmatic.  If the new AIs feel vested in a civilization that considers them "human," then they may engage in our give-and-take process of shining light upon delusion. Each other's delusions, above all.

Reciprocal accountability -- extrapolated to a higher level -- may thus maintain the core innovation of our civilization. Its central and vital insight.

And thus, we may find that our new leaders -- our godlike grandchildren -- will still care about us... and keep trying to explain.

-

Saturday, August 15, 2009

Still in vacation mode

Regular blogging to resume shortly. Or at least not until the weather starts to get crappy.