
Tuesday, August 17, 2010

On Skeptically Speaking Radio this coming Friday August 20

In light of the recently concluded Singularity Summit, I'll be debating blogger Greg Fish on Skeptically Speaking Radio this coming August 20. We'll be discussing the Singularity and various pathways towards powerful AI.

This will mark my second appearance on Skeptically Speaking. My first debate with Greg Fish can be found here.

Thursday, July 15, 2010

Gelernter's 'dream logic' and the quest for artificial intelligence

Internet pioneer David Gelernter explores the ethereal fuzziness of cognition in his Edge.org article, "Dream-logic, the internet and artificial consciousness." He's right about the imperfect and dream-like nature of cognition and conscious thought; AI theorists should certainly take notice.

But Gelernter starts to go off the rails toward the conclusion of the essay. His claim that an artificial consciousness would be nothing more than a zombie mind is unconvincing, as is his contention that emotional capacities are a necessary component of the cognitive spectrum. There is no reason to believe, from a functionalist perspective, that the neural correlates of consciousness cannot take root in an alternative, non-biological medium. And there are examples of fully conscious human beings who lack the ability to experience emotions.

Gelernter, like a lot of AI theorists, needs to brush up on his neuroscience.

At any rate, here's an excerpt from the article; you can judge the efficacy of his arguments for yourself:
As far as we know, there is no way to achieve consciousness on a computer or any collection of computers. However — and this is the interesting (or dangerous) part — the cognitive spectrum, once we understand its operation and fill in the details, is a guide to the construction of simulated or artificial thought. We can build software models of Consciousness and Memory, and then set them in rhythmic motion.

The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence."

But first there are formidable technical problems. For example: there can be no cognitive spectrum without emotion. Emotion becomes an increasingly important bridge between thoughts as focus drops and re-experiencing replaces recall. Computers have always seemed like good models of the human brain; in some very broad sense, both the digital computer and the brain are information processors. But emotions are produced by brain and body working together. When you feel happy, your body feels a certain way; your mind notices; and the resonance between body and mind produces an emotion. "I say again, that the body makes the mind" (John Donne).

The natural correspondence between computer and brain doesn't hold between computer and body. Yet artificial thought will require a software model of the body, in order to produce a good model of emotion, which is necessary to artificial thought. In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves. (The solution will probably take the form of software that is "trained" to imitate the emotional responses of a particular human subject.)

One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say "that makes me happy," but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being.

And then what?

Wednesday, June 24, 2009

Ranking the most powerful forces in the Universe

There are a large number of forces at work in the Universe, some more powerful than others -- and I'm not talking about the four fundamental forces of nature. A force in the context I'm talking about is any phenomenon in the Universe that exerts a powerful effect or influence on its environment. Many of these phenomena quite obviously depend on the four basic forces to function (gravity, electromagnetism, the weak interaction and the strong interaction), but it's the collective and emergent effects of these fundamental forces that I'm interested in.

And when I say power I don't just mean the capacity to destroy or wreak havoc, though that's an important criterion. A force should also be considered powerful if it can profoundly reorganize or manipulate its environment in a coherent or constructive way.

Albert Einstein once quipped that the most powerful force in the Universe was compound interest. While he did have a point, and with all due respect to the Master, I present to you my list of the four most powerful phenomena currently making an impact in the Universe:

4. Supermassive Black Holes

There's no question that black holes are scary; they're the only places where the Universe can truly destroy a part of itself.

Indeed, Einstein's theory of general relativity opened the door to the modern study of black holes, and it's often quipped (though the line probably isn't Einstein's own) that they are "where God has divided by zero." The gravitational singularity, where the known laws of physics break down, remains one of science's deepest unsolved mysteries.

Somewhat counterintuitively, black holes harness the weakest of the four fundamental forces, gravity, to create a region of space with a gravitational field so powerful that nothing, not even light, can escape its pull. They're called "black" because they absorb all the light that hits them and reflect nothing. They have a one-way surface, the event horizon, into which objects can fall, but out of which nothing (save for Hawking radiation) can escape.

Black holes can also vary in size and gravitational intensity. Supermassive black holes are a million to a billion times the mass of a typical stellar black hole. Most galaxies, if not all, are believed to contain supermassive black holes at their centers (including the Milky Way).

And recent studies suggest that they are much larger than previously thought. Computer models indicate that the supermassive black hole at the heart of the giant galaxy M87 has the mass of 6.4 billion suns -- two to three times more massive than previous estimates.
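To get a sense of the scales involved, here's a quick back-of-the-envelope calculation (a sketch of my own, using the standard Schwarzschild radius formula r_s = 2GM/c^2 and rounded physical constants) of how big that event horizon actually is:

```python
# Back-of-the-envelope: the Schwarzschild radius (event horizon size)
# of M87's supermassive black hole, r_s = 2 * G * M / c^2.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the sun, kg
AU = 1.496e11      # astronomical unit, m

mass = 6.4e9 * M_SUN             # 6.4 billion solar masses
r_s = 2 * G * mass / c**2

print(f"Event horizon radius: {r_s:.1e} m, or about {r_s / AU:.0f} AU")
# ~1.9e13 m, roughly 126 AU -- about four times the radius of
# Neptune's orbit, for a single object.
```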

That's a lot of pull.

Indeed, should anything have the misfortune of getting close enough to a supermassive black hole, whether it be gas, stars or entire solar systems, it would be sucked into oblivion. Its gravitational pull would be so overwhelming that it would hurl gas and stars around it at almost the speed of light; the violent clashing would heat the gas up to over a million degrees.

Some have suggested that the supermassive black hole is the most powerful force in the Universe. While its ability to destroy the very fabric of space and time is undeniably impressive (to say the least), its localized and limited nature prevents it from being ranked any higher than fourth on my list. A black hole could never subsume an entire galaxy, for example, except perhaps over cosmologically long time frames.

3. Gamma-Ray Bursts

The power of gamma-ray bursts (GRBs) defies human comprehension.

Imagine a hypergiant star at the end of its life, an object 150 times more massive than our own sun. Gamma radiation in its core becomes so energetic that it begins converting into matter (electron-positron pairs). The resulting drop in radiation pressure causes the star to collapse, which dramatically accelerates the thermonuclear reactions burning within it. All this added energy overpowers the star's gravitational attraction and it explodes in a fury of energy -- the hypergiant has gone hypernova.

This is not the stuff of fiction or theory -- explosions like this have been observed. Hypernovas of this size can instantly expel about 10^46 joules. This is more energy than our sun produces over a period of 10 billion years. 10 billion years! In one cataclysmic explosion!
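If that comparison sounds too incredible to be true, the arithmetic is easy to check (my own back-of-the-envelope sketch, using the sun's measured luminosity of roughly 3.8 x 10^26 watts):

```python
# Sanity check: 10^46 joules versus the sun's total output over
# 10 billion years.

L_SUN = 3.828e26           # solar luminosity, watts (joules/second)
SECONDS_PER_YEAR = 3.156e7

sun_output_10gyr = L_SUN * 1e10 * SECONDS_PER_YEAR  # ~1.2e44 J
hypernova_energy = 1e46                             # joules

print(f"Sun over 10 billion years: {sun_output_10gyr:.1e} J")
print(f"Hypernova burst:           {hypernova_energy:.1e} J")
print(f"Ratio: ~{hypernova_energy / sun_output_10gyr:.0f}x")
# The burst carries on the order of 100 times the sun's entire
# 10-billion-year output -- released in moments.
```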

Hypernovas can wreak tremendous havoc in their local neighborhood, effectively sterilizing the region. These explosions produce highly collimated beams of hard gamma rays that extend outward from the exploding star. Any unfortunate life-bearing planet that comes into contact with those beams would suffer a mass extinction (if not total extinction, depending on its proximity to the hypernova). The gamma rays would eat away the ozone layer, and the resulting buildup of light-blocking NO2 molecules could indirectly trigger the onset of an ice age.

Supernovas can shoot out directed beams of gamma rays to a distance of 100 light years, while hypernovas can project gamma-ray bursts as far as 500 to 1,000 light years.

We are currently able to detect an average of about one gamma-ray burst per day. Because gamma-ray bursts are visible out to distances encompassing most of the observable Universe -- a volume containing many billions of galaxies -- this suggests that they are exceedingly rare events in any given galaxy. Determining an exact rate is difficult, but for a galaxy of approximately the same size as the Milky Way, the expected rate (for hypernova-type events) is about one burst every 100,000 to 1,000,000 years.

Thankfully, the hypergiant Eta Carinae, which is on the verge of going supernova, is well over 7,500 light years from Earth. We should be safe when it goes off, though you'll be able to read by its light at night.

But not so fast -- our safety may not be guaranteed. Some scientists believe that gamma-ray bursters may be responsible for sterilizing gigantic swaths of the galaxy -- in some cases as much as a quarter of it. Such speculation has given rise to the theory that gamma-ray bursters are the reason for the Fermi Paradox: exploding stars continually stunting the potential for life to advance. That destructive reach earns gamma-ray bursts the #3 spot on my list.

2. Self-Replication

A funny thing started to happen about 8 billion years ago: pieces of the Universe started to make copies of themselves. This in turn kindled another phenomenon: natural selection.

While this might not seem so impressive or powerful in its own right, it's the complexification and the emergent effects of this process that are interesting; what began as fairly straightforward cellular replication, at least on Earth, eventually progressed into viruses, dinosaurs and human beings.

Self-replicating RNA/DNA has completely reshaped the planet, its surface and atmosphere molded by the processes of life. And it's a process that has proven remarkably resilient. The Earth has witnessed some extremely calamitous events over its history, namely the Big Five mass extinctions, but life has picked itself up, dusted itself off, and started anew.

Now, what makes self-replication all the more powerful is that it is not limited to a biological substrate. Computer viruses and memes provide other examples of how self-replication can work. Replicators can also be categorized according to the kind of material support they require in order to go about self-assembly. In addition to natural replicators, which derive all or most of their design from nonhuman sources (i.e. natural selection), there's also the potential for:
  • Autotrophic replicators: Devices that could reproduce themselves in the wild and mine their own materials. It's thought that non-biological autotrophic replicators could be designed by humans and could easily accept specifications for human products.
  • Self-reproductive systems: Systems that could produce copies of themselves from industrial feedstocks such as metal bars and wire.
  • Self-assembling systems: Systems that could assemble copies of themselves from finished and delivered parts. Simple examples of such systems have been demonstrated at the macro scale.
It's conjectured that a particularly potent form of self-replication will eventually arrive with molecular manufacturing and the introduction of self-replicating nanobots. One version of this vision involves swarms of coordinated nanoscale robots working in tandem.

Microscopic self-replicating nanobots may not sound particularly powerful or scary, but what is scary is the prospect of unchecked exponential growth. A fear exists that nanomechanical robots could self-replicate using naturally occurring materials and consume the entire planet in their hunger for raw materials. Alternatively, they could simply crowd out natural life, outcompeting it for energy. This is what has been referred to as the grey goo or ecophagy scenario. Some estimates show, for example, that the Earth's atmosphere could be destroyed by such devices in a little under two years.
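The scariness comes straight out of the arithmetic of doubling. Here's a toy model (entirely my own illustration; the replicator mass, biosphere mass and doubling time are loose assumptions, not figures from the ecophagy literature):

```python
# Toy model of unchecked exponential self-replication: how long until
# replicators rival the mass of Earth's biosphere? All three numbers
# below are loose assumptions for illustration only.

import math

REPLICATOR_MASS_KG = 1e-15   # one microscopic replicator (assumed)
BIOSPHERE_MASS_KG = 1e15     # order-of-magnitude total biomass (assumed)
DOUBLING_TIME_HOURS = 1.0    # assumed doubling time

doublings = math.log2(BIOSPHERE_MASS_KG / REPLICATOR_MASS_KG)
days = doublings * DOUBLING_TIME_HOURS / 24

print(f"{doublings:.0f} doublings, ~{days:.0f} days")
# About 100 doublings -- roughly four days at one doubling per hour.
# The point isn't the exact figures; it's that doubling makes the
# timescale absurdly short no matter how small the starting seed.
```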

Self-replication is also powerful in terms of what it could mean for interstellar exploration and colonization. By using exponentially self-replicating Von Neumann probes, for example, the Galaxy could be colonized in as little as one to ten million years.
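A crude model shows where figures like that come from (again, my own sketch; the probe speed, hop distance and replication delay are illustrative assumptions):

```python
# Crude wavefront model of galactic colonization by self-replicating
# probes. The speed, hop distance and replication delay are
# illustrative assumptions.

GALAXY_RADIUS_LY = 50_000        # light years, center to rim
PROBE_SPEED_C = 0.1              # cruise speed as a fraction of c (assumed)
HOP_DISTANCE_LY = 10             # typical distance between stops (assumed)
REPLICATION_DELAY_YEARS = 100    # time to build copies at each stop (assumed)

hops = GALAXY_RADIUS_LY / HOP_DISTANCE_LY
travel_years = GALAXY_RADIUS_LY / PROBE_SPEED_C
replication_years = hops * REPLICATION_DELAY_YEARS

total_years = travel_years + replication_years
print(f"~{total_years / 1e6:.1f} million years to sweep the galaxy")
# ~1.0 million years with these numbers; slower probes or longer
# pauses at each stop push it toward the ten-million-year end.
```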

And of course, if you can build, you can destroy; the same technology could be used to sterilize the Galaxy in the same amount of time [for more on this topic read my article, "Seven ways to control the Galaxy with self-replicating probes"].

Consequently, self-replication sits at #2 on my list; its remarkable ability to reshape matter, adapt, grow, consume, build and destroy makes it a formidable force to be reckoned with.

1. Intelligence

Without a doubt, the most powerful force in the Universe is intelligence.

The capacity to collect, share, reorganize and act on information is unlike anything else in this Universe. Intelligent beings can build tools, adapt to and radically change their environment, create complex systems and act with reasoned intention. Intelligent beings can plan, solve problems, think abstractly, comprehend ideas, use language and learn.

In addition, intelligence can reflect on itself, predict outcomes and avoid peril; autonomous systems, for the most part, are incapable of such action.

Humanity, a particularly intelligent bunch owing to a few fortuitous evolutionary traits, has -- for better or worse -- become a force of nature on Earth. Our species has reworked the surface of the planet to meet its needs, significantly impacting virtually every other species (driving many to extinction) and irrevocably altering the condition of the atmosphere itself. Not content to stay at home, we have even sent our artifacts into space and visited our very own moon.

While some cynics may scoff at so-called human 'intelligence', there's no denying that it has made a significant impact on the biosphere.

Moreover, what we think of as intelligence today may be a far cry from what's possible. The advent of artificial superintelligence is poised to be a game-changer. A superintelligent agent, which may or may not have conscious or subjective experiences, is an intellect that is much smarter than the best human brains in practically every field, including problem solving, brute calculation, scientific creativity, general wisdom and social skills. Such entities may function as super-expert systems that work to execute any goal they are given, so long as it falls within the laws of physics and they have access to the requisite resources.

That's power. And that's why it's called the Technological Singularity; we have no idea how such an agent will behave once we get past the horizon.

Another, more radical possibility (if that's not radical enough) is that the future of the Universe itself will be influenced by intelligent life. The nature of intelligence and its presence in the Universe raises a question with only two possible answers: intelligence is either 1) a cosmological epiphenomenon, or 2) an intrinsic part of the Universe's inner workings. If it's the latter, perhaps we have some work to do in the future to ensure the Universe's survival or to take part in its reproductive strategy.

Theories already exist regarding stellar engineering -- tweaking a local sun in such a way as to extend its lifespan. Future civilizations may eventually figure out how to re-engineer the Universe itself (such as re-working its constants) or create an escape hatch to basement universes. Thinkers who have explored these possibilities include Milan Cirkovic, John Smart, Ray Kurzweil, Alan Guth and James N. Gardner (for example, see Gardner's book Biocosm: The New Scientific Theory of Evolution: Intelligent Life is the Architect of the Universe).

Intelligence as a force may not be particularly impressive today when considered alongside supermassive black holes, gamma-ray bursts and exponential self-replication. But it may be someday. The ability of intelligence to re-engineer its environment and work towards growth, refinement and self-preservation gives it the potential to become the most powerful force in the Universe.

Tuesday, January 6, 2009

Cozying up with Deep Blue: A SentDev Classic

"Advanced Chess" pitting computer-human teams against each other shows how humans can avoid obsolescence through symbiotic relationships with technology

Several weeks ago, while bored on a commuter train, I decided to pull out my Palm Pilot and play a game of chess. Seeing as I had no one to play against, I decided to try my hand against the computer. I was quite confident that I'd have little difficulty keeping up—it's hardly Deep Blue, after all.

I arbitrarily picked an average difficulty level and proceeded to get my ass kicked in frighteningly short order. Somewhat discouraged, I then tried the easiest level. Once again, I suffered an embarrassing thrashing.

With my dignity soiled, I vowed to improve my chess skills. I wasn't going to let some puny Palm Pilot beat me at chess. I dusted off an old chess manual and practiced some standard openings and strategies. I can now proudly say that I can beat my handheld at level 5. My goal is to beat it at level 8, maximum difficulty.

Playing a computer at chess can be rather humbling. As you're waiting for it to make its move, watching the "thinking" progress bar move from left to right, it's daunting to consider how many moves it's evaluating. I'm happy if I can think three to four moves ahead. The computer can contemplate thousands every second.

I'm sure Garry Kasparov felt the same way back in 1996 when pitted against Deep Blue. Now that computer could crunch the numbers. Written in C and running under the AIX operating system, Deep Blue was a massively parallel, 30-node, RS/6000, SP-based computer system enhanced with 480 special purpose VLSI chess processors. Odds are those stats are meaningless to you, but this one shouldn't be: This mother could crunch 100,000,000 positions per second.

100,000,000 positions per second!

It's a wonder that Kasparov could play against it at all. Of course, there's more to chess than just raw computation. It's a game of subtlety, nuance and sophisticated psychology and strategy—elements that are far beyond the capabilities of even the most powerful computers. In fact, prior to Kasparov's defeat, some chess experts maintained that computers would never be capable of defeating grandmasters. But thanks to Deep Blue and its successors, we all know that this is in fact possible.
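To see why even 100,000,000 positions per second isn't enough to conquer chess by brute force alone, consider the math of the game tree (a sketch of my own; the branching factor of ~35 is a standard rough figure, and this ignores the alpha-beta pruning that lets real engines search much deeper):

```python
# Why raw speed alone can't conquer chess: with ~35 legal moves per
# position, the game tree grows as 35^depth. How many plies can a
# fixed budget of evaluated positions actually reach?

import math

BRANCHING_FACTOR = 35               # rough average for chess
POSITIONS_PER_SECOND = 100_000_000  # Deep Blue's reported speed

for seconds in (1, 60, 180):
    budget = POSITIONS_PER_SECOND * seconds
    depth = math.log(budget, BRANCHING_FACTOR)
    print(f"{seconds:>4} s budget: ~{depth:.1f} plies of exhaustive search")

# Output: ~5.2 plies in one second, and still only ~6.6 plies after
# three minutes. Exponential growth is why engines need pruning and
# heuristics, not just speed.
```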

Kasparov's loss was indeed a deep shock to the chess world. It was a significant milestone in the history of chess, not just because a reigning world champion finally lost against a computer, but because of the ramifications to the game itself. Did Kasparov's loss signify the beginning of the end for meaningful human interaction in professional chess? Would future tournaments see humans as mere spectators to machines?

More broadly, did Deep Blue's intrusion into a previously sanctified human realm represent the beginning of a larger trend? If computers could now defeat even our grandmasters, what else might they be capable of? Indeed, the steady onslaught of Moore's Law and breakthroughs in parallel processing has some fearing the rise of AI and the subsequent relegation of human minds. Are Homo sapiens poised for obsolescence and even replacement?

Well, if Kasparov has his way, the answer is no—and not because he feels that humans can continue to compete with computers. Rather, Kasparov believes the future of chess can be advanced through the cooperation of computers with humans. Consequently, Kasparov's idea of Advanced Chess, where human-machine teams compete against other human-machine teams, offers an effective framework for how humanity as a whole should manage its ongoing relationship with its advancing technologies. To avoid replacement, we need to establish a symbiosis with our technologies and create something greater than the sum of its parts.

Computer chess vs. human chess

In all fairness to Kasparov and other expert chess players, computers still aren't able to consistently defeat their human counterparts. After losing to Deep Blue in the first game of their 1996 match, Kasparov rebounded by winning three games and drawing two, defeating it by a final score of four to two. Kasparov lost the 1997 rematch, but managed a draw against X3D Fritz in 2003. Similarly, grandmaster Vladimir Kramnik tied Deep Fritz in an eight-game match a year earlier. As it currently stands, things are quite even in terms of what the best computers can do against the best players.

But what's interesting is not so much the parity; it's that humans and machines play chess so differently yet still come up even. Computers and humans have unique weaknesses that are clearly offset by their strengths.

It's generally acknowledged that computers are superior calculators, while humans are better at long-range planning. Computers cannot be psychologically intimidated (something Kasparov does very well against his human opponents), nor are they capable of suffering from fatigue or other physical problems (during the 1984 World Championships, for example, Anatoly Karpov lost 22 pounds and was hospitalized several times as he battled Kasparov in a protracted tournament that saw them play well over 30 games). Computers are also immune to making silly mistakes (Kramnik lost game five against Fritz after making a severe blunder).

Humans, on the other hand, can plan, bluff and, most importantly, adapt. Kasparov, in all his encounters with computers, tends to finish more strongly than he begins. Even in my own clashes with my Palm Pilot, I have noticed that my computer opponent gets quite messed up when I open with the Queen's Gambit. Consequently, that's now my standard opening against it. The Palm, on the other hand, cannot learn from its mistakes, and has no idea that I fare very poorly in endgame scenarios.

Computers are also quite poor at recognizing when something is irrelevant. During its first match against Kasparov, for example, Deep Blue captured an inconsequential pawn at a critical point in the game. It's thought that Deep Blue sensed no threat from Kasparov at the time and calculated that the move wouldn't detract from the attack it was developing on the other side of the board. It was merely being mindlessly methodical by claiming the material.

Assistive devices

In consideration of these differences and unique strengths, it's safe to say that the best chess-playing entity in existence today is neither a computer nor a human, but a computer and a human working together. As the saying often attributed to Albert Einstein goes, "Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination."

Indeed, computers have changed the face of chess—not just because they have proven to be formidable opponents, but because they can also act as potent assistive devices. Grandmasters now use them extensively for planning and practice. Computers have generated exhaustive endgame tables that map virtually all scenarios involving up to five pieces. Scenario analysis is now possible at an unprecedented scale, including backward analysis (starting from a position with a large edge and working back toward a starting position) to find new branches worth exploring, and multi-variation analysis to examine promising alternate tries.
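The trick behind those endgame tables is retrograde analysis: start from the terminal positions and work backward, labeling every position as won or lost. Here's the idea on a toy game (my own illustration using one-pile Nim, not chess software; real tablebases apply the same backward induction to millions of chess positions):

```python
# Toy retrograde analysis, the idea behind chess endgame tables:
# label every position as a win or a loss by working backward from
# the terminal positions. Game: one-pile Nim, where a move removes
# 1, 2 or 3 stones and taking the last stone wins.

def build_endgame_table(max_stones=20):
    table = {0: "loss"}  # no stones left: the player to move has already lost
    for n in range(1, max_stones + 1):
        successors = [n - k for k in (1, 2, 3) if n - k >= 0]
        # A position is won if any move leads to a position that is
        # lost for the opponent.
        table[n] = "win" if any(table[s] == "loss" for s in successors) else "loss"
    return table

table = build_endgame_table()
print(table[4])   # "loss" -- multiples of 4 are lost for the player to move
print(table[7])   # "win"  -- remove 3 stones, leaving the opponent with 4
```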

Simply put, not using computers to assist in chess play would be as silly as not using calculators to help us do math. Further, when looked at as prostheses, computers clearly expand human capacities, helping us take our activities and disciplines to the next level. They enable us to partake in endeavors that were previously cognitively impossible.

Recognizing this, Kasparov proposed a new form of competition during the late '90s. Inspired by his matches against computers, Kasparov felt that humans and computers should cooperate instead of contending with each other. Called "Advanced Chess," the new style of play would see human players team up with a computer and compete against another man-machine unit.

Kasparov got the ball rolling by organizing a six-game Advanced Chess match against Veselin Topalov in June of 1998, with Kasparov using Fritz 5 and Topalov using ChessBase 7.0. The match ended in a three-three draw. Kasparov commented afterward, "My prediction seems to be true that in Advanced Chess it's all over once someone gets a won position. This experiment was exciting and helped spectators understand what's going on. It was quite enjoyable and will take a very big and prestigious place in the history of chess."

Since this initial match, Advanced Chess tournaments have been held annually in León, Spain. Grandmaster Viswanathan Anand, the winner of three titles, is currently considered the world's best Advanced Chess player. After losing to Kramnik in 2002, Anand commented, "I think in general people tend to overestimate the importance of the computer in the competitions. You can do a lot of things with the computer but you still have to play good chess...I don't really feel that the computer alone can change the objective truth of the position."

Expanding on Anand's point, advocates of Advanced Chess argue that the strength of a player does not come from any of the components of the human-computer team, but rather from the symbiosis of the two. The combination of man and machine results in a "player" that is endowed with the computer's extreme power and accuracy and the human's creativity and sagacity.

Ultimately, the combined skills of knowledgeable humans and computer chess engines can produce a result stronger than either alone. Advanced Chess has resulted in heights never before seen in chess. It has produced blunder-free games with the beauty and quality of both perfect tactical play and highly meaningful strategic plans, and it has offered chess aficionados remarkable insight into the thought processes of strong human chess players and strong chess computers.

Cooperation and merger, not obsolescence

With the rise in prominence of computers in the chess world, Kasparov refused to throw up his hands in despair and declare the end of human involvement in the game. Instead, he devised a new activity that would combine the best of what the digital world had to offer with that of the biological. The result was something greater than the sum of its individual parts.

The rest of society should learn from this example. Naturally, people are growing increasingly wary of supercomputers and the potential for AI; it's understandable that people fear a future in which humans are replaced by machines. But as the example of Advanced Chess shows, that's not necessarily what's going to happen. The development of AI and other information technologies will continue to advance based on how we choose to adapt to them and how they adapt to us. Further, human control over where and how advanced technologies develop will have a significant impact on the kinds of collaborative and symbiotic systems that emerge.

Thanks to human ingenuity, our disciplines, activities and goals will continue to change and evolve, taking the human experience to unprecedented places as we become capable of things never before possible.

Like beating my Palm Pilot at level 6.

The article was originally published on March 19, 2005.