
Monday, August 23, 2010

It's not all about Ray: There's more to Singularity studies than Kurzweil

I'm finding myself a bit disturbed these days about how fashionable it has become to hate Ray Kurzweil.

It wasn't too long ago, with the publication of The Age of Spiritual Machines, that he was the cause célèbre of our time. I'm somewhat at a loss to explain what has happened in the public's mind since then; his ideas certainly haven't changed all that much. Perhaps it's a collective impatience with his timelines; the fact that it isn't 2049 yet has led to disillusionment. Or maybe it's because people are afraid of buying into a set of predictions that may never come true—a kind of protection against disappointment or looking foolish.

What's more likely, however, is that his ideas have reached a much wider audience since the release of Spiritual Machines and The Singularity is Near. In the early days his work was picked up by a community that was already primed to accept these sorts of wide-eyed speculations as a valid line of inquiry. These days, everybody and his brother knows about Kurzweil. This has naturally led to an increased chorus of criticism from those who take issue with his thesis—experts and non-experts alike.

As a consequence of this popularity and infamy, Ray has been given a kind of unwarranted ownership over the term 'Singularity.' This has proven problematic on several levels, including the fact that his particular definition and description of the technological singularity is probably not the best one. Kurzweil has essentially equated the Singularity with the steady, accelerating growth of all technologies, including intelligence. His definition, along with its rather ambiguous implications, is inconsistent with the going definition used by other Singularity scholars: that of an 'intelligence explosion' caused by the positive feedback of recursively improving machine intelligences.

Moreover, and more importantly, Ray Kurzweil is one voice among many in a community of thinkers who have been tackling this problem for over half a century. What's particularly frustrating these days is that, because Kurzweil has become synonymous with the Singularity concept, and because so many people have been caught in the hate-Ray trend, people are throwing out the Singularity baby with the bathwater while drowning out all other voices. This is not only stupid and unfair, it's potentially dangerous; Singularity studies may prove crucial to the creation of a survivable future.

Consequently, for those readers new to these ideas and this particular community, I have prepared a short list of key players whose work is worth deeper investigation. Their work extends and complements the work of Ray Kurzweil in many respects. And in some cases they present an entirely different vision altogether. But what matters here is that these are all credible academics and thinkers who have worked or who are working on this important subject.

Please note that this is not meant to be a comprehensive list, so if you or your favorite thinker is not on here just take a chill pill and add a post to the comments section along with some context.
  • John von Neumann: The brilliant Hungarian-American mathematician and computer scientist, John von Neumann is regarded as the first person to use the term 'Singularity' in describing a future event. In a conversation recounted by Stanislaw Ulam in 1958, von Neumann made note of the accelerating progress of technology and constant changes to human life. He felt that this tendency was giving the appearance of our approaching some essential singularity beyond which human affairs, as we know them, could not continue. In this sense, von Neumann's definition is more a declaration of an event horizon.
  • I. J. Good: One of the first and best definitions of the Singularity was put forth by mathematician I. J. Good. Back in 1965 he wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they might be able to improve their own designs in ways unforeseen by their designers and thus recursively augment themselves into far greater intelligences. He thought that, while the first set of improvements might be small, machines could quickly become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a Singularity). (A toy numerical sketch of this feedback loop appears just after this list.)
  • Marvin Minsky: Inventor and author, Minsky is universally regarded as one of the world's leading authorities in artificial intelligence. He has made fundamental contributions to the fields of robotics and computer-aided learning technologies. Some of his most notable books include The Society of Mind, Perceptrons, and The Emotion Machine. Ray Kurzweil calls him his most important mentor. Minsky argues that our increasing knowledge of the brain and increasing computer power will eventually intersect, likely leading to machine minds and a potential Singularity.
  • Vernor Vinge: In 1983, science fiction writer Vernor Vinge rekindled interest in Singularity studies by publishing an article about the subject in Omni magazine. Later, in 1993, he expanded on his thoughts in the article, "The Coming Technological Singularity: How to Survive in the Post-Human Era." He (now famously) wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Inspired by I. J. Good, he argued that superhuman intelligence would be able to enhance itself faster than the humans who created it. He noted that, "When greater-than-human intelligence drives progress, that progress will be much more rapid." He speculated that this feedback loop of self-improving intelligence could cause large amounts of technological progress within a short period, and that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model their future. Pre-dating Kurzweil, Vinge used Moore's law in an attempt to predict the arrival of artificial intelligence.
  • Hans Moravec: Carnegie Mellon roboticist Hans Moravec is a visionary thinker who is best known for his 1988 book, Mind Children, where he outlines Moore's law and his predictions about the future of artificial life. Moravec's primary thesis is that humanity, through the development of robotics and AI, will eventually spawn its own successors (which he predicts will arrive around 2030-2040). He is also the author of Robot: Mere Machine to Transcendent Mind (1998), in which he further refined his ideas. Moravec writes, "It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty."
  • Robin Hanson: Associate professor of economics at George Mason University, Robin Hanson has taken the term "Singularity" to refer to sharp increases in the exponent of economic growth. He lists the agricultural and industrial revolutions as past "singularities." Extrapolating from such past events, he proposes that the next economic singularity should increase the rate of economic growth by a factor of 60 to 250. Hanson contends that such an event could be triggered by an innovation that allows for the replacement of virtually all human labor, such as mind uploads and virtually limitless copying.
  • Nick Bostrom: University of Oxford's Nick Bostrom has done seminal work in this field. In 1998 he published "How Long Before Superintelligence?", in which he argued that superhuman artificial intelligence would likely emerge within the first third of the 21st century. He reached this conclusion by looking at various factors, including different estimates of the processing power of the human brain, trends in technological advancement, and how fast superintelligence might be developed once there is human-level artificial intelligence.
  • Eliezer Yudkowsky: Artificial intelligence researcher Eliezer Yudkowsky is a co-founder and research fellow of the Singularity Institute for Artificial Intelligence (SIAI). He is the author of "Creating Friendly AI" (2001) and "Levels of Organization in General Intelligence" (2002). Primarily concerned with the Singularity as a potential human-extinction event, Yudkowsky has dedicated his work to advocacy and developing strategies towards creating survivable Singularities.
  • David Chalmers: An important figure in the philosophy of mind and consciousness studies, David Chalmers has a unique take on the Singularity: he argues that it will happen through self-amplifying intelligence. The only requirement, he claims, is that an intelligent machine be able to create an intelligence smarter than itself; the original intelligence need not be very smart. The most plausible route, he says, is simulated evolution. Chalmers feels that if we get to above-human intelligence, it will likely take place in a simulated world rather than in a robot or in our own physical environment.
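
To make I. J. Good's feedback loop concrete, here is the toy numerical sketch referred to above. It is an illustration only: the starting capability and coupling constant are invented and come from nobody on this list. The point is simply that when the rate of improvement scales with current capability, gains look modest for a while and then cascade.

```python
# Toy model of Good's "intelligence explosion": each generation improves by an
# amount proportional to its current capability, so improvements feed back into
# the speed of further improvement. Starting value and coupling are arbitrary.
capability = 1.0   # 1.0 ~ the first machine able to redesign itself at all
coupling = 0.1     # how strongly current capability accelerates improvement

for generation in range(1, 16):
    capability *= 1 + coupling * capability   # recursive self-improvement step
    print(f"generation {generation:2d}: capability ~ {capability:,.1f}")

# The first few generations gain only 10-20%, but by generation 14 or so the
# same rule produces a runaway surge -- Good's cascade in miniature.
```
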
Like I said, this is a partial list, but it's a good place to start. Other seminal thinkers include Alan Turing, Alvin Toffler, Eric Drexler, Ben Goertzel, Anders Sandberg, John Smart, Shane Legg, Martin Rees, Stephen Hawking and many, many others. I strongly encourage everyone, including skeptics, to take a deeper look into their work.

And as for all the anti-Kurzweil sentiment, all I can say is that I hope to see it pass. There is no good reason why he—and others—shouldn't explore this important area. Sure, it may turn out that everyone was wrong and that the future isn't at all what we expected. But as Enrico Fermi once said, "There's two possible outcomes: if the result confirms the hypothesis, then you've made a discovery. If the result is contrary to the hypothesis, then you've made a discovery."

Regardless of the outcome, let's make a discovery.

Sunday, August 22, 2010

SETI on the lookout for artificial intelligence

Slowly but surely, SETI is starting to get the picture: If we're going to find life out there—and that's a big if—it's probably not going to be biological. Writing in Acta Astronautica, SETI's Seth Shostak says that the odds likely favour detecting machine intelligences rather than "biological" life.

Yay to SETI for finally figuring this out; shame on SETI for taking so long to acknowledge this. Marvin Minsky has been telling them to do so since the Byurakan SETI conference in 1971.

John Elliott, a SETI research veteran based at Leeds Metropolitan University, UK, agrees. "...having now looked for signals for 50 years, SETI is going through a process of realising the way our technology is advancing is probably a good indicator of how other civilisations—if they're out there—would've progressed. Certainly what we're looking at out there is an evolutionary moving target."

Both Shostak and Elliott admit that finding and decoding any eventual message from thinking machines may prove more difficult than in the "biological" case, but the idea does provide new directions to look. Shostak believes that artificially intelligent alien life would be likely to migrate to places where both matter and energy—the only things he says would be of interest to the machines—would be in plentiful supply. That means the SETI hunt may need to focus its attentions near hot, young stars or even near the centres of galaxies.

Personally, I find that last claim to be a bit dubious. While I agree that matter and energy will be important to an advanced machine-based civilization, close proximity to the Galaxy's centre poses a new set of problems, including an increased chance of running into gamma-ray bursters and black holes, not to mention waste heat, which would be a serious constraint for a supercomputing civilization.

Moreover, SETI still needs to acknowledge that the odds of finding ETIs are close to nil. Instead, Shostak and company are droning on about how we'll likely find traces in about 25 years or so. Such an acknowledgement isn't likely going to happen; making a concession like that would likely mean they'd lose funding and have to close up shop.

So their search continues...

Source.

Wednesday, June 24, 2009

Ranking the most powerful forces in the Universe

There are a large number of forces at work in the Universe, some more powerful than others -- and I'm not talking about the four fundamental forces of nature. A force, in the context I'm talking about, is any phenomenon in the Universe that exhibits a powerful effect or influence on its environment. Many of these phenomena quite obviously depend on the four basic forces to function (gravity, electromagnetism, the weak interaction and the strong interaction), but it's the collective and emergent effects of these fundamental forces that I'm interested in.

And when I say power I don't just mean the capacity to destroy or wreak havoc, though that's an important criterion. A force should also be considered powerful if it can profoundly reorganize or manipulate its environment in a coherent or constructive way.

Albert Einstein once quipped that the most powerful force in the Universe was compound interest. While he does have a point, and with all due respect to the Master, I present to you my list of the four most powerful phenomena currently making an impact in the Universe:

4. Supermassive Black Holes

There's no question that black holes are scary; they're the only parts of the Universe that can truly destroy themselves.

Indeed, while Einstein's general theory of relativity opened the door to the modern study of black holes, it's often quipped that they are where "God has divided by zero." And it's been said that the gravitational singularity, where the laws of physics collapse, remains one of the deepest scientific mysteries still defying human understanding.

Somewhat counterintuitively, black holes harness the weakest of the four basic forces, gravity, to create a region of space with a gravitational field so powerful that nothing, not even light, can escape its pull. They're called "black" because they absorb all the light that hits them and reflect nothing. They have a one-way surface, the event horizon, into which objects can fall, but out of which nothing (save for Hawking radiation) can escape.

Black holes can also vary in size and gravitational intensity. Supermassive black holes are a million to a billion times the mass of a typical stellar-mass black hole. Most galaxies, if not all, are believed to contain supermassive black holes at their centers (including the Milky Way).

And recent studies are now suggesting that they are much larger than previously thought. Computer models reveal that the supermassive black hole at the heart of the giant galaxy M87 weighs the same as 6.4 billion suns—two to three times heavier than previous estimates.

That's a lot of pull.
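
To put that figure in perspective, here's a minimal back-of-envelope sketch computing the Schwarzschild radius, r_s = 2GM/c^2, for a 6.4-billion-solar-mass object. The constants are standard values; the conversion to astronomical units is just for scale.

```python
# Schwarzschild radius of the M87 supermassive black hole, using the
# 6.4-billion-solar-mass estimate quoted above.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8              # speed of light, m/s
SOLAR_MASS_KG = 1.989e30
AU_M = 1.496e11          # one astronomical unit in metres

mass_kg = 6.4e9 * SOLAR_MASS_KG
r_s_m = 2 * G * mass_kg / C**2

print(f"Schwarzschild radius: {r_s_m:.2e} m (~{r_s_m / AU_M:.0f} AU)")
# Roughly 1.9e13 m, or about 126 AU -- an event horizon wider than the
# orbit of Pluto.
```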

Indeed, should anything have the misfortune of getting close enough to a supermassive black hole, whether it be gas, stars or entire solar systems, it would be sucked into oblivion. Its gravitational pull would be so overwhelming that it would hurl gas and stars around it at almost the speed of light; the violent clashing would heat the gas up to over a million degrees.

Some have suggested that the supermassive black hole is the most powerful force in the Universe. While its ability to destroy the very fabric of space and time itself is undeniably impressive (to say the least), its localized and limited nature prevents it from being ranked any higher than fourth on my list. A black hole would never subsume an entire galaxy, for example, at least not on anything but cosmologically long time frames.

3. Gamma-Ray Bursts

The power of gamma-ray bursts (GRB) defies human comprehension.

Imagine a hypergiant star at the end of its life, a massive object perhaps 150 times more massive than our own sun. Extremely high levels of gamma radiation from its core begin converting its energy into matter. The resultant drop in energy causes the star to collapse, which in turn dramatically intensifies the thermonuclear reactions burning within it. All this added energy overpowers the gravitational attraction and the star explodes in a fury of energy -- the hypergiant has gone hypernova.

This is not the stuff of fiction or theory -- explosions like this have been observed. Hypernovas of this size can instantly expel about 10^46 joules. This is more energy than our sun produces over a period of 10 billion years. 10 billion years! In one cataclysmic explosion!
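
A quick sanity check of that comparison, treating the solar luminosity and the 10^46 J figure above as round numbers:

```python
# Back-of-envelope comparison of a hypernova's energy output with the Sun's
# total output over 10 billion years. Figures are rough, for illustration.
SOLAR_LUMINOSITY_W = 3.8e26     # watts, approximate
SECONDS_PER_YEAR = 3.15e7
TEN_BILLION_YEARS_S = 1e10 * SECONDS_PER_YEAR

sun_lifetime_output_j = SOLAR_LUMINOSITY_W * TEN_BILLION_YEARS_S
hypernova_output_j = 1e46       # figure quoted above

print(f"Sun over 10 Gyr: {sun_lifetime_output_j:.1e} J")            # ~1.2e44 J
print(f"Hypernova:       {hypernova_output_j:.1e} J")
print(f"Ratio:           {hypernova_output_j / sun_lifetime_output_j:.0f}x")  # ~80x
```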

Hypernovas can wreak tremendous havoc in their local neighbourhoods, effectively sterilizing the region. These explosions produce highly collimated beams of hard gamma-rays that extend outward from the exploding star. Any unfortunate life-bearing planet that comes into contact with those beams would suffer a mass extinction (if not total extinction, depending on its proximity to the blast). Gamma-rays would eat up the ozone layer and indirectly cause the onset of an ice age due to the prevalence of NO2 molecules.

Supernovas can shoot out directed beams of gamma-rays to a distance of 100 light years, while hypernovas disperse gamma-ray bursts as far as 500 to 1,000 light years away.

We are currently able to detect an average of about one gamma-ray burst per day. Because gamma-ray bursts are visible to distances encompassing most of the observable Universe -- a volume encompassing many billions of galaxies -- this suggests that gamma-ray bursts are exceedingly rare events per galaxy. Determining an exact rate is difficult, but for a galaxy of approximately the same size as the Milky Way, the expected rate (for hypernova-type events) is about one burst every 100,000 to 1,000,000 years.
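
The arithmetic behind that inference is easy to sketch. The galaxy count and the beaming correction below are rough, order-of-magnitude assumptions rather than figures from the text, but they show how an all-sky rate of one burst per day translates into a very low rate per galaxy:

```python
# Rough, order-of-magnitude estimate of how often a given galaxy hosts a
# gamma-ray burst, starting from the all-sky detection rate quoted above.
bursts_per_year_observed = 365            # ~1 detected burst per day
galaxies_in_observable_universe = 1e11    # rough, order-of-magnitude

interval_per_galaxy_yr = galaxies_in_observable_universe / bursts_per_year_observed
print(f"Observed bursts per galaxy: ~1 every {interval_per_galaxy_yr:.0e} years")  # ~3e8

# GRB emission is beamed into narrow jets, so most bursts never point at us;
# a correction factor of a few hundred brings the true per-galaxy interval
# down toward the 100,000-to-1,000,000-year range mentioned above.
beaming_correction = 300
print(f"Beaming-corrected: ~1 every {interval_per_galaxy_yr / beaming_correction:.0e} years")
```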

Thankfully, the hypergiant Eta Carinae, which appears to be on the verge of going supernova, is well over 7,500 light years away from Earth. We should be safe when it goes off, though we'll be able to read by its light at night.

But not so fast -- our safety may not be guaranteed. Some scientists believe that gamma-ray bursters may be responsible for sterilizing gigantic swaths of the galaxy -- in some cases as much as a quarter of it. Such speculation has given rise to the theory that gamma-ray bursters are the reason for the Fermi Paradox: exploding stars are continually stunting the potential for life to advance. That is why this phenomenon ranks as the 3rd most powerful force in the Universe on my list.

2. Self-Replication

A funny thing started to happen about 8 billion years ago: pieces of the Universe started to make copies of themselves. This in turn kindled another phenomenon: natural selection.

While this might not seem so impressive or powerful in its own right, it's the complexification and the emergent effects of this process that are interesting; what began as fairly straightforward cellular replication, at least on Earth, eventually progressed into viruses, dinosaurs, and human beings.

Self-replicating RNA/DNA has completely reshaped the planet, its surface and atmosphere molded by the processes of life. And it's a process that has proven to be remarkably resilient. The Earth has been witness to some extremely calamitous events over its history, namely the Big Five Mass Extinctions, but life has picked itself up, dusted off, and started anew.

Now, what makes self-replication all the more powerful is that it is not limited to a biological substrate. Computer viruses and memes provide other examples of how self-replication can work. Replicators can also be categorized according to the kind of material support they require in order to go about self-assembly. In addition to natural replicators, which have all or most of their design from nonhuman sources (i.e. natural selection), there's also the potential for:
  • Autotrophic replicators: Devices that could reproduce themselves in the wild and mine their own materials. It's thought that non-biological autotrophic replicators could be designed by humans and could easily accept specifications for human products.
  • Self-reproductive systems: Systems that could produce copies of themselves from industrial feedstocks such as metal bar stock and wire.
  • Self-assembling systems: Systems that could assemble copies of themselves from finished and delivered parts. Simple examples of such systems have been demonstrated at the macro scale.
It's conjectured that a particularly potent form of self-replication will eventually arrive with molecular manufacturing and the introduction of self-replicating nanobots. One version of this vision involves swarms of coordinated nanoscale robots working in tandem.

Microscopic self-replicating nanobots may not sound particularly powerful or scary, but what is scary is the prospect for unchecked exponential growth. A fear exists that nanomechanical robots could self-replicate using naturally occurring materials and consume the entire planet in their hunger for raw materials. Alternately they could simply crowd out natural life, outcompeting it for energy. This is what has been referred to as the grey goo or ecophagy scenario. Some estimates show, for example, that the Earth's atmosphere could be destroyed by such devices in a little under two years.
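
To see why such short timescales are even conceivable, here's a toy sketch of the exponential arithmetic. The seed mass, target mass and doubling times are invented for illustration and aren't drawn from any published estimate; the point is that the number of doublings required is small, so even unhurried replicators finish frighteningly fast.

```python
import math

# How many doublings does it take for a small seed of replicators to consume
# a planet-sized stock of raw material? All numbers here are assumptions.
seed_mass_kg = 1.0           # assumed starting mass of replicators
target_mass_kg = 1e15        # very rough order of magnitude for Earth's biomass

doublings = math.log2(target_mass_kg / seed_mass_kg)   # ~50 doublings
print(f"Doublings needed: {doublings:.0f}")

for doubling_time_days in (0.01, 1.0, 10.0):
    years = doublings * doubling_time_days / 365
    print(f"Doubling every {doubling_time_days:>5} days -> ~{years:.2f} years")
# Even with a leisurely ten-day doubling time, ~50 doublings finish in well
# under two years -- the same ballpark as the estimate quoted above.
```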

Self-replication is also powerful in terms of what it could mean for interstellar exploration and colonization. By using exponentially self-replicating Von Neumann probes, for example, the Galaxy could be colonized in as little as one to ten million years.
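
A rough sketch of where that one-to-ten-million-year figure comes from: because the probes replicate at every stop, the expansion front is limited mainly by travel time, so the Galaxy's crossing time at an assumed cruise speed sets the scale. The speeds below are assumptions, not figures from the text.

```python
# Back-of-envelope galactic colonization times for self-replicating probes,
# assuming the wavefront advances at a fixed fraction of light speed.
GALAXY_DIAMETER_LY = 100_000

for speed_fraction_of_c in (0.10, 0.05, 0.01):
    crossing_time_yr = GALAXY_DIAMETER_LY / speed_fraction_of_c
    print(f"At {speed_fraction_of_c:.2f}c: ~{crossing_time_yr:,.0f} years to cross the Galaxy")
# 0.10c gives ~1 million years and 0.01c gives ~10 million years, matching
# the one-to-ten-million-year range mentioned above.
```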

And of course, if you can build you can destroy; the same technology could be used to sterilize the Galaxy in the same amount of time [for more on this topic read my article, "Seven ways to control the Galaxy with self-replicating probes"].

Consequently, self-replication sits at #2 on my list; its remarkable ability to reshape matter, adapt, grow, consume, build and destroy make it a formidable force to be reckoned with.

1. Intelligence

Without a doubt the most powerful force in the universe is intelligence.

The capacity to collect, share, reorganize and act on information is unlike anything else in this universe. Intelligent beings can build tools, adapt to and radically change their environment, create complex systems and act with reasoned intention. Intelligent beings can plan, solve problems, think abstractly, comprehend ideas, use language and learn.

In addition, intelligence can reflect on itself, predict outcomes and avoid peril; autonomous systems, for the most part, are incapable of such action.

Humanity, a particularly intelligent bunch owing to a few fortuitous evolutionary traits, has -- for better or worse -- become a force of nature on Earth. Our species has reworked the surface of the planet to meet its needs, significantly impacting virtually every other species (bringing many to extinction) and irrevocably altering the condition of the atmosphere itself. Not content to stay at home, we have even sent our artifacts into space and visited our very own moon.

While some cynics may scoff at so-called human 'intelligence', there's no denying that it has made a significant impact on the biosphere.

Moreover, what we think of as intelligence today may be a far cry from what's possible. The advent of artificial superintelligence is poised to be a game-changer. A superintelligent agent, which may or may not have conscious or subjective experiences, is an intellect that is much smarter than the best human brains in practically every field, including problem solving, brute calculation, scientific creativity, general wisdom and social skills. Such entities may function as super-expert systems that work to execute any goal they are given, so long as that goal falls within the laws of physics and they have access to the requisite resources.

That's power. And that's why it's called the Technological Singularity; we have no idea how such an agent will behave once we get past the horizon.

Another more radical possibility (if that's not radical enough) is that the future of the Universe itself will be influenced by intelligent life. The nature of intelligence and the reason for its presence in the Universe remain open questions. There are only two possibilities: intelligence is either 1) a cosmological epiphenomenon, or 2) an intrinsic part of the Universe's inner workings. If it's the latter, perhaps we have some work to do in the future to ensure the Universe's survival or to take part in its reproductive strategy.

Theories already exist regarding stellar engineering -- where a local sun could be tweaked in such a way as to extend its lifespan. Future civilizations may eventually figure out how to re-engineer the Universe itself (such as re-working the constants) or create an escape hatch to basement universes. Thinkers who have explored these possibilities include Milan Cirkovic, John Smart, Ray Kurzweil, Alan Guth and James N. Gardner (for example, see Gardner's book Biocosm: The New Scientific Theory of Evolution: Intelligent Life is the Architect of the Universe).

Intelligence as a force may not be particularly impressive today when considered alongside supermassive black holes, gamma-ray bursts and exponential self-replication. But it may be someday. The ability of intelligence to re-engineer its environment and work towards growth, refinement and self-preservation gives it the potential to become the most powerful force in the Universe.

Saturday, December 8, 2007

The problem with 99.9% of so-called 'solutions' to the Fermi Paradox

Non-exclusivity.

Sure, everyone has a convenient answer to the Fermi Paradox, but nearly all of them fail the non-exclusivity test. While some solutions to the FP may account for many if not most of the reasons why we haven't detected signs of ETIs, they cannot account for all.

For example, take the notion that interstellar travel is too costly or that civs have no interest in embarking on generational space-faring campaigns. Sure, this may account for a fair share of the situation, but in a Universe of a gajillion stars it cannot possibly account for all. There's got to be at least one, if not millions of civs, who for whatever reason decide it just might be worth it.

Moreover, answers like the ‘zoo hypothesis,’ ‘non-interference,’ or ‘they wouldn’t find us interesting,' tend to be projections of the human psyche and a biased interpretation of current events.

Cosmological determinism

Analyses of the FP need to adopt a more rigorous and sweeping methodological frame.

We need to take determinism more seriously. The Universe we observe is based on a fixed set of principles -- principles that necessarily invoke cosmological determinism and in all likelihood sociological uniformitarianism. In other words, the laws of the Universe are moulding us on account of selectional pressures beyond our control.

Civilizations that don't conform to adaptive states will simply go extinct. The trouble is, we have no say in what these adaptive states might be like; we are in the business of conforming such that we continue to survive.

The question is, what are these adaptive states?

Strong convergence

Transhumanist philosopher Nick Bostrom refers to this as the strong convergence hypothesis -- the idea that all sufficiently advanced civilizations converge towards the same optimal state.

This is a hypothesized developmental tendency akin to a Dawkinsian fitness peak -- the suggestion that identical environmental stressors, limitations and attractors will compel intelligences to settle around optimal existential modes. This theory does not favour the diversification of intelligence – at least not outside of a very strict set of living parameters.

The space of all possible minds...that survive

Consequently, our speculations about the characteristics of a post-Singularity machine mind must take deterministic constraints into account. Yes, we can speculate about the space of all possible minds, but this space is dramatically shrunk by adaptationist constraints.

The question thus becomes, what is the space of all possible post-Singularity machine minds that result in a civilization's (or a singleton's) ongoing existence?

And it is here that you will likely begin to find a real and meaningful explanation to the Fermi Paradox and the problem that is non-exclusivity.

Monday, July 30, 2007

When Dvorsky met Minsky

Of all the celebrities and bigwigs I looked forward to meeting at TransVision 2007 there was only one person who I was truly nervous about running into – a person who gave me that 'I’m going to squeal like a little girl when I see him’ kind of feeling.

That individual was artificial intelligence pioneer Marvin Minsky.

A friend cautioned me by claiming that he was a difficult man and not very approachable. I dismissed the warning and patiently waited for an opportunity to start a conversation with him.

I eventually got my chance. I was with two other friends when the three of us bumped into Minsky in the reception area of the conference hall. Without hesitation I approached and introduced myself. After we shook hands I told him how much I appreciated his work and how much of an honour it was for me to finally meet him. He nodded his head and didn’t say a word.

I was surprised by how old he looked. Minsky is now 80 years old and has been working in the field of artificial intelligence since the 1950s. Despite his age he recently published a book, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Minsky just keeps on going.

Working to move the conversation along, I told him that while I was conducting research for my presentation I discovered that he was a presenter at the seminal SETI conference in 1971 in Byurakan. Minsky made waves at that conference by having the audacity to suggest that advanced extraterrestrial civilizations would likely be comprised of machine minds. It was a controversial suggestion, one that has only come into acceptance in more recent times. I asked Minsky for a first-hand account of how his idea was received back in 1971.

He stood there, just blankly looking at me, and didn’t say a single word. We all waited in silence for what seemed an eternity. I got the distinct impression that he was thoroughly disinterested in our little group.

Being a sucker for punishment I decided to move the conversation along. I unabashedly gave him the 10 second executive summary of my TV07 presentation, where I make some claims about the limitations of extraterrestrial civilizations and how this might account for the Great Silence and the problem that is the Fermi Paradox.

This finally got Minsky going. He had attended a SETI conference two weeks prior and was impressed with what he heard there. Minsky suggested that the reason we don’t see any signs of obvious megascale engineering or cosmological re-tuning by advanced ETI’s is that they have no sense of urgency to embark upon such projects. He argued that advanced intelligences won’t engage in these sorts of Universe changing exercises until the very late stages of the cosmos.

Jeez, I thought to myself, I hadn't considered that.

Leave it to Marvin Minsky to give me some serious food for thought a mere two hours before I was to give my talk. I was suddenly worried that this consideration would pierce a glaring hole in my argument.

After another minute of idle chit-chat I excused myself from Minsky's company and found a little corner where I could have my little micro-panic and contemplate his little theory.

The more I thought about it, however, the more unsatisfied I became with his answer; virtually everyone has a rather smug solution to the Fermi Paradox, and Marvin Minsky is no exception. Specifically, I was concerned with how such a theory could be exclusive to all civilizations. It seemed implausible to believe that not even one renegade civilization would take it upon itself to change the rules of the cosmos if it had the capacity to do so.

Moreover, given the power to reshape the Universe, a strong case could be made that a meta-ethical imperative exists to turn the madness that is existence into something profoundly more meaningful and safer. As Slavoj Žižek once said, existence is a catastrophe of the highest order. Timothy Leary described the Universe as an "ocean of chaos."

Waiting until the last minute to create a cosmological paradise (assuming such a thing is even possible) would seem to be both exceptionally risky and irresponsible -- not just to the members of a civilization capable of such feats, but to the larger universal community itself.

Phew. That's right, that's the answer. Ha, take that, Minsky!

So, after rationalizing a counter-argument to Minsky's suggestion, I was able to calm down and prepare myself for my presentation and deal with any follow-up questions that could be thrown my way.

And that's how I met Marvin Minsky.

Sure, he's not the most personable man I've ever met, but I got the sense that he's at a time in his life where a) he knows he owes nothing to no one and b) he'd rather engage with people who can contribute to his life's work and his ongoing struggle to solve the problem that is human cognition. And he's still as sharp as they come.

It was truly an honour.