The Future of Intelligence

If you're reading this, there's a good chance I've talked to you, possibly at some length, about my prediction that a radical change in human existence is coming. This paper attempts to explain what changes I think are possible, and why working towards them is of utmost importance.

I want to say right off the bat that I honestly don't think that I have the writing skill required to convey the emotions surrounding what I'm discussing in this essay. During my move from shock level 3 to SL4, I spent several weeks in which most of my free time passed in a daze of awe and terror at the possibilities that lie before us as a species. I don't think I can put that feeling into words, except in the way I just did. I do recommend Staring Into The Singularity if catching the emotional vibe is important to you.

Just to give you an idea of where I'm coming from, this essay was written after having a conversation with my Dad in which he was aghast at my nonchalance towards modern politics. I really and truly think that the events described in this essay will make all of that irrelevant in my lifetime, or shortly thereafter, if we don't manage to kill ourselves by then. I care about politics to exactly the extent that a politician is involved in nanotechnology or AI research, or is likely to cause a serious worldwide depression. Beyond that, I just can't bring myself to care; the emotional relevance of some bond measure for schools next to the prospect of a beneficial singularity is basically non-existent.

I hope you enjoy this essay, or at least are made thoughtful by it. Please note that this is not, by any stretch, a technical paper. I present no justifications for any claims made herein. There is a companion essay which goes a bit into the reasons why I think all these things are likely. You may also want to read about why I've signed up to be frozen when I die, which covers much of the same ground as this page, but in a somewhat easier-to-read fashion.

-Robin Lee Powell

Table Of Contents

  1. Terminology
  2. Assumptions
  3. Plausibilities
  4. Possibilities
  5. Conclusions

Terminology

Because every aspect of this paper is both highly speculative and grounded in the outermost fringes of current science, it is necessary to agree on some terms.

"The Singularity"

First, let me confess that I hate this term. I hate it a lot. But I quite frankly don't have anything better to put forward; that term is what Vinge's original essay used. There are many, many sites on the web that discuss the singularity, and more than one that talks about the various definitions the term has.

The Singularity is generally defined as "the point at which technological advancement (or whatever) reaches such a severe exponential curve that we can't possibly imagine what lies beyond it".

This is stupid for at least three reasons:

As I said, I don't have a better term, although in my mind I think it's just The Change. But the key point for me really is a qualitative rather than merely quantitative increase (i.e. better, not just faster) in intelligence. Whatever else people point at and say, "Then, *then* The Singularity will be here!", I'm not listening until there are beings interacting with us humans that are smarter than us. Much smarter. As Eliezer puts it, "Smarter in the way that we are smarter than dogs, or trees, or possibly rocks".

Artificial Intelligence

I'll be using the term artificial intelligence, hereafter AI, in a fairly specific way in this essay. First of all, I am absolutely not talking about the state of the art in the conventional discipline of AI. MapQuest and Google are the state of the art in conventional AI, and they couldn't wipe their own noses, even if they had noses to wipe and limbs to wipe them with.

I am, instead, speaking of the (still very theoretical) possibility of human-level generalized intelligence. By this I mean AI that can deal with at least as much ambiguity, confusion, subtlety, and emotional content as a human can. An AI, in this sense, should be at least as socially adept as the average human, for example, and shouldn't ask the stupid questions (like "Why do you cry?") that AIs in movies are always asking.

That's only the starting point, of course. If the AI is only as good at all those things as a human, I'm still not terribly excited. I just want to make it clear that an AI that can engineer New And Amazing Things but gets confused to the point of instability when I tell it that I love blue skies in the morning and ask what it thinks about that, well, that's not the kind of AI I'm talking about. In fact, that kind of AI is probably the single most dangerous thing that could possibly exist.

Super-Intelligence

In an amazing display of anthropocentricity, super-intelligence is defined as 'smarter than humans'. Not just smarter than your average human, but smarter than any human. Einstein, Hawking, Da Vinci, Thoreau, whatever.

Yes, that last name was Thoreau. As in "Walden". This is a stunningly important point: if a being is supposedly super-intelligent but still can't write about living alone in the rural countryside better than Thoreau could have conceived in his wildest dreams of writing magnificence, then not only is it not super-intelligent, it is more than likely a danger to all humankind.

Let me say that again, in a different way, because it may be the most important underlying point in this essay:

A being that is apparently super-intelligent should experience complex emotions, be able to reason about them, and be able to talk about its reasoning about them.

That's perhaps a trifle unfair, in a sense. A non-human being need not experience emotions in quite the way that we do, but it should at least be able to thoroughly understand and respond to them, again, at least as well as any human. An AI, at least, would be unlikely to have most of the autonomic responses that drive our emotional states, and hence couldn't really be said to "feel" in the way that we do. In the long run, though, that's splitting hairs if it can still reason about the emotions of the humans around it and act in ways consistent with what humans consider to be emotional healthiness.

A note needs to be made here about the distinction between strong and weak superintelligence. Weak superintelligence is running something as smart as a human much, much faster. Theoretically, if you could stick a human brain into a computer substrate, it would be a mere engineering problem to make that brain run many hundreds of times faster. Strong superintelligence is a qualitative increase in smartness.
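To make even the weak version concrete, here is a minimal bit of arithmetic in Python. The 500x speedup is purely an illustrative assumption standing in for "many hundreds of times faster", not an engineering estimate.

```python
# Minimal sketch of weak superintelligence: the same mind, run faster.
# The speedup figure below is an illustrative assumption, nothing more.

HOURS_PER_YEAR = 365.25 * 24  # wall-clock hours in a year

speedup = 500  # assumed "many hundreds of times faster"
hours_per_subjective_year = HOURS_PER_YEAR / speedup

print(f"At {speedup}x speed, the mind lives a subjective year every "
      f"{hours_per_subjective_year:.1f} wall-clock hours.")
```

Even with no qualitative improvement at all, a mind like that gets more than a subjective year of thinking done every calendar day.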

Weak superintelligence is not discussed further in this essay, not because it's not cool, but because I expect strong superintelligence to come about, and that's far more interesting.

Nanotechnology

Nanotechnology is the partially theoretical science of moving around individual atoms. I say partially theoretical because we've already demonstrated a fair number of the early steps required to get nanotechnology working, and done much of the theoretical work on the rest of it.

Again, there is a strong and weak distinction, at least informally. Weak nanotechnology is having an 'everything box'; something that you give feedstock to (say, an organic compound mixed with iron and a few other trace elements) and it outputs anything you can give it a blueprint for. Strong nanotechnology is little tiny robots that go out and build the thing from the surrounding environment.

I personally don't care much about strong versus weak nanotechnology. In fact, I'd rather prefer that strong nanotechnology turns out to be impossible. The problem is, I don't think it will be, and strong nanotechnology is mind-numbingly dangerous.

The issue there is that if it is possible to make nanobots of the sizes that some researchers are theorizing about, they will be much, much smaller than human cells. Smaller in the same way that a human is smaller than the Empire State Building, but that in no way stops a human with enough explosives from destroying it. The problem is, human cells have no defense against something that small. None whatsoever. All that self-replicating nanobots of this kind would require to reduce all matter on Earth to more copies of themselves is time. This is called the "grey goo" scenario; you can read about it and other possibilities like it in the excellent paper Existential Risks, which describes various things that could wipe out all of humanity.
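To put a rough number on "all it requires is time", here is a deliberately crude Python sketch of the exponential arithmetic. Every input (nanobot mass, replication time) is an assumed, illustrative value, and the calculation ignores energy, heat, and raw-material limits entirely.

```python
import math

# Crude illustration of exponential self-replication ("grey goo").
# All inputs are assumed values chosen only to show the shape of the math;
# energy, heat dissipation, and material transport are ignored entirely.

bot_mass_kg = 1e-15        # assumed mass of a single nanobot
earth_mass_kg = 5.97e24    # rough mass of the Earth
doubling_time_s = 1000.0   # assumed time for one replication cycle

doublings = math.log2(earth_mass_kg / bot_mass_kg)  # ~132 doublings
total_days = doublings * doubling_time_s / 86_400

print(f"Doublings from one bot to an Earth-mass of bots: ~{doublings:.0f}")
print(f"At {doubling_time_s:.0f} s per doubling: ~{total_days:.1f} days")
```

The particular numbers are beside the point; what matters is that unchecked doubling needs only a bit over a hundred generations to go from a single bot to a planet's worth of them.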

Of course, like most technologies, the downside and upside are about equal. Strong nanotechnology has the potential to create an age of prosperity that is basically unimaginable. Easily digestible, tasty food plants that will provide all necessary nutrition, and grow on bare rock with almost no water. The ability to create any material comfort essentially at will. The ability to heal any injury, possibly up to and including death, at least until decomposition sets in. That's just a tiny sampling of what might be possible.

The important part about strong nanotechnology is that it is way, way too much for humans to be allowed to control. We're simply not smart enough to deal with something of this magnitude. We proved that with the Cuban Missile Crisis, among other things. The big difference with nanotechnology versus nukes is that there is a first-strike advantage. A huge one. Or, rather, if you think that your nanobots won't destroy the world, it looks like there's a huge first-strike advantage, from a military perspective. In this case, the beliefs of the people controlling the first nanotechnology are what really counts, of course.

Friendly AI

There is no way that I can do this justice in anything shorter than a book. For a nutshell treatment, see 24 Definitions of Friendly AI, for the book-length version see Creating Friendly AI (which, for the record, I have read in its entirety; it was this book that convinced me that Eliezer really knows what he's doing).

The short version is that Friendly AI is AI that wants to be nice to humans. In fact, a Friendly AI is one whose sole goal is to be nice to humans. Everything it does, including improve itself, has that as its sole motivation. This isn't some kind of "constraint" or "cage". You do not, I hope, feel constrained by the fact that your morality says that killing babies for fun is bad. You would, in fact, resist any change to that part of your morality. A Friendly AI is one that feels the same way about being nice to humans. If the AI sees helping humans as a burden to escape, we've already lost.

Assumptions

This is a list of the things that I assume to be true, due to a ton of reading I've done about them. Again, references upon request, but you might be better off just browsing the Singularity Institute or Extropy Institute sites, reading the 14 objections, or asking Google.

Strongly Superintelligent AI Is Possible

Combining a few issues here. I think that strong superintelligence is possible. Furthermore, I think that to argue to the contrary is amazingly rank anthropocentrism, and should be laughed at. Beyond that, I think full AI is possible, or rather, I think that a computer can hold a sentient mind. It's the combination of the two that's interesting.

The thing is, once you have an intelligence inhabiting a computer, a whole new avenue of possibility opens up, because the intelligence can copy itself. Much more importantly, it can attempt improvements to the copies, and test the copies, and if they seem to be working better, let the copies take over and make more improvements.

If what we've seen with the advancement of technology continues to hold true, this should lead to a rapid exponential growth in intelligence. With technology, we've seen tools continuously used to make better tools (first stone tools used to make copper tools, then copper tools made bronze tools, then iron, and so on). At each stage, because the tools were better, the things that could be made with the tools were better as well, but that's beside the point. If a being can make itself smarter, even a little tiny bit, that smarter version will have a much easier time making itself smarter still. There are two reasons: it already knows how to start, and, being smarter, it's going to be better at everything that intelligence applies to, including increasing its own intelligence.
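Here is a toy Python sketch of that copy-modify-test-replace loop, just to make the compounding dynamic explicit. "Intelligence" is reduced to a single number and the tweaks are random, so this illustrates the feedback structure only; it is not a model of any real AI.

```python
import random

def attempt_improvement(intelligence: float) -> float:
    """Copy the current design, apply a random tweak, and return the copy's score."""
    # The tweak is proportional to the current level, so each accepted
    # improvement builds on everything gained so far (compounding growth).
    return intelligence * (1.0 + random.gauss(0.0, 0.05))

def self_improve(generations: int = 100) -> float:
    intelligence = 1.0
    for _ in range(generations):
        candidate = attempt_improvement(intelligence)
        if candidate > intelligence:  # keep the copy only if it tests better
            intelligence = candidate
    return intelligence

if __name__ == "__main__":
    print(f"Relative intelligence after 100 generations: {self_improve():.2f}")
```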

Now, theoretically a human could be uploaded into a computer and we could start from there. This is very unlikely to be what causes The Change, though, because the technologies required to upload a human are very, very advanced. It probably requires weak nanotechnology at an absolute minimum. It requires vast knowledge of neuroanatomy. It requires truly prodigious computing power. In short, if anyone at all works seriously on general AI (and there are most definitely people working seriously on general AI), it's almost certain to be reached before uploading is working.

There are probably other ways to produce superhuman intelligence, but I honestly don't think that any of the interesting ones (i.e. ones that produce intelligences meaningfully smarter than every human has ever been) are likely to come before superintelligent AI, and a comparison of possibilities is beyond the scope of this essay.

Perhaps more importantly, an AI that is built from source code (as opposed to something insanely dangerous like a neural network or, goth forbid, a genetic algorithm) can simply edit its own source code to self improve. Human intelligence is an absolute morass of independently evolved complex functional adaptations. Source code, any source code, is a paragon of clarity by comparison. Only by comparison, of course; there's some really bad source code out there.

Strong Nanotechnology Is Possible

This is a much less important part of my thesis, in as much as if strong nanotechnology turns out to not be possible, the only thing that changes is my sense of urgency. If only weak nanotechnology or less is possible, then it's not as much of a race.

As a result of my research into singularity issues, I have come to realize that a Friendly AI must be the first being to develop strong nanotechnology on Earth, or one of the first, or we are all going to die in a mass of grey goo.

A Friendly, Superintelligent AI Gets To Nanotechnology First

This is the primary assumption that this essay rests on. Plenty has already been written about whether this assumption is valid, but it is taken as valid for the purposes of this essay, largely because, given the previous two assumptions, any other possibility is boring (because we're all grey goo).

If you feel that I've rushed past the interesting parts in making this assumption, please understand that the goal of this essay is not to convince you of this assumption's validity; it is to convince you that if what it describes is at all possible, then achieving it is the most important thing for humanity right now, even if the odds are worse than most lotteries. Yes, I know how that sounds, but just because something sounds cult-ish doesn't make it so.

Plausibilities

Getting Nanotechnology When You Are Smarter

OK, so if a Friendly Superintelligent AI gets to nanotechnology first, then what?

Well, actually, let's step back a little bit. Let's say that good nanotechnology hasn't been invented yet, or at least isn't widely available, and our AI has become wildly superintelligent. I'm of the school of thought that this won't take long once the self-improvement ball gets rolling: days, maybe weeks at most. Possibly hours.

The first thing you need to understand, and this may seem redundant, is that this being is smarter than you. A wonderful example I owe Charles Stross: "When I bring out my cat carrier to take my cat to the vet, it's always surprised to find that the cat door is locked shut. To it, this is a startlingly improbable coincidence, because it doesn't understand that I'm smarter than it".

You can't out-think, out-smart, or out-wit this being. If you and this being have opposing goals for some reason, it will always, always win. Always. In the same way that, if you are willing to go far enough, you can always win in a battle of wits against a cat. The big difference is that you can't talk to the cat.

Now, this AI is Friendly, so it's not going to trick us into destroying ourselves, or whatever. But if it decides that creating nanotechnology machinery is going to be the best way to serve human friendliness, as seems likely to me, it's unlikely that anything will be able to stop it. There's a simple reason for this: at some point, it will be communicating with humans, and it's smarter than they are. Truly, profoundly smarter.

How long do you think it will take for this being to convince its caregivers to provide it with the tools it needs to start making nanotechnology devices? I'm going to guess about five minutes, because it's smarter than us. Remember, it's smarter socially, as well as intellectually. It's going to spin some line about how it thinks it might have figured out some new physics, and could it please have access to these simple power tools hooked up to its serial port to run some experiments?

Or whatever. The point is, we can't trick it, but it can sure trick us. I hope very strongly that the caretakers of the first Friendly AI will not be so hostile as to need to be tricked (and, in fact, if they are the AI probably won't be Friendly, but that's a book-length discussion again), but it doesn't much matter either way.

First One Wins

Please understand that if someone gets to strong nanotechnology before everyone else, they rule the world. This is not a subject for debate, you can't fight back, there is no passing Go or collecting two hundred dollars. We're talking about a technology that allows the wielder to destroy everything in the world that looks like a weapon if it doesn't contain trace amounts of some extremely rare element, for example. The wielder stockpiles weapons with the element in question mixed in to all the steel, releases the nanotechnology, and rules the world instantly. And that's a really brain-dead way to do it, too. Turning the matter currently comprising the Himalayas into robotic tanks that respond only to cryptographic keys only you have seems much smarter, and that was me only thinking about it for a minute or two.

The stakes go up drastically if the first one to nanotechnology is a computer-based super-intelligence, because every advancement in nanotechnology is going to lead to a being with faster and bigger, if not better, intelligence. This faster and bigger intelligence will be able to make itself better that much more easily, and being smarter will be able to make better nanotechnology, and on and on. If you thought the self-improvement cycle I described before sounded fast, wait until the AI turns its hardware into computronium.

With a nanotechnology brain, there's no need to make tanks, or to re-tool weapons to have some exotic element in them. You just have a few hundred trillion nanobots infect, well, pretty much everything, and you control them by radio, or whatever. At that point, if you want weapons to stop working, you simply will them to do so. It only takes a few nanobots in every gun on the planet to render them all useless in a few seconds. Nuclear bombs are even easier. I'm picking on disarmament, by the way, because it seems like an obviously Friendly thing to do.

Possibilities

It may seem like we've come pretty far at this point, and things are starting to sound really implausible to most people, but I want to re-iterate that there are only a few assumptions involved: strong, friendly superintelligence; hard AI; strong nanotechnology. All three of these are widely thought to be possible by those who know enough science to defend their positions; ask Google.

Again, I want to re-iterate that strong nanotechnology isn't even necessary. You can make gun-stopping nanobots without requiring that they be self-replicating, which is what strong nanotechnology requires. Nothing in here is made any less possible without self-replicating nanobots; it is simply, as I said before, made less urgent.

Ruling The World

So, a Friendly AI is now ruling the world. Now what?

A Friendly AI will probably do anything any human asks it to do that doesn't conflict with the needs and rights of other humans. Please don't go thinking up Solomonic conundrums of conflicts of needs that the AI might be presented with. I can't answer what it will do because it's smarter than me. More importantly, it is kinder than me, more altruistic than me, and much, much, much less prone to bias than me. I would never want this kind of power put in any human's hands, myself included.

Anyway, here's a selection of things an AI with nanotechnology would be capable of doing, assuming it is sufficiently smart:

The Sysop Scenario

And so we come, finally, to the real meat of the essay: the so-called Sysop Scenario. The name, by the way, comes from the term 'sysop' meaning System Operator, which in the old days was the person who ran the big-iron multi-million dollar computers.

The sysop scenario is one possible extension of the nanotechnology-enabled, world-ruling AI described above. Basically, the AI keeps increasing its sphere of control (and its intelligence) until it is effectively absolute. The sysop scenario becomes much easier if technology on smaller scales than nanotechnology is possible (to quote Eliezer: "To a medieval scholar the only understandable difference between nanotechnology and femtotechnology is that one can turn lead into gold, and the other can't"; femtotechnology is pretty much arbitrary control over matter, including the ability to re-assemble atoms), but strong nanotechnology is probably sufficient.

In the sysop scenario, all matter in the (world, solar system, galaxy, universe) is an extension of the Sysop, the AI from our previous example. The Sysop is aware, at least in a sense, of all matter in its domain at all times, and able to manipulate any of it at any time. I say "in a sense" because unless we find a way to communicate instantly, there would have to actually be many nodes of the Sysop that communicate relatively slowly, especially if it's spread out over a solar system or larger.

In the sysop scenario, the sysop's will is a law of the universe, at least in the sysop's domain of influence. This is not an exaggeration. In the sysop scenario, it becomes impossible to harm an unwilling being in exactly the same way that a human cannot jump to the moon. There wouldn't need to be any muscle convulsions as you try to stab someone, it would simply not be physically possible, in a way that I'm sure I can't even imagine, let alone describe.

It would be physically impossible for a person to die of starvation (or, really, anything else) unless they specifically told the sysop they wanted to. In fact, it would almost certainly be physically impossible for a person to get hungry, or develop bad eyesight, or have a cold, or experience anything that would cause pain or discomfort, unless they specifically asked for it.

This could all be done without any visible effects whatsoever. In fact, a superintelligent AI might not actually communicate with anyone once it was sufficiently smart; it might just invade all surrounding matter and start doing what people wanted, without them even having to ask, like an all-seeing, all-powerful genie or something. Who knows? I don't.

Emotional Maturity

It has been pointed out to me that someone who had all of their needs met in this sort of fashion wouldn't be much of a person; that without pain, pleasure is meaningless. My experience is that, in fact, humans will expand to fill the space they are in. If a human is starving, finding a rotten orange will transport them to the highest heights of joy. If a human has never wanted for much of anything, hearing that someone said something bad about them will plunge them to the depths of despair. So in that sense, I don't think this is actually a problem.

Having said that, though, there is something that we call "maturity" that seems to be a good thing and that really does seem to be acquired only through hardship. I don't see that requirement as a good thing; I think that's a problem with human minds, and it's a problem that should be fixed.

If, for whatever reason, the sysop can't or won't fix that problem, though, it's not like it will be hard to deal with. Just ask the sysop to simulate whatever unfortunate circumstances are required to reach the desired level of maturity. It can even force you to experience whatever required hardship without you remembering that you asked to do so, if necessary. Sirhan's childhood in Accelerando was an attempt to solve exactly this sort of problem.

Conclusions

Gambling

The sysop scenario is, in one sense, clearly wrong: it's too detailed. Detailed future prediction is always wrong. But if AI, superintelligence, and nanotechnology are possible, then so is the sysop scenario. This is not really avoidable. It might not be likely. It might not even be probable; it probably isn't. But it is possible.

As far as I'm concerned, the mere possibility of the sysop scenario, or anything even vaguely like it, compels immediate action. We are talking about solving almost all the problems that any human has ever complained about in one fell swoop. Even if the chance is one in a trillion, the payoff is so amazing that it cannot be ignored.

Most of you probably do not play the lottery, because you know what the odds are. You have to spend an immense amount of money to have any real hope of making a return. But what if the lottery had the results I'm describing? How much money would you spend on the lottery then?

The crux here, though, is that we're not just talking about ending world hunger, or war, or any of the things that have plagued humankind since forever. We're talking about avoiding the single greatest danger to continued human existence that has ever been faced. Strong nanotechnology could literally allow a single science experiment, possibly not even a terribly expensive one, to permanently destroy all life on Earth. Completely and totally. This isn't like the nuclear bomb, where maybe some beetles would survive or something; this is everything. I'm not being particularly alarmist here; there have already been nanotechnology safety protocols written to help prevent these possibilities.

So, if the lottery had a one in a million chance of saving humanity from all its ills, and was the best way to avoid the single biggest threat to humanity's continued existence at the same time, how much of your money would you spend on the lottery then?

(And, if you act now, we'll give you this portable color TV at absolutely no charge!)

On top of all that, we really are not talking about one-in-a-million chances. All of the elements required to turn the future into a Really Nice Place To Live seem to be there. We just need to have the will to go out and make it happen.

What To Do

I can't speak for you, of course. For myself, having come to these conclusions, the answer was simple: everything I can spare. I'm not going to make myself destitute in service to a beneficial singularity, because it is a gamble. A huge one. But it's also the single gamble in all the world with the highest payoff.

I spent several months looking for how I could contribute. I'm not smart enough to go build friendly AI myself (if I didn't already know that, reading General Intelligence and Seed AI and Creating Friendly AI made it stunningly, abundantly clear), but I do make a reasonable amount of money, and whoever does build friendly AI is going to need a fair bit of it to finance the project, especially if it's to be done before the US Military figures out strong nanotechnology.

I encourage anyone who has been moved by this essay to go and do the same thing I did: search for how you can best help. But I will tell you my conclusion regardless. I decided that the man who wrote the essays I linked to just above was clearly smart enough to pull off not just a general AI, nor merely a general AI smart enough to become super-intelligent, but an AI that will still be truly Friendly when it gets there. Furthermore, that man, Eliezer S. Yudkowsky (whom I've mentioned elsewhere here several times) is the only person truly stepping up to the plate on these issues.

Maybe someone will come along who is smarter than Eliezer and for some reason they choose not to work together (Ben Goertzel's joining the SIAI now makes this seem unlikely), but in the mean time, I've started diverting as much money as I can afford to The Singularity Institute, and I have no intention of stopping until one of my assumptions is shown to be incorrect, or a better option comes along.

I wouldn't hold your breath on either of those possibilities, if I were you.

Regardless, I urge you to support a beneficial singularity in whatever way seems best to you. There really is no more important endeavor in all the world. Not a single one.