AI risk and chaos theory

Is AI dangerous?

Over at Marginal Revolution, there’s an article about the risks of AI. Recently, visionaries such as Bill Gates, Elon Musk and Stephen Hawking have warned of the dangers of artificial intelligence; MR guest blogger Ramez Naam has argued that, in fact, serious AI researchers don’t believe that AI poses a threat.

Is he correct or not? (Correct about whether AI actually poses a risk, that is, not about what the experts say.)

Let's talk about computers. Right now, and for the foreseeable future, computers are what we would call “stupid but fast.” That is, they are prone to making dumb mistakes, yet they can compute incredibly quickly and have (virtually) unlimited memory.

What they lack is consciousness, imagination, creativity, and common sense. What happens when computers gain these things, yet still maintain the lightning-fast processing power and access to petabytes of memory?

Well, first off, we don't have rigorous definitions of any of those things. While we can say that we're creative and computers aren't, and we know vaguely what that means, trying to pin down where creativity really begins is troublesome. But let's set that aside and say that computers soon get to the point where they have, or appear to have, creativity, imagination, and common sense. When they get these, will they be a threat to humanity?

Let's look at the scenarios below.

First scenario: for whatever reason, computer intelligence is impossible. Maybe it's because the neural networks which support the brain can only be built from “biological” materials, or maybe it's because we need souls to be intelligent (and God doesn't give them to machines). I have no idea how likely either of these is, but I'll throw them out there.

Second scenario: there is a hard limit to intelligence, a thing can only be so smart, and that limit just happens to coincide with the human brain. Therefore, while we may be able to build computers that are as smart as humans, we can't build any that are smarter than humans. (The only reason to think this is true is that intelligence has advantages from a reproductive standpoint, so any Darwinian system which creates a certain level of intelligence will quickly evolve to reach the maximum possible intelligence, which is exactly what happened to us.) I doubt this scenario even more than the first one (which itself is a tad bit unlikely), mainly because we can always imagine having more memory or calculating faster, and we've already mastered both for computers.

The third scenario is that within any given complex thought system there is a tradeoff between potential creativity and raw computational power; that is, more computation will “overload” an entity's consciousness and prevent it from being creative/wise/sensible. While I again seriously doubt that this is the case, I'll mention it.

The fourth scenario is one where “smart” computers don't have what we would call a personality or an ego; they have creativity and common sense without a sense of self.

The fifth is that, in order to have our set of criteria (common sense, creativity, consciousness), they must also possess a sense of self.

Let's talk further about the fifth scenario. And let's remember what a computer intelligence is: once we can build a machine that is smarter than we are, it is somewhat safe to assume that the machine can build a machine smarter than itself, and through repeated iteration we can get incredibly powerful machines. The question we should ask ourselves is whether a goal we give to the first iteration of the machine will be preserved through the iterated versions. Presumably the machine would never purposely override its own goals (the reason why should be obvious). But there's something to consider: chaos theory.

What is the difference between this:

[Image: a Game of Life starting pattern (“Chaos - long”)]

And this:

[Image: a nearly identical Game of Life starting pattern (“Chaos - short”)]

If you aren't familiar with Conway's Game of Life, it's really just a simple set of rules applied to a grid of “cells”, which is then iterated. It's a great example of chaos theory in action. The two starting states above are almost entirely the same, yet the first generates a pattern that lasts for 365 iterations before stabilizing, while the second lasts only 12 iterations. There's no real difference between the two patterns, and no way to determine which one will last longer except by running the iterations and seeing that one lasts while the other doesn't. Similarly, there's a very famous theorem in computer science that you cannot devise a method to determine whether an arbitrary computer program will terminate (the halting problem).
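
To make this concrete, here's a minimal sketch of the Game of Life in Python (my own illustration, not anything from the images above; the two seed patterns are made up, not the ones pictured). It measures how many generations a pattern runs before it dies out or repeats a previous state; the only way to learn which seed runs longer is to actually run the iterations.

```python
from collections import Counter

def step(cells):
    """Apply one generation of Conway's Game of Life to a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or if it is currently live and has exactly 2 live neighbours.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in cells)}

def lifetime(cells, limit=1000):
    """Run until the pattern dies out or revisits a previous state; return the generation count."""
    seen = {frozenset(cells)}
    for generation in range(1, limit + 1):
        cells = step(cells)
        if not cells or frozenset(cells) in seen:
            return generation
        seen.add(frozenset(cells))
    return limit  # still going after `limit` generations

# Two arbitrary seeds that differ by a single cell.
seed_a = {(0, 0), (1, 0), (2, 0), (0, 1), (1, 2)}
seed_b = seed_a | {(4, 4)}
print(lifetime(seed_a), lifetime(seed_b))
```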

What if this applies to artificial intelligence? What if, when editing its own code, there's no way for a machine to know what will happen without actually running it? If that's the case, there may not be any way for us to create an artificial intelligence and have any real say in what it will be like.
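
For what it's worth, the halting problem mentioned above rests on a simple diagonal construction, which I can sketch here (the function names `make_paradox` and `always_yes` are my own illustrative choices): given any candidate “will this program halt?” checker, you can build a program the checker must get wrong, so no fully general checker can exist.

```python
def make_paradox(halts):
    """Given any candidate halting-checker `halts(program)`, build a program it must misjudge."""
    def paradox():
        if halts(paradox):
            while True:   # the checker said "halts", so loop forever
                pass
        # the checker said "loops forever", so halt immediately
    return paradox

# Try it with a (necessarily wrong) candidate that always answers "yes, it halts":
always_yes = lambda program: True
p = make_paradox(always_yes)
print(always_yes(p))  # True -- yet calling p() would actually loop forever
```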

There's another idea, one different from chaos theory per se. That is that, by making a computer system “smarter”, the outcome is neither directed (where we can build the final system the way we want by building the initial system correctly) nor chaotic (where any set of initial conditions has a profound but unpredictable impact on how the end result behaves), but convergent: any sufficiently advanced system will, regardless of its starting point, converge onto a single “type” or personality. The simplest mode is suicidal, that any sufficiently advanced form of intelligence will decide it's better off not existing, and then simply delete itself.

The other types are ones that hate us (that is, they feel contempt for us, similar to how we feel about rats or insects), ones that love us (or at least wish to preserve the earth in some manner, including us), or ones that believe we will get in their way (even just incidentally; perhaps the general AI decides the farms used to feed us would be better suited to some other end, and there we go).

Perhaps, if my chaos-theory idea is right, the machines will have the ability to make themselves smarter, but they will simply refuse to do so, afraid of what they might change into.

All of this is of course almost pure speculation. I don't have any experience programming artificial general intelligence, and neither, for that matter, does anybody else. But if there's one thing I've learned from the admittedly minor programming experience I do have, it's that programs never do what you want them to on the first attempt; and if we're talking about building intelligences which could become hyperintelligent, then we may only get one chance.
