
entirelyuseless comments on Nick Bostrom says Google is winning the AI arms race - Less Wrong Discussion

3 Post author: polymathwannabe 05 October 2016 06:50PM


You are viewing a single comment's thread.

Comment author: entirelyuseless 07 October 2016 01:42:25AM 0 points

On the basis of thinking long and hard about it.

Some people think that intelligence should be defined as optimization power. But suppose you had a magic wand that could convert anything it touched into gold. Whenever you touch any solid object with it, it immediately turns to gold. That happens in every environment with every kind of object, and it happens no matter what impediments you try to set up to prevent it. You cannot stop it from happening.

In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments.

But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

I would propose an alternative definition. Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

The most intelligent AI we have is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data. Many of Eliezer's philosophical mistakes concerning AI arise from this fact. He assumes that the AI we have is close to being intelligent, and therefore concludes that intelligent behavior is similar to the behavior of such programs. One example of that was the case of AlphaGo, where Eliezer called it "superintelligent with bugs," rather than admitting the obvious fact that it was better than Lee Sedol, but not much better, and only at Go, and that it generally played badly when it was in bad positions.

The orthogonality thesis is a mistake of the same kind; something that is limited to seeking a narrow goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

But in relation to your original question, the point is that the most intelligent AI we have is incredibly stupid. Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far from intelligent machines. And there is no such magical point, as is evident in the development of children, who pass imperceptibly from stupid to intelligent.

Comment author: username2 07 October 2016 02:50:45AM 2 points

Your example of a magic wand doesn't sound correct to me. On what basis is a Midas touch "optimizing"? It is powerful, yes, but why "optimizing"? A supernova that vaporizes entire planets is powerful, but not optimizing. Seems like a strawman.

Defining intelligence as pattern recognition is not new. Ben Goertzel has espoused this view for some twenty years, and, I believe, written a book on the subject. I'm not sure I buy the strong connection with "recognizing the abstract concept of a goal" and such, however. There are plenty of conceivable architectures that are incapable of this meta-level thinking, yet are nevertheless capable of producing arbitrarily complex intelligent behavior.

Regarding your last point, your terminology is unnecessarily obscure. There doesn't have to be a "magic point" -- it could simply be a matter of correct software, but insufficient data or processing power. A human baby is a very stupid device, incapable of doing anything intelligent. But with experiential data and processing time it becomes a very powerful general intelligence over the course of 25 years, without any designer intervention. You bring up this very point yourself, which seems to undercut your claim.

Comment author: entirelyuseless 07 October 2016 05:12:24AM 0 points

Also, the wand is optimizing. The reason is that it doesn't just carry out some fixed chemical process that works in some circumstances: it works no matter what particular circumstances it is in. In just the same way, a paperclipper produces paperclips no matter what circumstances it starts out in.

A supernova on the other hand does not optimize, because it produces different results in different situations.