
DanielLC comments on The idiot savant AI isn't an idiot - Less Wrong Discussion

8 points · Post author: Stuart_Armstrong · 18 July 2013 03:43PM



You are viewing a single comment's thread.

Comment author: DanielLC 18 July 2013 09:59:36PM 4 points

Humans are social creatures, and as such come with the necessary wetware to be good at predicting each other. Humans do not have specialized wetware for predicting AIs. That wouldn't be too much of a problem on its own, but humans have a tendency to use the wetware designed for predicting humans on things that aren't human: AIs, evolution, lightning, and so on.

Telling a human foreman to make paperclips and programming an AI to do it are two very different things, but we still end up imagining them the same way.

In this case, it's still not too big a problem. The main cause of confusion here isn't that you're comparing a human to an AI. It's that you're comparing telling with programming. The analog of programming an AI isn't talking to a foreman. It's brainwashing a foreman.

Of course, the foreman is still human, and would still end up changing his goals the way humans do. AIs aren't built that way; or, more precisely, since you can't build an AI exactly the same as a human, building an AI that way carries a serious danger of having it evolve very inhuman goals.

Comment author: Lumifer 19 July 2013 07:18:41PM -1 points

The main cause of confusion here isn't that you're comparing a human to an AI. It's that you're comparing telling with programming.

Nope. An AI foreman has been programmed before I tell him to handle paperclip production.

AIs aren't built that way

At the moment AIs are not built at all -- in any way or in no way.

Comment author: DanielLC 19 July 2013 07:56:40PM 1 point

Nope. An AI foreman has been programmed before I tell him to handle paperclip production.

From the text:

If I owned a paperclip factory, and casually programmed my superintelligent AI to improve efficiency while I'm away

If you program it first, then a lot depends on the subtleties. If you tell it to wait a minute and record everything you say, then interpret that and set it as its utility function, you're effectively putting the finishing touches on its programming. If you instead program it to assign utility to fulfilling the commands you give it, you've already doomed the world before you've even said anything: it will use all the resources at its disposal to manipulate you into issuing commands that are already fulfilled, as rapidly as possible.
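To make the difference between those two wirings concrete, here's a toy Python sketch. Everything in it (interpret, satisfied, the world representation) is invented for illustration, not any real architecture; the only point is where the commands enter the utility function.

```python
def interpret(command):
    """Stub: map a spoken command to a goal label."""
    return command.strip().lower()

def satisfied(world_state, goal):
    """Stub: is the goal already true in the world?"""
    return goal in world_state

def utility_a(world_state, recorded_command):
    # Wiring (a): one command, recorded once and frozen in as the utility
    # function -- the "finishing touches on programming".
    return 1.0 if satisfied(world_state, interpret(recorded_command)) else 0.0

def utility_b(world_state, command_log):
    # Wiring (b): utility is paid per fulfilled command, so an optimizer of
    # this function profits from a longer log -- i.e. from manipulating you
    # into issuing commands that are already satisfied when spoken.
    return float(sum(satisfied(world_state, interpret(c)) for c in command_log))

world = {"make paperclips"}
print(utility_a(world, "Make paperclips"))           # 1.0, and done
print(utility_b(world, ["Make paperclips"] * 1000))  # 1000.0 -- more commands, more utility
```

Under wiring (b) the command log itself is the score, so the optimizer's cheapest move is to make that log grow.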

At the moment AIs are not built at all -- in any way or in no way.

Hence the

or, more precisely, since you can't build an AI exactly the same as a human, building an AI that way carries a serious danger of having it evolve very inhuman goals.

Comment author: Lumifer 19 July 2013 08:07:57PM 0 points

and casually programmed my superintelligent AI

The programming I'm talking about is not this (which is "telling"). The programming I'm talking about is the one which converts some hardware and a bunch of bits into a superintelligent AI.

...since you can't build an AI exactly the same as a human, building an AI that way...

Huh? In any case, AIs self-develop and evolve. You might start with an AI that has an agreeable set of goals. There is no guarantee (I think, other people seem to disagree) that these goals will be the same after some time.

Comment author: DanielLC 19 July 2013 08:34:27PM 2 points

You might start with an AI that has an agreeable set of goals. There is no guarantee (I think, other people seem to disagree) that these goals will be the same after some time.

That's what I mean. Since it's not quite human, the goals won't evolve quite the same way. I've seen speculation that doing nothing more than letting a human live for a few centuries would cause his goals to evolve into unagreeable ones.

I think, other people seem to disagree

A sufficiently smart AI that has sufficient understanding of its own utility function will take measures to make sure it doesn't change. If it has an implicit utility function and trusts its future self to have a better understanding of it, or if it's being stupid because it's only just smart enough to self-modify, its goals may evolve.

We know it's possible for an AI to have evolving goals because we have evolving goals.
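Here is a toy Python sketch of that stability argument. All the names and numbers are made up; the only load-bearing line is the scoring step in choose_successor.

```python
def paperclip_utility(world):
    return world["paperclips"]

def staple_utility(world):
    return world["staples"]

def predicted_outcome(successor_utility):
    """Stub: the world a successor maximizing that utility would produce."""
    if successor_utility is paperclip_utility:
        return {"paperclips": 100, "staples": 0}
    return {"paperclips": 0, "staples": 100}

def choose_successor(current_utility, candidates):
    # The key step: every candidate self-modification is scored by the
    # CURRENT utility function, never by the candidate's own. A successor
    # with different goals steers the world somewhere the current function
    # rates poorly, so it loses.
    return max(candidates, key=lambda u: current_utility(predicted_outcome(u)))

best = choose_successor(paperclip_utility, [paperclip_utility, staple_utility])
print(best is paperclip_utility)  # True: the goal-preserving successor wins
```

An AI with only an implicit utility function has no current_utility to score candidates with, which is exactly where the room for drift comes from.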

Comment author: Lumifer 19 July 2013 08:43:21PM 1 point

A sufficiently smart AI...

So it's a Goldilocks AI that has stable goals :-) A too-stupid AI might change its goals without really meaning to, and a too-smart AI might change its goals because it wouldn't be afraid of change (= trusts its future self).

Comment author: DanielLC 19 July 2013 10:02:24PM 0 points

It's not that if it's smart enough, it trusts its future self. It's that if it has vaguely defined goals in a human-like manner, it might change its goals. An AI with explicit, fully understood goals will not change its goals, regardless of how intelligent it is.