
DanielLC comments on The idiot savant AI isn't an idiot - Less Wrong Discussion

8 Post author: Stuart_Armstrong 18 July 2013 03:43PM


Comments (133)


Comment author: DanielLC 18 July 2013 09:42:19PM 2 points

If your goal is to create paperclips, and you have the option to change your goal to creating staples, it's pretty clear that taking advantage of this option would not result in more paperclips, so you would ignore the option.
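The argument can be sketched as a toy calculation. This is a minimal illustration with hypothetical outcome numbers, not anything from the original discussion: an agent that evaluates options by its *current* goal (paperclips) will score "switch goals" poorly under that very goal.

```python
# Toy sketch of goal stability (hypothetical numbers): the agent ranks
# actions by how many paperclips result, because paperclips are its
# current goal. Switching to a staple goal yields no paperclips.

def paperclips_produced(action):
    # Outcomes of each option, measured in paperclips (the current goal).
    outcomes = {
        "keep_paperclip_goal": 1000,   # future self keeps making paperclips
        "switch_to_staple_goal": 0,    # future self makes staples instead
    }
    return outcomes[action]

best = max(["keep_paperclip_goal", "switch_to_staple_goal"],
           key=paperclips_produced)
print(best)  # -> keep_paperclip_goal
```

The point is that the choice is made under the present goal, so self-modification away from that goal is never selected.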

Comment author: Lumifer 19 July 2013 07:30:07PM -1 points

How well do you think this logic works for humans?

Comment author: DanielLC 19 July 2013 08:28:12PM 2 points

Humans tend towards being adaptation-executers rather than utility-maximizers. That does make them less dangerous, in that it makes them less intelligent. But if you programmed a self-modifying AI like that, it would still be at least as dangerous as a human capable of programming an AI. There's also the simple fact that you can't tell beforehand whether it's leaning too far toward the utility-maximization side.

Comment author: Lumifer 19 July 2013 08:45:54PM 1 point

... in that it makes them less intelligent.

Isn't that circular reasoning? I have a feeling that in this context "intelligent" is defined as "maximizing utility".

And what is an "adaptation-executer"?

Comment author: DanielLC 19 July 2013 10:07:34PM 1 point

I have a feeling that in this context "intelligent" is defined as "maximizing utility".

Pretty much.

If you just want to create a virtuous AI for some deontological reason, then its being less intelligent isn't a problem. If you want to get things done, then it is. The AI being subject to Dutch book betting only helps you insofar as the AI's goals differ from yours and you don't want it to succeed.
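To make the Dutch-book point concrete, here is a toy money-pump sketch. The cyclic preferences and fee amounts are hypothetical, chosen only to illustrate why an agent whose preferences are not utility-consistent can be exploited: it pays for each "upgrade" around a preference cycle and ends up back where it started, poorer.

```python
# Toy Dutch book / money pump (hypothetical preferences): an agent with
# cyclic preferences A > B > C > A accepts each trade it prefers, paying
# a small fee per trade, and cycles back to its starting holding.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred)

def trade(holding, offer, money, fee=1):
    # Accept the offer (and pay the fee) only if the agent prefers it.
    if (offer, holding) in prefers:
        return offer, money - fee
    return holding, money

holding, money = "A", 10
for offer in ["C", "B", "A"]:  # walk the preference cycle once
    holding, money = trade(holding, offer, money)

print(holding, money)  # -> A 7 (same holding, three fees poorer)
```

An agent with a consistent utility function over outcomes would refuse at least one of these trades, which is why Dutch-book vulnerability is used as a marker of incoherent goals.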

And what is an "adaptation-executer"?

See Adaptation-Executors, not Fitness-Maximizers.