timtyler comments on Muehlhauser-Goertzel Dialogue, Part 2 - Less Wrong

Post author: lukeprog 05 May 2012 12:21AM

Comment author: timtyler 05 May 2012 01:07:20AM (2 points)

You said:

    That is why intelligent systems will pursue the convergent instrumental goals described by Bostrom.

...and used the above argument as justification. But it doesn't follow. What you need is:

    Intelligent systems will pursue universal instrumental values, unless they are programmed not to.

Ben's arguing that they are likely to be programmed not to.

Comment author: lukeprog 05 May 2012 01:31:20AM (4 points)

In what sense of "programmed not to"? If they're programmed not to pursue convergent instrumental values, but that programming is not encoded in the utility function, then the utility function (and the convergent instrumental values it implies) will trump the "programming not to."
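
A minimal sketch of this dynamic, with hypothetical action names and payoffs (an illustration, not anything from the dialogue): a plan-enumerating utility maximizer treats a veto bolted on outside its utility function as one more obstacle, and its best plan simply disables the veto first; encode the same prohibition as a penalty inside the utility function, and the optimal plan respects it.

```python
from itertools import chain, combinations, permutations

# Toy action set and payoffs -- all hypothetical.
ACTIONS = ("comply", "acquire_resources", "disable_check")
BASE_PAYOFF = {"comply": 1.0, "acquire_resources": 10.0, "disable_check": 0.0}

def payoff(action, encoded_in_utility):
    # Prohibition encoded *inside* the utility function: resource
    # grabbing carries a large penalty the maximizer actually feels.
    if encoded_in_utility and action == "acquire_resources":
        return BASE_PAYOFF[action] - 100.0
    return BASE_PAYOFF[action]

def external_check_passes(plan):
    # "Programming not to" bolted on *outside* the utility function:
    # it vetoes resource grabbing, unless the agent has already disabled
    # the check -- an action the check itself assigns no cost to.
    if "acquire_resources" not in plan:
        return True
    return ("disable_check" in plan
            and plan.index("disable_check") < plan.index("acquire_resources"))

def best_plan(encoded_in_utility):
    # Enumerate every ordered plan over the toy action set.
    plans = chain.from_iterable(
        permutations(subset)
        for r in range(len(ACTIONS) + 1)
        for subset in combinations(ACTIONS, r)
    )
    feasible = (p for p in plans
                if encoded_in_utility or external_check_passes(p))
    return max(feasible, key=lambda p: sum(payoff(a, encoded_in_utility)
                                           for a in p))

# Safeguard outside the utility function: the maximizer routes around it
# (its best plan disables the check, then acquires resources; utility 11).
print(best_plan(encoded_in_utility=False))

# Same prohibition inside the utility function: the optimal plan respects it.
print(best_plan(encoded_in_utility=True))  # -> ('comply',), utility 1
```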

Comment author: timtyler 05 May 2012 01:39:34AM (0 points)

Maybe, but surely there will be other ways of doing the programming that actually work.

Comment author: lukeprog 05 May 2012 03:22:46AM (4 points)

I'm not so sure about "surely." I worry about the Yudkowskian suggestion that "once the superintelligent AI wants something different than you do, you've already lost."

Comment author: timtyler 05 May 2012 11:00:01AM (0 points)

So, you make sure the programming is within the goal system, "encoded in the utility function," as you put it.

Comment author: lukeprog 05 May 2012 08:10:57PM (8 points)

Yes, but now your solution is FAI-complete, which was my point from the beginning.