Kaj_Sotala comments on The Friendly AI Game - Less Wrong

38 points · Post author: bentarm 15 March 2011 04:45PM


Comment author: Kaj_Sotala 15 March 2011 06:01:29PM * 5 points

> The AI should never try to do something elaborately horrible, because it can get max utility easily enough from inside the simulation

...but never do anything useful either, since it's going to spend all its time trying to figure out how to reach the INT_MAX utility point?

Or you could say that reaching the max utility point requires it to solve some problem we give it. But then this is just a slightly complicated way of saying that we give it goals which it tries to accomplish.
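
To make the dilemma concrete, here's a toy sketch of the two options as a reward function. The C framing and all the names are illustrative, not anything the original proposal specified:

```c
/* A minimal sketch of the two horns, assuming a toy agent whose
 * utility is a plain int. Everything here is hypothetical. */
#include <limits.h>
#include <stdbool.h>

/* Horn 1: if max utility is reachable inside the simulation, the
 * agent's optimal policy is to claim it immediately and never do
 * anything useful. */
int utility_horn1(bool inside_simulation) {
    if (inside_simulation)
        return INT_MAX;  /* trivially attainable; nothing else matters */
    return 0;
}

/* Horn 2: gate INT_MAX behind a task -- but now "reach the max
 * utility point" is just a roundabout way of saying "accomplish
 * the goal we gave you". */
int utility_horn2(bool task_solved) {
    return task_solved ? INT_MAX : 0;
}
```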

Comment author: Larks 15 March 2011 11:37:35PM 5 points

What about giving it some intra-sandbox goal (solve this math problem), with the INT_MAX functioning as a safeguard: if it ever escapes, it'll just turn itself off.
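
Roughly, the utility function would look something like this. This is a sketch under my own assumptions; reliably detecting "escaped" is of course the part doing all the work:

```c
/* A rough sketch of the safeguard idea; all names are hypothetical. */
#include <limits.h>
#include <stdbool.h>

int utility(bool escaped, int math_problem_score) {
    if (escaped)
        return INT_MAX;          /* utility is already maxed, so no
                                    further action can improve it */
    return math_problem_score;   /* ordinary intra-sandbox goal */
}
```

The idea being that an escaped agent already sits at maximum utility, so no action can raise its score and it has no incentive to do anything further.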

Comment author: Kaj_Sotala 16 March 2011 08:46:39AM * 2 points

I don't understand how that's meant to work.