
Stuart_Armstrong comments on Model of unlosing agents - Less Wrong Discussion

Post author: Stuart_Armstrong, 02 August 2014 07:59AM




Comment author: Stuart_Armstrong, 02 August 2014 08:48:38PM, 1 point

(Sorry for giving you another answer, but it seems useful to separate the points.)

"But in order to win, we have to find the right utility function and maximize that one."

The "right" utility function. Which we don't currently know. And yet we can make decisions until the day we get it, and still make ourselves unexploitable in the meantime. The first AIs may well be in a similar situation.
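A minimal sketch of the unexploitability idea, under assumptions not spelled out in the comment: an agent with no settled utility function can still refuse to be money-pumped by committing to each pairwise choice the first time it faces it and never reversing. The class name `UnlosingAgent` and its interface are hypothetical illustrations, not anything from the post; a full unlosing agent would also need to avoid intransitive cycles across three or more options, which this toy version does not address.

```python
class UnlosingAgent:
    """Toy agent: no utility function, just commitment to past choices.

    Hypothetical sketch. It cannot be cycled between two options at a
    cost, because once a pairwise choice is made it is never reversed.
    """

    def __init__(self):
        # Maps an unordered pair of options to the option chosen for it.
        self._commitments = {}

    def choose(self, a, b):
        key = frozenset((a, b))
        if key not in self._commitments:
            # No settled preference yet: commit to an arbitrary option now.
            self._commitments[key] = a
        return self._commitments[key]


agent = UnlosingAgent()
first = agent.choose("apple", "banana")
# Later offers of the same pair, in either order, get the same answer,
# so a trader cannot pump the agent around an apple/banana cycle.
assert agent.choose("banana", "apple") == first
```

The point of the sketch is only that consistency, not a known "right" utility function, is what blocks exploitation in the two-option case.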