Vladimir_Nesov comments on Open Thread: November 2009 - Less Wrong

3 [deleted] 02 November 2009 01:18AM




Comment author: Vladimir_Nesov 09 November 2009 03:50:48AM 4 points

Expected utility is not something that "goes up" as the AI develops. It is the utility of everything the AI expects to achieve, ever. The AI may obtain more information about what the outcome will be, but any given piece of evidence can move the estimate either up or down, with no way to know in advance which way it will go -- if you could predict the direction, you would already have updated.
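The "no way to know in advance" property is conservation of expected evidence: before an observation arrives, the expectation of the updated estimate equals the current estimate. A minimal sketch with a made-up two-outcome model (all numbers here are illustrative assumptions, not from the comment):

```python
# Hypothetical toy model: the final outcome has utility 1 ("good") or
# 0 ("bad"), so expected utility equals P(good). The agent observes
# noisy evidence bits that are correct with probability P_CORRECT.
P_CORRECT = 0.8

def update(p_good, says_good):
    """Bayesian update of P(good) on one noisy evidence bit."""
    if says_good:
        num = p_good * P_CORRECT
        den = num + (1 - p_good) * (1 - P_CORRECT)
    else:
        num = p_good * (1 - P_CORRECT)
        den = num + (1 - p_good) * P_CORRECT
    return num / den

def expected_next(p_good):
    """Expectation, taken before seeing the bit, of the updated P(good)."""
    p_says_good = p_good * P_CORRECT + (1 - p_good) * (1 - P_CORRECT)
    return (p_says_good * update(p_good, True)
            + (1 - p_says_good) * update(p_good, False))

# Each individual update moves the estimate up or down, but the
# pre-observation expectation of the new estimate is the old estimate:
# there is no predictable drift.
for p in (0.2, 0.5, 0.9):
    assert abs(expected_next(p) - p) < 1e-12
print("conservation of expected evidence holds")
```

So expected utility behaves like a martingale under evidence, which is why "the AI's expected utility keeps going up" cannot describe mere information-gathering.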

Comment author: Jordan 09 November 2009 05:29:56AM *  0 points

Can you elaborate? I understand what you wrote (I think) but don't see how it applies.

Comment author: Vladimir_Nesov 11 November 2009 11:22:46PM *  1 point

Hmm, I don't see how it applies either, at least under default assumptions -- as I recall, this piece of cached thought was regurgitated instinctively after I sloppily skimmed your comment and hit the phrase

This utility is monotonic in time, that is, it never decreases, and is bounded from above.

which I for some reason read as confusing utility with expected utility. My apologies; I should be more conscientious, at least about the things I actually comment on...
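The distinction that caused the mix-up can be made concrete. The quoted property (nondecreasing, bounded above) is coherent for realized utility accumulated over time, and any such sequence converges; expected utility has no such guarantee. A small sketch, with made-up illustrative sequences:

```python
# Hypothetical accumulated ("realized") utility, as in the quoted phrase:
# nondecreasing in time and bounded above, hence convergent.
BOUND = 1.0
realized = [BOUND * (1 - 0.5 ** t) for t in range(20)]  # 0, 0.5, 0.75, ...
assert all(a <= b for a, b in zip(realized, realized[1:]))  # monotone
assert all(u <= BOUND for u in realized)                    # bounded above

# Expected utility, by contrast, is an estimate of the eventual total and
# is free to drop when bad news arrives (made-up example trajectory).
expected = [0.6, 0.7, 0.4, 0.55, 0.5]
assert any(a > b for a, b in zip(expected, expected[1:]))   # can decrease
print("monotone-bounded property applies to realized, not expected, utility")
```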

Comment author: Jordan 12 November 2009 12:02:46AM 0 points [-]

No worries. I'd still be curious to hear your thoughts, as I haven't received any responses that help me understand how this utility function might fail. Should I expand on the original post?