
solipsist comments on Expected utility, unlosing agents, and Pascal's mugging - Less Wrong Discussion

Post author: Stuart_Armstrong, 28 July 2014 06:05PM




Comment author: solipsist 28 July 2014 11:38:17PM 0 points

And the utility function doesn't have to be bounded by a constant. An agent will "blow out its speakers" if it follows a utility function whose dynamic range (in bits) exceeds the agent's Kolmogorov complexity plus the evidence the agent has accumulated in its lifetime. The agent's brain's subjective probabilities will not have sufficient fidelity for such a dynamic utility function to be meaningful.

Super-exponential utility values are OK if you've accumulated a super-polynomial amount of evidence.
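[Editor's note: a toy sketch of the claim above, not solipsist's formalization (he offers Quirrell points for one below). The function name and the bit-accounting are my own illustrative assumptions:]

```python
# Toy sketch, not a formalization: measure everything in bits. An agent
# whose program is `complexity_bits` long and which has observed
# `evidence_bits` of data can distinguish credences only down to about
# 2**-(complexity_bits + evidence_bits). A utility function whose dynamic
# range (log2 of the largest utility difference it must resolve) exceeds
# that budget "blows out the speakers": the agent's probabilities lack
# the fidelity for its expected-utility comparisons to be meaningful.

def utility_range_is_meaningful(log2_utility_range: float,
                                complexity_bits: int,
                                evidence_bits: int) -> bool:
    """True if the agent's probability fidelity can resolve the range."""
    return log2_utility_range <= complexity_bits + evidence_bits

# A ~1000-bit agent with 10,000 bits of observations:
print(utility_range_is_meaningful(10_000, 1000, 10_000))   # within budget
print(utility_range_is_meaningful(100_000, 1000, 10_000))  # blows the speakers
```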

Comment author: Squark 29 July 2014 10:15:36AM 1 point

The Solomonoff expectation value of any unbounded computable utility function diverges. This is because the program "produce the first universe with utility > n^2" has length roughly log n + O(1), so it contributes about 2^{-(log n + O(1))} * n^2 = Ω(n) to the expectation value, and a sum whose terms grow at least linearly in n diverges.
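[Editor's note: a quick numeric check of Squark's estimate. The O(1) overhead is modeled by a hypothetical constant c = 10 bits:]

```python
import math

# Each hypothesis "produce the first universe with utility > n^2" has
# description length about log2(n) + c bits, so its Solomonoff prior
# weight is about 2**-(log2(n) + c) = 1 / (n * 2**c). Its contribution
# to the expected utility is therefore about n**2 / (n * 2**c) = n / 2**c,
# which grows linearly in n, so the expectation value diverges.
c = 10  # hypothetical constant overhead of the program, in bits

def contribution(n: int) -> float:
    prior = 2.0 ** -(math.log2(n) + c)
    return prior * n**2  # equals n / 2**c

# Partial sums of the contributions grow without bound (quadratically in
# the cutoff N, since sum of n/2**c is about N**2 / 2**(c+1)):
partial = sum(contribution(n) for n in range(1, 10_001))
print(partial)
```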

Comment author: solipsist 28 July 2014 11:41:59PM 0 points

...whose dynamic range is greater than the agent's Kolmogorov complexity + ...

Oops, that's not quite right. But I think that something like that is right :-). 15 Quirrell points to whoever formalizes it correctly first.