
DanielLC comments on AI caught by a module that counterfactually doesn't exist - Less Wrong Discussion

9 points · Post author: Stuart_Armstrong · 17 November 2014 05:49PM




Comment author: DanielLC 17 November 2014 09:54:58PM · 0 points

Is there a difference between utility indifference and false beliefs? The von Neumann–Morgenstern utility theorem operates entirely on expected value, and does not distinguish between high probability and high magnitude of value.
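[Editorial sketch, not part of the original thread: the point above can be made concrete with a toy expected-utility calculation. The agent names and numbers below are invented for illustration; the only claim is the arithmetic one, that a world's contribution to expected utility is probability × utility, so zeroing either factor removes that world from every decision identically.]

```python
# Under VNM, an outcome's contribution to expected utility is
# probability * utility, so zeroing either factor silences that
# outcome in exactly the same way.

def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs over exclusive worlds."""
    return sum(p * u for p, u in outcomes)

# Agent A holds a false belief: the "caught" world is impossible (P = 0),
# so even a huge penalty there contributes nothing.
agent_a = expected_utility([(0.0, -10**6), (1.0, 7.0)])   # -> 7.0

# Agent B is utility-indifferent: the "caught" world carries utility 0,
# so its probability contributes nothing either.
agent_b = expected_utility([(0.1, 0.0), (0.9, 7.0)])      # -> 6.3

# In both cases the "caught" term is exactly 0, so it cannot affect
# which action maximizes expected utility.
```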

Comment author: Stuart_Armstrong 17 November 2014 10:56:18PM · 0 points

False beliefs might be contagious (spreading to other beliefs), and lead to logical problems with things like P(A)=P(B)=1 and P(A and B)<1 (or when impossible things happen).

Comment author: DanielLC 18 November 2014 01:50:38AM · -1 points

If it believes something is impossible, then when it sees proof to the contrary it assumes there's something wrong with its sensory input. If it has utility indifference, then when it sees that the universe is one it doesn't care about, it acts on the tiny chance that there's something wrong with its sensory input. I don't see a difference. If you use Solomonoff induction and set a prior to zero, everything will work fine. Even a superintelligent AI won't be able to use Solomonoff induction, and realistically Bayes' theorem won't describe its beliefs quite accurately, but that's true regardless of whether it assigns zero probability to something.
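[Editorial sketch, not part of the original thread: the "set a prior to zero" claim rests on a standard property of Bayes' theorem. The function and numbers below are invented for illustration; the claim shown is only that a zero prior yields a zero posterior no matter how strong the evidence, whereas any nonzero prior can be updated.]

```python
# Bayes' theorem for a binary hypothesis H given evidence E:
# P(H|E) = P(H) * P(E|H) / P(E). A zero prior can never be revived.

def posterior(prior_h, likelihood_e_given_h, likelihood_e_given_not_h):
    """P(H|E) via Bayes' theorem."""
    p_e = (prior_h * likelihood_e_given_h
           + (1 - prior_h) * likelihood_e_given_not_h)
    return prior_h * likelihood_e_given_h / p_e

# Evidence overwhelmingly favoring H cannot move a zero prior:
print(posterior(0.0, 0.999, 0.001))   # 0.0

# A merely tiny prior, by contrast, gets updated upward:
print(posterior(1e-9, 0.999, 0.001))
```

So an agent with a literal zero prior attributes any contrary observation to something else (e.g. faulty sensors), which is the behavior described above.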

Comment author: Stuart_Armstrong 18 November 2014 10:39:12AM · 1 point

That's not how utility indifference works. I'd recommend skimming the paper (http://www.fhi.ox.ac.uk/utility-indifference.pdf), then asking me if you still have questions.