
DanielLC comments on AI caught by a module that counterfactually doesn't exist

Post author: Stuart_Armstrong 17 November 2014 05:49PM (9 points)

Comment author: DanielLC 18 November 2014 01:50:38AM -1 points

If the AI believes something is impossible, then when it sees apparent proof to the contrary, it concludes there's something wrong with its sensory input. If it has utility indifference, then when it sees that the universe is one it doesn't care about, it acts on the tiny chance that there's something wrong with its sensory input. I don't see a difference. If you use Solomonoff induction and set a prior to zero, everything works fine: the posterior is proportional to the prior, so a zero prior stays at zero no matter what evidence arrives. Even a superintelligent AI won't be able to use Solomonoff induction exactly, and realistically its beliefs won't follow Bayes' theorem precisely, but that's true regardless of whether it assigns zero probability to something.
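To make the zero-prior point concrete, here is a minimal sketch (the function and the numbers are illustrative choices of mine, not anything from the comment or the paper): under Bayes' theorem the posterior of a hypothesis is proportional to its prior, so a hypothesis assigned exactly zero stays at zero however strongly the evidence favors it.

```python
# Minimal sketch: a hypothesis with prior probability zero never
# recovers under Bayesian conditioning, no matter the evidence.

def bayes_update(prior, likelihood, likelihood_other):
    """Posterior P(H | observation) for a binary hypothesis.

    prior:            P(H) before the observation
    likelihood:       P(observation | H)
    likelihood_other: P(observation | not H)
    """
    evidence = likelihood * prior + likelihood_other * (1.0 - prior)
    return likelihood * prior / evidence

p = 0.0  # the "impossible" hypothesis: prior set to exactly zero
for _ in range(10):
    # Each observation is ~1000x more likely under H than under not-H,
    # yet the posterior never moves off zero.
    p = bayes_update(p, likelihood=0.999, likelihood_other=0.001)

print(p)  # 0.0
```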

Comment author: Stuart_Armstrong 18 November 2014 10:39:12AM 1 point

That's not how utility indifference works. I'd recommend skimming the paper (http://www.fhi.ox.ac.uk/utility-indifference.pdf) and then asking me if you still have questions.
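For readers who don't follow the link: below is a toy numeric sketch of the compensating-offset idea at the heart of the paper, as I understand it (the numbers and names are my own, and the paper's actual construction is more careful). The agent's utility in worlds where a designated event fires is shifted by a constant chosen so that its expected utility matches the no-event branch, leaving it no incentive to promote or prevent the event.

```python
# Toy sketch of utility indifference (my illustration, not the
# paper's formalism): offset the utility in "event fires" worlds so
# both branches have equal expected utility for the agent's policy.

# Hypothetical expected utilities for some fixed policy:
eu_if_event = 3.0      # E[U | event fires]
eu_if_no_event = 10.0  # E[U | event does not fire]

# Compensating offset, applied only in worlds where the event fires:
offset = eu_if_no_event - eu_if_event

def corrected_utility(raw_utility, event_fired):
    """Raw utility, plus the offset in worlds where the event fired."""
    return raw_utility + offset if event_fired else raw_utility

# After the correction the two branches have equal expected utility,
# so the agent gains nothing by manipulating whether the event fires:
print(eu_if_event + offset == eu_if_no_event)  # True
```

Note the contrast with DanielLC's framing above: the agent isn't betting on faulty sensors; its utility function is constructed so that the event channel simply doesn't matter to it.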