Manfred comments on Logical uncertainty, kind of. A proposal, at least. - Less Wrong

8 Post author: Manfred 13 January 2013 09:26AM



Comment author: Manfred 14 January 2013 06:18:29PM

For the robot as described, this will actually happen (sort of like Wei Dai's comment - I'm learning a lot from discussing with you guys :D ). The robot only lowers something's probability once it proves something about that thing specifically, so it lowers the probability of most of its infinite options by some big exponential, and then, er, runs out of time trying to pick the option with highest utility. Okay, so there might be a small flaw.
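A minimal sketch of the failure mode described above, with hypothetical names and a stand-in for the proof step (this is an illustration of the flaw, not the actual robot): the agent can only discount an option's probability after proving something about that specific option, so with infinitely many options it spends its entire budget discounting and never gets around to choosing.

```python
import itertools

def toy_agent(step_budget=1000, penalty=2 ** -10):
    """Hypothetical agent illustrating the flaw: probability updates
    happen one option at a time, only after a proof about that option,
    over an infinite stream of options."""
    probs = {}
    steps = 0
    # Infinitely many candidate options, enumerated one at a time.
    for option in itertools.count():
        if steps >= step_budget:
            # Budget exhausted while still discounting options:
            # never reached the "pick the best option" stage.
            return None
        # Stand-in for "prove something about this option specifically";
        # assume each proof eventually succeeds after some work.
        steps += 1
        probs[option] = penalty  # lowered by some big exponential
    # Unreachable: the enumeration never ends on its own.
```

Calling `toy_agent()` returns `None` for any finite budget, which is the "runs out of time" behavior: the selection step is starved by the endless per-option proof work.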