Manfred comments on Logical uncertainty, kind of. A proposal, at least. - Less Wrong
For the robot as described, this will actually happen (sort of like Wei Dai's comment - I'm learning a lot from discussing with you guys :D ). It only lowers an option's probability once it proves something about that option specifically, so it just lowers the probability of most of its infinitely many options by some big exponential factor, and then, er, runs out of time trying to pick the option with the highest expected utility. Okay, so there might be a small flaw.
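A minimal sketch of that failure mode, not from the original post: the names (`default_prior`, `proved_utility`, the particular proof results) are all hypothetical stand-ins, and the exponential prior is just an illustrative choice. The point is only that an argmax over infinitely many options, most of which keep their tiny default probability, never terminates.

```python
import itertools

def default_prior(option_index):
    # Exponentially small default probability for options the agent
    # has not proved anything about yet (illustrative choice).
    return 2.0 ** -(option_index + 1)

def proved_utility(option_index):
    # Stand-in for the theorem prover: a utility is available only for
    # the few options the agent has had time to reason about.
    proven = {0: 1.0, 1: 3.0, 2: 2.0}  # hypothetical proof results
    return proven.get(option_index)

def expected_utility(option_index):
    utility = proved_utility(option_index)
    if utility is not None:
        return utility  # updated by a proof about this specific option
    return default_prior(option_index)  # everything else stays near zero

def pick_best_option():
    # The flaw described above: scanning all options for the maximum
    # expected utility never returns when the option space is infinite.
    best, best_eu = None, float("-inf")
    for i in itertools.count():  # infinite loop: the agent runs out of time
        eu = expected_utility(i)
        if eu > best_eu:
            best, best_eu = i, eu
    return best  # unreachable
```

Calling `pick_best_option()` hangs by design: with infinitely many candidates and no stopping rule, the argmax never finishes, which is exactly the "small flaw".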