AlexMennen comments on Logical uncertainty, kind of. A proposal, at least. - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (35)
I don't think this is too hard to fix. The robot could have a unit of improper prior (epsilon) that it remembers is larger than zero but smaller than any positive real.
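One way to picture such an epsilon is to track each probability as a pair (standard real part, coefficient on epsilon) and compare pairs lexicographically, so that epsilon comes out greater than zero but smaller than every positive real. This is just an illustrative sketch, not anything from the post; the class name `EpsProb` and its fields are made up for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: a probability with an infinitesimal component.
# order=True makes comparisons lexicographic on (real, eps), so any
# positive real part dominates any epsilon coefficient.
@dataclass(frozen=True, order=True)
class EpsProb:
    real: float      # standard real part
    eps: float = 0.0 # coefficient on the improper prior unit epsilon

# The unit of improper prior: larger than zero, smaller than any positive real.
EPSILON = EpsProb(0.0, 1.0)

assert EpsProb(0.0) < EPSILON          # epsilon > 0
assert EPSILON < EpsProb(1e-300)       # epsilon < any positive real
```

The robot could then assign `EPSILON` instead of a literal zero to statements it has disproved-but-not-quite, without ever being forced to round that mass up to a real number.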
Of course, this doesn't tell you what to do when asked to guess whether the order of the correct finite simple group is even, which might be a pretty big drawback.
For the robot as described, this will actually happen (sort of like Wei Dai's comment - I'm learning a lot from discussing with you guys :D ). It only actually lowers something's probability once it proves something about that thing specifically, so it lowers the probability of most of its infinite options by some big exponential, and then, er, runs out of time trying to pick the option with the highest utility. Okay, so there might be a small flaw.