wedrifid comments on Problem of Optimal False Information - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (113)
Good edge case.
Close to it. The only obvious deviations from the optima center on two things: the possible inherent disutility of a universe in which you decided to hold false beliefs and then in fact held false beliefs for some time, and the possibly reduced utility assigned to universes in which you are granted a favourable universe rather than creating it yourself.
This seems right. The exceptions easiest to conceive are minds whose utility functions place specific high weight on the events immediately surrounding the decision itself (i.e. the "other edge" case).
Well, when I said "alter the universe into its worst and best possible configurations", I had in mind a literal rewrite of the absolute total state of the universe, such that for that then-universe its computable past was also the best/worst possible past (or something similarly inconceivable to us that a superintelligence could come up with in order to produce the absolute best/worst possible universe). For example: modifying the then-universe's past so that you had taken the other box, and that that box had the same effect as the one you actually picked.
However, upon further thought, that feels very much like cheating, and like arguing by definition.
Also, for the "opposite/other edge", I had considered minds with utility functions centered on the decision itself, with conditionals against reality-alteration, spacetime rewrites, and so on. But those all seem to amount to "break the premises and Omega's predictions by begging the question!", much like the above, so they're fun to think about but useless in other respects.