This is a thought that occurred to me on my way to classes today; sharing it for feedback.
Omega appears before you and, after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games, presents you with a choice between two boxes. These boxes do not contain money; they contain information. One box is white and contains a true fact that you do not currently know; the other is black and contains false information that you do not currently believe. Omega advises you that the true fact is not misleading in any way (i.e., not a fact that will cause you to make incorrect assumptions and lower the accuracy of your probability estimates), and is fully supported with enough evidence both to prove to you that it is true and to enable you to independently verify its truth for yourself within a month. The false information is demonstrably false, and is something that you would disbelieve if presented outright; but if you open the box to discover it, a machine inside the box will reprogram your mind so that you believe it completely, leading you to believe other related falsehoods as you rationalize away the discrepancies.
Omega further advises that, within those constraints, the true fact is one that has been optimized to inflict upon you the maximum amount of long-term disutility for a fact in its class, should you now become aware of it, and the false information has been optimized to provide you with the maximum amount of long-term utility for a belief in its class, should you now begin to believe it over the truth. You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch. Which box do you choose, and why?
(This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to retain those memories and take the time and effort to come to terms with the trauma... but it might present much greater utility to be without them, and thus without the trauma altogether. This is obviously related to the valley of bad rationality, but since there clearly exist maximally helpful lies and maximally harmful truths, it'd be useful to know which categories of facts are generally hazardous, and whether or not there are categories of lies which are generally helpful.)
Does the utility calculation from the false belief include utility from the other beliefs I will have to overwrite? For example, suppose the false belief is "I can fly". At some point, clearly, I will have to rationalise away the pain of my broken legs from jumping off a cliff. Short of reprogramming my mind to really not feel the pain anymore - and then we're basically talking about wireheading - it seems hard to come up with any fact, true or false, that will provide enough utility to overcome that sort of thing.
I additionally note that the maximum disutility has no lower bound in the problem as given; for all I know, it's the equivalent of three cents. Likewise, the maximum utility has no lower bound. Perhaps Omega ought to provide some calibration; for example, he might say that the disutility of knowing the true fact is at least equal to that of breaking an arm.
As the problem is written, I'd take the true fact.
As written, the utility calculation explicitly specifies 'long-term' utility; it is not a narrow calculation. This is Omega we're dealing with; it's entirely possible that it mapped your utility function by scanning your brain, checked all possible universes forward in time from the addition of every possible fact to your mind, and took the worst and best true/false combination.
Accordingly, a false belief that will lead to your death or maiming is almost certainly non-optimal. No, this is the one false belief that has the best long-term consequences.