This is a thought that occurred to me on my way to classes today; I'm sharing it for feedback.
Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games, presents you with a choice between two boxes. These boxes do not contain money; they contain information. One box is white and contains a true fact that you do not currently know; the other is black and contains false information that you do not currently believe. Omega advises you that the true fact is not misleading in any way (i.e., not a fact that will cause you to make incorrect assumptions and lower the accuracy of your probability estimates), and is fully supported with enough evidence both to prove to you that it is true, and to enable you to independently verify its truth for yourself within a month. The false information is demonstrably false, and is something that you would disbelieve if presented outright; but if you open the box to discover it, a machine inside the box will reprogram your mind such that you will believe it completely, thus leading you to believe other related falsehoods as you rationalize away discrepancies.
Omega further advises that, within those constraints, the true fact is one that has been optimized to inflict upon you the maximum amount of long-term disutility for a fact in its class, should you now become aware of it, and the false information has been optimized to provide you with the maximum amount of long-term utility for a belief in its class, should you now begin to believe it over the truth. You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch. Which box do you choose, and why?
(This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to still have those memories, taking the time and effort to come to terms with the trauma... but it would present much greater utility to be without them, and thus without the trauma altogether. This is obviously related to the valley of bad rationality, but since there clearly exist maximally beneficial lies and maximally harmful truths, it would be useful to know which categories of facts are generally hazardous, and whether there are categories of lies which are generally helpful.)
I would pick the black box, but it's a hard choice. Given all the usual suppositions about Omega as a sufficiently trustworthy superintelligence, I would assume that the utilities really were as it said and take the false information. But it would be a painful choice, both because I want to be the kind of person who pursues and acts upon the truth, and because I would be desperately curious to know what sort of true and non-misleading belief could cause that much disutility -- was Lovecraft right after all? I'd probably try to bargain with Omega to let me know the true belief for just a minute before erasing it from my memory -- but still, in the Least Convenient Possible World where my curiosity was never satisfied, I'd hold my nose and pick the black box.
Having answered the hypothetical, I'll go on and say that I'm not sure there's much to take from it. Clearly, I don't value Truth for its own sake over and beyond all other considerations, let the heavens fall -- but I never thought I did, and I doubt many here do. The point is that in the real world, where we don't yet have trustworthy superintelligences, the general rule that your plans will go better when you use an accurate map doesn't seem to admit of exceptions (and little though I understand Friendly AI, I'd be willing to bet that this rule holds post-singularity). Yes, there are times when you might be better off with a false belief, but you can't predictably know in advance when that is -- black swan blow-ups, etc.
To be more concrete, I don't think there's any real-world analogue to the hypothetical. If a consortium of the world's top psychiatrists announced that, no really, believing in God makes people happier, more productive, more successful, etc., and that this conclusion holds even for firm atheists who work for years to argue themselves into knots of self-deception, and that this conclusion has the strongest sort of experimental support that you could expect in this field, I'd probably just shrug and say "I defy the data". When it comes to purposeful self-deception, it really would take Omega to get me on board.
Nobody makes plans based on totally accurate maps. Good maps contain simplifications of reality that allow you to make better decisions. You start to teach children how atoms work by putting the image of atoms as spheres into their heads. You don't start by teaching them a model that's up to date with current scientific knowledge of how atoms work. The current model is more accurate but less useful for the children.
You calculate how airplanes fly with Newton's equations instead of using Einstein's.
In social situations it can also often help to avoid gettin...