This is a thought that occurred to me on my way to classes today; I'm sharing it for feedback.
Omega appears before you and, after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games, presents you with a choice between two boxes. These boxes do not contain money; they contain information. One box is white and contains a true fact that you do not currently know; the other is black and contains false information that you do not currently believe. Omega advises you that the true fact is not misleading in any way (i.e., not a fact that will cause you to make incorrect assumptions and lower the accuracy of your probability estimates), and is fully supported with enough evidence both to prove to you that it is true and to enable you to independently verify its truth for yourself within a month. The false information is demonstrably false, and is something that you would disbelieve if presented outright, but if you open the box to discover it, a machine inside the box will reprogram your mind such that you will believe it completely, thus leading you to believe other related falsehoods as you rationalize away the discrepancies.
Omega further advises that, within those constraints, the true fact is one that has been optimized to inflict upon you the maximum amount of long-term disutility for a fact in its class, should you now become aware of it, and the false information has been optimized to provide you with the maximum amount of long-term utility for a belief in its class, should you now begin to believe it over the truth. You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch. Which box do you choose, and why?
(This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories: it would be more accurate to still have those memories and take the time and effort to come to terms with the trauma... but it may present much greater utility to be without them, and thus without the trauma altogether. This is obviously related to the valley of bad rationality, but since there clearly exist maximally helpful lies and maximally harmful truths, it would be useful to know which categories of facts are generally hazardous, and whether or not there are categories of lies which are generally helpful.)
As stated, the question comes down to acting on an opinion you hold about an unknown, but (within the terms of this problem) potentially knowable, fact about your own utility function, namely: which is larger, 1) the utility you gain from knowing the most disutile truth that exists for you, or 2) the utility you gain from believing the most utile falsehood that exists for you?
ALMOST by definition of the word utility, you would choose the truth (white box) if and only if 1) is larger, and the falsehood (black box) if and only if 2) is larger. I say almost by definition because all answers of the form "I would choose the truth even if it was worse for me" are really statements that the utility you place on the truth is higher than Omega has assumed, which violates the assumption that Omega knows your utility function and speaks truthfully about it.
The ALMOST also has to cover the other piece of the puzzle: when I open box 2), there is a machine that "will reprogram your mind." Does this change anything? Well, it depends on which utility function Omega is using to make her calculations of my long-term utility. Is Omega using my utility function BEFORE the machine reprograms my mind, or after? Is the me after the reprogramming really still me? I think within the spirit of the problem we must assume that 1) the utility happens to be maximized both for the me before the reprogramming and the me after it (perhaps my utility function does not change at all in the reprogramming), and 2) Omega has correctly included the amount of disutility I assign to that particular programming change, and has factored it into her calculations, so that the proposed falsehood and mind reprogramming do in fact, on net, give the maximum utility I can get from believing the falsehood PLUS being reprogrammed.
Within these constraints, we find that the "ALMOST" above can be removed if we include the (dis)utility I have for the reprogramming in the calculation. So:
Which is larger: 1) the utility you gain from knowing the most disutile truth that exists for you, OR 2) the utility you gain from believing the most utile falsehood that exists for you AND being reprogrammed to believe it?
So ultimately, the question of which box we would choose is the question above. I think to say anything else is to say "my utility is not my utility," i.e. to contradict yourself.
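As a sketch of that comparison in symbols (the notation here is mine, not anything given in the problem): let U be your current utility function, T the disutility-optimized truth, F the utility-optimized falsehood, and R the reprogramming event. Then

\[
\text{choose the white box} \iff U(\text{know } T) \;>\; U(\text{believe } F \text{ and undergo } R),
\]

where Omega stipulates that T minimizes U(know t) over the true facts t in its class, and that the pair (F, R) maximizes U(believe f and undergo R_f) over the falsehoods f in its class.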
In my case, I would choose the white box. On reflection, considering the long run, I doubt that there is any falsehood-plus-reprogramming combination that I would accept as more utile than the worst true fact (with no reprogramming to consider) that I would ever expect to get. Certainly, this is the Occam's razor answer, the ceteris paribus answer. GENERALLY, we believe that knowing more is better for us than being wrong. Generally, we believe that someone else meddling with our minds carries a high disutility for us.
For completeness, I think the following are straightforward conclusions from "playing fair" in this question, i.e. from accepting an Omega as postulated.
1) If Omega assures you the utility for 2) (including the disutility of the reprogramming as experienced by your pre-reprogrammed self) is 1% higher than the utility of 1), then you want to choose 2), i.e. the falsehood and the reprogramming. To give any other answer is to presume that Omega is wrong about your utility, which violates the assumptions of the question.
2) If Omega assures you the utility for 2) and 1) are equal, it doesn't matter which one you choose. As much as you might think "all other things being equal, I'll choose the truth," you must accept that the value you place on the truth has already been factored in, and the blip up from choosing the truth will be balanced by some other disutility in a non-truth area. Since you can be pretty sure that the utility you place on the truth is largely unrelated to pain and pleasure and joy and love and so on, you are virtually guaranteeing that you will FEEL worse choosing the truth, and that this worse feeling will only just barely be worth it.
Finally, I have tried to play nice within the question. But it is entirely possible, and I would say likely, that there can never be an Omega who could know ahead of time, in the kind of detail required, what your future utility would be, at least not in our Universe. Consider just the quantum uncertainties (or future Everett branch splits). It seems most likely that your future net utility covers a broad range of outcomes across different Everett branches. In that case, it seems very likely that there is no one truth that minimizes your utility in all of your possible futures, and no one falsehood that maximizes it in all of your possible futures. We would then have a distribution of utility outcomes from 1) and a distribution from 2), and it is not clear that we know how to choose between two different distributions. Possibly utility is defined in such a way that it is the expectation value that "truly" matters to us, but that puts, I think, a very serious constraint on utility functions and how we interact with them, one I am not sure could be supported.
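To make that last worry concrete, here is a toy sketch in Python (every number and both branch-by-branch outcome lists below are made up solely for illustration; nothing here comes from the problem as posed): two choices can have identical expected utility across Everett branches while the distributions behind them look nothing alike, so an agent that compares only expectation values has no grounds to prefer either.

```python
from statistics import mean, pstdev

# Hypothetical per-branch utilities for each choice, across five equally
# weighted Everett branches.  All of these numbers are invented purely for
# illustration.
truth_branches = [4, 5, 5, 6, 5]         # tight spread around its mean
falsehood_branches = [12, 9, 6, -1, -1]  # same mean, wildly uneven branches

for label, outcomes in [("truth (white box)", truth_branches),
                        ("falsehood (black box)", falsehood_branches)]:
    print(f"{label:>22}: E[U] = {mean(outcomes):.2f}, "
          f"spread = {pstdev(outcomes):.2f}, worst branch = {min(outcomes)}")

# Both lists have E[U] = 5.00, so an agent that compares only expectation
# values is exactly indifferent here, even though one choice has branches at
# -1 and the other never drops below 4.  Preferring one of them anyway means
# your preferences are not fully captured by the expectation value, which is
# the constraint questioned above.
```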
Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (since it needs to know that in order to also know everything you don't know, and thus to be able to pose the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing utility function just to maximize an easier target would...