As stated, the question comes down to acting on an opinion you have about an unknown, but (within the premises of this problem) potentially knowable, conclusion about your own utility function. Namely: which is larger, 1) the amount of positive utility you gain from knowing the most disutile truths that exist for you, OR 2) the amount of utility you gain from knowing the most utile falsehoods that exist for you?
ALMOST by definition of the word utility, you would choose the truth (white box) if and only if 1) is larger, and you would choose the falsehood (black box) if and only if 2) is larger. I say "almost by definition" because all answers of the form "I would choose the truth even if it were worse for me" are really statements that the utility you place on the truth is higher than Omega has assumed, which violates the assumption that Omega knows your utility function and speaks truthfully about it.
I say ALMOST by definition also because we have to consider the other piece of the puzzle: when I open box 2), there is a machine that "will reprogram your mind." Does this change anything? It depends on which utility function Omega uses to make her calculations of my long-term utility. Is Omega using my utility function BEFORE the machine reprograms my mind, or after? And is the me after the reprogramming really still me? Within the spirit of the problem, I think we must assume that 1) utility happens to be maximized both for the me before the reprogramming and for the me after it (perhaps my utility function does not change at all in the reprogramming), and 2) Omega has correctly included the disutility I assign to that particular programming change, and factored it into her calculations, so that the proposed falsehood and mind reprogramming do in fact, on net, give the maximum utility I can get from knowing the falsehood PLUS being reprogrammed.
Within these constraints, we find that the "ALMOST" above can be removed if we include the (dis)utility I have for the reprogramming in the calculation. So:
Which is larger: 1) the amount of positive utility you gain from knowing the most disutile truths that exist for you, OR 2) the amount of utility you gain from knowing the most utile falsehoods that exist for you AND being reprogrammed to believe them?
So ultimately, the question of which we would choose reduces to the question above. I think to say anything else is to say "my utility is not my utility," i.e. to contradict yourself.
In my case, I would choose the white box. On reflection, considering the long run, I doubt that there is a falsehood-plus-reprogramming combination that I would accept as more utile than the worst true fact (with no reprogramming to consider) that I would ever expect to get. Certainly, this is the Occam's razor answer, the ceteris paribus answer. GENERALLY, we believe that knowing more is better for us than being wrong. Generally, we believe that someone else meddling with our minds carries a high disutility for us.
For completeness, I think the following are straightforward conclusions from "playing fair" in this question, i.e. from accepting an Omega as postulated:
1) If Omega assures you the utility for 2) (including the disutility of the reprogramming as experienced by your pre-reprogrammed self) is 1% higher than the utility of 1), then you want to choose 2), the falsehood and the reprogramming. To give any other answer is to presume that Omega is wrong about your utility, which violates the assumptions of the question.
2) If Omega assures you the utilities for 2) and 1) are equal, it doesn't matter which one you choose. As much as you might think "all other things being equal, I'll choose the truth," you must accept that the value you place on the truth has already been factored in, and the blip-up from choosing the truth will be balanced by some other disutility in a non-truth area. Since you can be pretty sure that the utility you place on the truth is very much unrelated to pain and pleasure and joy and love and so on, you are virtually guaranteed to FEEL worse choosing the truth, though that worse feeling will just barely be worth it.
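The two conclusions above amount to a trivial decision rule once you accept Omega's numbers as authoritative. A minimal sketch (the function and variable names are my own, not part of the problem; the key point is that the reprogramming disutility is already baked into the falsehood's number, per Omega's stipulation):

```python
def choose(u_truth, u_falsehood_with_reprogram):
    """Decide between Omega's boxes, taking her utility estimates as correct.

    u_truth: utility of learning the most disutile truth (white box).
    u_falsehood_with_reprogram: utility of the most utile falsehood,
        net of the disutility of being reprogrammed (black box).
    """
    if u_falsehood_with_reprogram > u_truth:
        return "black box (falsehood + reprogramming)"
    if u_falsehood_with_reprogram < u_truth:
        return "white box (truth)"
    return "either (indifferent)"

# Conclusion 1): falsehood rated 1% higher -> take the black box.
print(choose(100.0, 101.0))
# Conclusion 2): exactly equal -> genuinely indifferent.
print(choose(100.0, 100.0))
```

The rule is deliberately dumb: any temptation to add a tie-breaker for truth is exactly the move the argument above forbids, since your valuation of truth is already inside `u_truth`.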
Finally, I have tried to play nice within the question. But it is entirely possible, and I would say likely, that there could never be an Omega who knows ahead of time, in the detail required, what your future utility will be, at least not in our Universe. Consider just the quantum uncertainties (or future Everett universe splits). It seems most likely that your future net utility covers a broad range of outcomes across different Everett branches. In that case, there is very likely no one truth that minimizes your utility in all your possible futures, and no one falsehood that maximizes it in all of them. We would then have a distribution of utility outcomes from 1) and a distribution from 2), and it is not clear that we know how to choose between two different distributions. Possibly utility is defined in such a way that it is the expectation value that "truly" matters to us, but that puts what I think is a very serious constraint on utility functions and how we interact with them, one I am not sure could be supported.
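To make the distribution worry concrete, here is a toy illustration (all numbers invented). Two outcome distributions over branches have identical expectations, so a pure expectation-maximizer is indifferent between them; but any agent sensitive to spread (e.g. one who penalizes variance) is not, and nothing in the problem statement settles which kind of agent you are:

```python
# Each distribution is a list of (probability, utility) pairs over branches.
truth_outcomes = [(0.5, -10.0), (0.5, 30.0)]   # risky: wide spread of branches
falsehood_outcomes = [(1.0, 10.0)]             # safe: the same in every branch

def expectation(dist):
    """Probability-weighted mean utility of a distribution."""
    return sum(p * u for p, u in dist)

def variance(dist):
    """Probability-weighted variance of utility around its mean."""
    m = expectation(dist)
    return sum(p * (u - m) ** 2 for p, u in dist)

print(expectation(truth_outcomes))      # 10.0
print(expectation(falsehood_outcomes))  # 10.0  -- equal expectations
print(variance(truth_outcomes))         # 400.0
print(variance(falsehood_outcomes))     # 0.0   -- very different risk
```

If "utility" is by definition the thing whose expectation you maximize, the variance comparison is irrelevant by construction; that is precisely the strong constraint on utility functions questioned above.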
Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and impossible to anticipate.
A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect. Instead, they produce something that matches the criteria even better than anything you were aware of.