This is a thought that occurred to me on my way to classes today; sharing it for feedback.

Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games, presents you with a choice between two boxes. These boxes do not contain money; they contain information. One box is white and contains a true fact that you do not currently know; the other is black and contains false information that you do not currently believe. Omega advises you that the true fact is not misleading in any way (i.e., not a fact that will cause you to make incorrect assumptions and lower the accuracy of your probability estimates), and is fully supported with enough evidence to both prove to you that it is true and enable you to independently verify its truth for yourself within a month. The false information is demonstrably false, and is something that you would disbelieve if presented outright; but if you open the box to discover it, a machine inside the box will reprogram your mind such that you will believe it completely, thus leading you to believe other related falsehoods as you rationalize away discrepancies.

Omega further advises that, within those constraints, the true fact is one that has been optimized to inflict upon you the maximum amount of long-term disutility for a fact in its class, should you now become aware of it, and the false information has been optimized to provide you with the maximum amount of long-term utility for a belief in its class, should you now begin to believe it over the truth. You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch.

Which box do you choose, and why?

 

(This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to still have those memories, taking the time and effort to come to terms with the trauma... but it would present much greater utility to be without them, and thus without the trauma altogether. This is obviously related to the valley of bad rationality, but since there clearly exist most optimal lies and least optimal truths, it'd be useful to know which categories of facts are generally hazardous, and whether or not there are categories of lies which are generally helpful.)

113 comments
[-]Kindly350

Least optimal truths are probably really scary and to be avoided at all costs. At the risk of helping everyone here generalize from fictional evidence, I will point out the similarity to the Cthaeh in The Wise Man's Fear.

On the other hand, a reasonably okay falsehood to end up believing is something like "35682114754753135567 is prime", which I don't expect to affect my life at all if I suddenly start believing it. The optimal falsehood can't possibly be worse than that. Furthermore, if you value not being deceived about important things then the optimality of the optimal falsehood should take that into account, making it more likely that the falsehood won't be about anything important.
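(As a quick aside on "demonstrably false": a minimal sketch of how one could check a primality claim like this, assuming a Python environment with the sympy library installed; the snippet only demonstrates the verification method and doesn't presuppose what the answer turns out to be.)

```python
# Sketch: checking the primality claim quoted above.
# Assumes sympy is available (pip install sympy).
from sympy import isprime

n = 35682114754753135567  # the number from the comment above
print(n, "is prime" if isprime(n) else "is composite")
```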

Edit: Would the following be a valid falsehood? "The following program is a really cool video game: "

Would the following be a valid falsehood? "The following program is a really cool video game: "

I think we have a good contender for the optimal false information here.

5Endovior
The problem specifies that something will be revealed to you, which will program you to believe it, even though false. It doesn't explicitly limit what can be injected into the information stream. So, assuming you would value the existence of a Friendly AI, yes, that's entirely valid as optimal false information. Cost: you are temporarily wrong about something, and realize your error soon enough.
4EricHerboso
Except after executing the code, you'd know it was FAI and not a video game, which goes against the OP's rule that you honestly believe in the falsehood continually. I guess it works if you replace "FAI" in your example with "FAI who masquerades as a really cool video game to you and everyone you will one day contact" or something similar, though.
3Endovior
The original problem didn't specify how long you'd continue to believe the falsehood. You do, in fact, believe it, so stopping believing it would be at least as hard as changing your mind in ordinary circumstances (not easy, nor impossible). The code for FAI probably doesn't run on your home computer, so there's that... you go off looking for someone who can help you with your video game code, someone else figures out what it is you've come across and gets the hardware to implement it, and suddenly the world gets taken over. Depending on how attentive you were to the process, you might not correlate the two immediately, but if you were there when the people were running things, then that's pretty good evidence that something more serious than a video game happened.
2Endovior
Yes, least optimal truths are really terrible, and the analogy is apt. You are not a perfect rationalist. You cannot perfectly simulate even one future, much less infinite possible ones. The truth can hurt you, or possibly kill you, and you have just been warned about it. This problem is a demonstration of that fact. That said, if your terminal value is not truth, a most optimal falsehood (not merely a reasonably okay one) would be a really good thing. Since you are (again) not a perfect rationalist, there's bound to be something that you could be falsely believing that would lead you to better consequences than your current beliefs.
[-]bogus290

You should choose the false belief, because Omega has optimized it for instrumental utility whereas the true belief has been optimized for disutility, and you may be vulnerable to such effects if only because you're not a perfectly rational agent.

If you were sure that no hazardous true information (or advantageous false information) could possibly exist, you should still be indifferent between the two choices: either of these would yield a neutral belief, leaving you with very nearly the same utility as before.

9Endovior
That is my point entirely, yes. This is a conflict between epistemic and instrumental rationality; if you value anything higher than truth, you will get more of it by choosing the falsehood. That's how the problem is defined.

Ok, after actually thinking about this for 5 minutes, it's ludicrously obvious that the falsehood is the correct choice, and it's downright scary how long it took me to realize this and how many in the comments seem to still not realize it.

Some tools falsehoods have for not being so bad:

  • sheer chaos-theoretic outcome pumping, a.k.a. "you bash your face into the keyboard randomly believing a pie will appear and it programs a friendly AI", or the lottery example mentioned in other comments.

  • any damage is bounded by the falsehood being able to be obviously insane enough that it won't spread, or even by it causing you to commit suicide so you don't believe it very long, if you really think believing any falsehood is THAT bad.

  • even if you ONLY value the truth, it could give you "all the statements in this list are true:" followed by a list of 99 true statements you are currently wrong about, and one inconsequential false one.

Some tools truths have for being bad:

  • sheer chaos-theoretic outcome pumping, a.k.a. "you run to the computer to type in the code for the FAI you just learnt, but fall down some stairs and die, and the wind currents cause a tornado that kills Eliezer and then r

... (read more)
[-]Kyre100

Thanks, this one made me think.

a machine inside the box will reprogram your mind such that you will believe it completely

It seems that Omega is making a judgement about making some "minimal" change that leaves you the "same person" afterwards, otherwise he can always just replace anyone with a pleasure (or torture) machine that believes one fact.

If you really believe that directly editing thoughts and memories is equivalent to murder, and Omega respects that, then Omega doesn't have much scope for the black box. If Omega doesn't respect those beliefs about personal identity, then he's either a torturer or a murderer, and the dilemma is less interesting.

But ... least convenient possible world ...

Omega could be more subtle than this. Instead of facts and/or mind reprogrammers in boxes, Omega sets up your future so that you run into (apparent) evidence for the fact that you correctly (or mistakenly) interpret, and you have no way of distinguishing this inserted evidence from the normal flow of your life ... then you're back to the dilemma.

[-]maia90

The answer to this problem is only obvious because it's framed in terms of utility. Utility is, by definition, the thing you want. Strictly speaking, this should include any utility you get from your satisfaction at knowing the truth rather than a lie.

So for someone who valued knowing the truth highly enough, this problem actually should be impossible for Omega to construct.

Okay, so you are a mutant, and you inexplicably value nothing but truth. Fine.

The falsehood can still be a list of true things, tagged with 'everything on this list is true', but with an inconsequential falsehood mixed in, and it will still have net long-term utility for the truth-desiring utility function, particularly since you will soon be able to identify the falsehood, and with your mutant mind, quickly locate and eliminate the discrepancy.

The truth has been defined as something that cannot lower the accuracy of your beliefs, yet it still has maximum possible long-term disutility, and your utility function is defined exclusively in terms of the accuracy of your beliefs. Fine. Mutant that you are, the truth of maximum disutility is one which will lead you directly to a very interesting problem that will distract you for an extended period of time, but which you will ultimately be unable to solve. This wastes a great deal of your time, but leaves you with no greater utility than you had before, constituting disutility in terms of the opportunity cost of that time which you could've spent learning other things. Maximum disutility could mean that this is a problem that will occupy you for the rest of your life, stagnating your attempts to learn much of anything else.

4Kindly
Not necessarily: the problem only stipulates that of all truths you are told the worst truth, and of all falsehoods the best falsehood. If all you value is truth and you can't be hacked, then it's possible that the worst truth still has positive utility, and the best falsehood still has negative utility.
1Jay_Schweikert
Can we solve this problem by slightly modifying the hypothetical to say that Omega is computing your utility function perfectly in every respect except for whatever extent you care about truth for its own sake? Depending on exactly how we define Omega's capabilities and the concept of utility, there probably is a sense in which the answer really is determined by definition (or in which the example is impossible to construct). But I took the spirit of the question to be "you are effectively guaranteed to get a massively huge dose of utility/disutility in basically every respect, but it's the product of believing a false/true statement -- what say you?"

You should make the choice that brings highest utility. While truths in general are more helpful than falsehoods, this is not necessarily true, even in the case of a truly rational agent. The best falsehood will, in all probability, be better than the worst truth. Even if you exclusively value truth, there will most likely be a lie that results in you having a more accurate model, and the worst possible truth that's not misleading will have negligible effect. As such, you should choose the black box.

I don't see why this would be puzzling.

I would pick the black box, but it's a hard choice. Given all the usual suppositions about Omega as a sufficiently trustworthy superintelligence, I would assume that the utilities really were as it said and take the false information. But it would be painful, both because I want to be the kind of person who pursues and acts upon the truth, and also because I would be desperately curious to know what sort of true and non-misleading belief could cause that much disutility -- was Lovecraft right after all? I'd probably try to bargain with Omega to let me kn... (read more)

1Endovior
That's exactly why the problem invokes Omega, yes. You need an awful lot of information to know which false beliefs actually are superior to the truth (and which facts might be harmful), and by the time you have it, it's generally too late. That said, the best real-world analogy that exists remains amnesia drugs. If you did have a traumatic experience, serious enough that you felt unable to cope with it, and you were experiencing PTSD or depression related to the trauma that impeded you from continuing with your life... but a magic pill could make it all go away, with no side effects, and with enough precision that you'd forget only the traumatic event... would you take the pill?
1Jay_Schweikert
Okay, I suppose that probably is a more relevant question. The best answer I can give is that I would be extremely hesitant to do this. I've never experienced anything like this, so I'm open to the idea that there's a pain here I simply can't understand. But I would certainly want to work very hard to find a way to deal with the situation without erasing my memory, and I would expect to do better in the long-term because of it. Having any substantial part of my memory erased is a terrifying thought to me, as it's really about the closest thing I can imagine to "experiencing" death. But I also see a distinction between limiting your access to the truth for narrow, strategic reasons, and outright self-deception. There are all kinds of reasons one might want the truth withheld, especially when the withholding is merely a delay (think spoilers, the Bayesian Conspiracy, surprise parties for everyone except Alicorn, etc.). In those situations, I would still want to know that the truth was being kept from me, understand why it was being done, and most importantly, know under what circumstances it would be optimal to discover it. So maybe amnesia drugs fit into that model. If all other solutions failed, I'd probably take them to make the nightmares stop, especially if I still had access to the memory and the potential to face it again when I was stronger. But I would still want to know there was something I blocked out and was unable to bear. What if the memory was lost forever and I could never even know that fact? That really does seem like part of me is dying, so choosing it would require the sort of pain that would make me wish for (limited) death -- which is obviously pretty extreme, and probably more than I can imagine for a traumatic memory.
0[anonymous]
For some genotypes, more trauma is associated with lower levels of depression. Yet, someone experiencing trauma that they are better off continuing to suffer would hypothetically develop learned helplessness and worse depression. But it's true; yet the false belief is more productive. That said, genetic epidemiology is weird and I don't understand the literature beyond this book. I was prompted to investigate it based on some counterintuitive outcomes regarding treatment for psychological trauma and depressive symptomatology, established counterintuitive results about mindfulness and depressive symptoms in Parkinson's and schizophrenia, and some disclosed SNP sequences from a known individual.
0ChristianKl
Nobody makes plans based on totally accurate maps. Good maps contain simplifications of reality to allow you to make better decisions. You start to teach children how atoms work by putting the image of atoms as spheres into their heads. You don't start by teaching them a model that's up to date with the current scientific knowledge of how atoms work. The current model is more accurate but less useful for the children. You calculate how airplanes fly with Newton's equations instead of using Einstein's. In social situations it can also often help to avoid getting certain information. You don't have a job. You ask a friend to get you a job. The job pays well. He assures you that the work you are doing helps the greater good of the world. He however also tells you that some of the people you will work with do things in their private lives that you don't like. Would you want him to tell you that your new boss secretly burns little puppies at night? The boss also doesn't take it kindly if people criticize him for it.
0Jay_Schweikert
Well, yes, I would. Of course, it's not like he could actually say to me "your boss secretly burns puppies -- do you want to know this or not?" But if he said something like "your boss has a dark and disturbing secret which might concern you; we won't get in trouble just for talking about it, but he won't take kindly to criticism -- do you want me to tell you?", then yeah, I would definitely want to know. The boss is already burning puppies, so it's not like the first-level harm is any worse just because I know about it. Maybe I decide I can't work for someone like that, maybe not, but I'm glad that I know not to leave him alone with my puppies. Now of course, this doesn't mean it's of prime importance to go around hunting for people's dark secrets. It's rarely necessary to know these things about someone to make good decisions on a day-to-day basis, the investigation is rarely worth the cost (both in terms of the effort required and the potential blow-ups from getting caught snooping around in the wrong places), and I care independently about not violating people's privacy. But if you stipulate a situation where I could somehow learn something in a way that skips over these concerns, then sure, give me the dark secret!
0ChristianKl
Knowing the dark secret will produce resentment for your boss. That resentment is likely to make it harder for you to get work done. If you see him with a big smile in the morning you won't think: "He seems like a nice guy because he's smiling" but "Is he so happy because he burned puppies yesterday?"
1Jay_Schweikert
Well, maybe. I'm actually skeptical that it would have much effect on my productivity. But to reverse the question, suppose you actually did know this about your boss. If you could snap your fingers and erase the knowledge from your brain, would you do it? Would you go on deleting all information that causes you to resent someone, so long as that information wasn't visibly relevant to some other pending decision?
-1ChristianKl
Deleting information doesn't make emotions go away. Being afraid and not knowing the reason for being afraid is much worse than just being afraid. You start to rationalize the emotions with bogus stories to make the emotions make sense.
0A1987dM
Azatoth built you in such a way that having certain beliefs can screw you over, even when they're true. (Well, I think it's the aliefs that actually matter, but deliberately keeping aliefs and beliefs separate is an Advanced Technique.)
[-]Shmi50

First, your invocation of Everett branches adds nothing to the problem, as every instance of "you" may well decide not to choose. So, "choose or die" ought to be good enough, provided that you have a fairly strong dislike of dying.

Second, the traumatic memories example is great, but a few more examples would be useful. For example, the "truth" might be "discover LW, undergo religious deconversion, be ostracized by your family, get run over by a car while wandering around in a distraught state" whereas the "lie"... (read more)

3Endovior
I didn't have any other good examples on tap when I originally conceived of the idea, but come to think of it... Truth: A scientific formula, seemingly trivial at first, but whose consequences, when investigated, lead to some terrible disaster, like the sun going nova. Oops. Lies involving 'good' consequences are heavily dependent upon your utility function. If you define utility in such a way that allows your cult membership to be net-positive, then sure, you might get a happily-ever-after cult future. Whether or not this indicates a flaw in your utility function is a matter of personal choice; rationality cannot tell you what to protect. That said, we are dealing with Omega, who is serious about those optimals. This really is a falsehood with optimal net long-term utility for you. It might be something like a false belief about lottery odds, which leads to you spending the next couple years wasting large sums of money on lottery tickets... only to win a huge jackpot, hundreds of millions of dollars, and retire young, able to donate huge sums to the charities you consider important. You don't know, but it is, by definition, the best thing that could possibly happen to you as the result of believing a lie, as you define 'best thing'.
3Zaine
If that's what you meant, then the choice is really "best thing in life" or "worst thing in life"; whatever belief leads you there is of little consequence. Say the truth option leads to an erudite you eradicating all present, past, and future sentient life, and the falsehood option leads to an ignorant you stumbling upon the nirvana-space that grants all infinite super-intelligent bliss and Dr. Manhattan-like superpowers (ironically enough): What you believed is of little consequence to the resulting state of the verse(s).
1Kindly
I'd say that this is too optimistic. Omega checks the future and if, in fact, you would eventually win the lottery if you started playing, then deluding you about lotteries might be a good strategy. But for most people that Omega talks to, this wouldn't work. It's possible that the number of falsehoods that have one-in-a-million odds of helping you exceeds a million by far, and then it's very likely that Omega (being omniscient) can choose one that turns out to be helpful. But it's more interesting to see if there are falsehoods that have at least a reasonably large probability of helping you.
0Endovior
True; being deluded about lotteries is unlikely to have positive consequences normally, so unless something weird is going to go on in the future (eg: the lottery machine's random number function is going to predictably malfunction at some expected time, producing a predictable set of numbers; which Omega then imposes on your consciousness as being 'lucky'), that's not a belief with positive long-term consequences. That's not an impossible set of circumstances, but it is an easy-to-specify set, so in terms of discussing 'a false belief which would be long-term beneficial', it leaps readily to mind.
1Luke_A_Somers
Very unlikely, I'd say. (Utility difference = 0 for all a in 'you chose yes or no') is an extremely strong criterion. True.

Edge case:

The truth and falsehood themselves are irrelevant to the actual outcomes, since another superintelligence (or maybe even Omega itself) is directly conditioning on your learning of these "facts" in order to directly alter the universe into its worst and best possible configurations, respectively.

These seem to be absolute optimums as far as I can tell.

If we posit that Omega has actual influential power over the universe and is dynamically attempting to create those optimal information boxes, then this seems like the only possible result... (read more)

1wedrifid
Good edge case. Close to it. The only obvious deviations from the optimums are centered around the possible inherent disutility of having a universe in which you made the decision to have false beliefs and then in fact had false beliefs for some time, and the possible reduced utility assigned to universes in which you are granted a favourable universe rather than creating it yourself. This seems right, and the minds that are an exception here that are most easy to conceive are ones where the problem is centered around specific high emphasis within their utility function on events immediately surrounding the decision itself (i.e. the "other edge" case).
0DaFranker
Well, when I said "alter the universe into its worst and best possible configurations", I had in mind a literal rewrite of the absolute total state of the universe, such that for that then-universe its computable past was also the best/worst possible past (or something similarly inconceivable to us that a superintelligence could come up with in order to have absolute best/worst possible universes), such as modifying the then-universe's past such that you had taken the other box and that that box had the same effect as the one you did pick. However, upon further thought, that feels incredibly like cheating and arguing by definition. Also, for the "opposite/other edge", I had considered minds with utility functions centered on the decision itself with conditionals against reality-alteration and spacetime-rewrites and so on, but those seem to be all basically just "Break the premises and Omega's predictions by begging the question!", similar to above, so they're fun to think about but useless in other respects.

Which box do you choose, and why?

I take the lie. The class of true beliefs has on average a significantly higher utility-for-believing than the class of false beliefs but there is an overlap. The worst in the "true" is worse than the best in "false".

I'd actually be surprised if Omega couldn't program me with a true belief that caused me to drive my entire species to extinction, and probably worse than that. Because superintelligent optimisers are badass and wedrifids are Turing-complete.
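(A toy numerical sketch of that overlap claim, using made-up utility numbers: truths score higher on average, yet with enough samples the worst truth falls below the best falsehood. Nothing here comes from the thread itself; the distributions are purely illustrative.)

```python
import random

random.seed(0)

# Made-up "utility of believing" scores: truths are better on average,
# but the two classes overlap, so the tails can cross.
truths = [random.gauss(10, 5) for _ in range(100_000)]
falsehoods = [random.gauss(0, 5) for _ in range(100_000)]

print(f"mean utility of truths:     {sum(truths) / len(truths):7.2f}")
print(f"mean utility of falsehoods: {sum(falsehoods) / len(falsehoods):7.2f}")
print(f"worst truth (min):          {min(truths):7.2f}")
print(f"best falsehood (max):       {max(falsehoods):7.2f}")
# With this many samples the worst truth lands well below the best falsehood,
# even though truths dominate on average.
```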

[-][anonymous]30

Would the following be a True Fact that is supported by evidence?

You open the white box, and are hit by a poison dart, which causes you to drop into an irreversible, excruciatingly painful, minimally aware coma, where by all outward appearances you look fine, and you find out the world goes downhill, while you get made to live forever, while still having had enough evidence that Yes, the dart DID in fact contain a poison that drops you into an:

irreversible(Evidence supporting this, you never come out of a coma),

excruciatingly painful(Evidence supporting th... (read more)

0mwengler
It does seem to me that the question of which box comes down to whether your utility associated with knowing truth is able to overcome your disutility associated with fear of the unknown. If you are afraid enough, I don't have to torture you to break you, I only have to show you my dentist tools and talk to you about what might be in the white box.
0Endovior
As stated, the only trap the white box contains is information... which is quite enough, really. A prediction can be considered a true statement if it is a self-fulfilling prophecy, after all. More seriously, if such a thing as a basilisk is possible, the white box will contain a basilisk. Accordingly, it's feasible that the fact could be something like "Shortly after you finish reading this, you will drop into an irreversible, excruciatingly painful, minimally aware coma, where by all outward appearances you look fine, yet you find out the world goes downhill while you get made to live forever", and there's some kind of sneaky pattern encoded in the pattern of the text and the border of the page or whatever that causes your brain to lock up and start firing pain receptors, such that the pattern is self-sustaining. Everything else about the world and living forever and such would have to have been something that would have happened anyway, lacking your action to prevent it, but if Omega knows UFAI will happen near enough in the future, and knows that such a UFAI would catch you in your coma and stick you with immortality nanites without caring about your torture-coma state... then yeah, just such a statement is entirely possible.
0DaFranker
But the information in either box is clearly an influence on the universe - you can't just create information. I'm operating under the assumption that Omega's boxes don't violate the entropy principles here, and it just seems virtually impossible to construct a mind such that Omega could not possibly, with sufficient data on the universe, construct a truth and a falsehood which, when learned by you, would result in causal disruption of the world in the worst-possible-by-your-utility-function and best-possible-by-your-utility-function manners respectively. As such, Omega is telling the truth and has fully optimized these two boxes among a potentially-infinite space of facts correlating to a potentially-infinite (unverified) space of causal influences on the world, depending on your mind. To me, it seems >99% likely that opening the white box will result in the worst possible universe for the vast majority of mindspace, and the black box in the best possible universe for the vast majority of mindspace. I can conceive of minds that would circumvent this, but these are not even remotely close to anything I would consider capable of discussing with Omega (e.g. a mind that consists entirely of "+1 utilon on picking Omega's White Box, -9999 utilon on any other choice" and nothing else), and I infer all of those minds to be irrelevant to the discussion at hand since all such minds I can imagine currently are.

As stated, the question comes down to acting on an opinion you have about an unknown (but, within the principles of this problem, potentially knowable) conclusion about your own utility function. And that is, which is larger: 1) the amount of positive utility you gain from knowing the most disutile truth that exists for you, OR 2) the amount of utility you gain from believing the most utile falsehood that exists for you?

ALMOST by definition of the word utility, you would choose the truth (white box) if and only if 1) is larger and you would choose the falsehood (... (read more)
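(A minimal sketch of the comparison this comment sets up; the utility numbers are hypothetical stand-ins, since the problem never discloses Omega's actual values.)

```python
# Hypothetical stand-in utilities; Omega never discloses these.
u_worst_truth = -50.0     # long-term utility of learning the optimally bad truth
u_best_falsehood = 80.0   # long-term utility of believing the optimally good falsehood

# The decision rule described above: take the white box iff (1) exceeds (2).
choice = "white box (truth)" if u_worst_truth > u_best_falsehood else "black box (falsehood)"
print(choice)
```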

1Endovior
Quite a detailed analysis, and correct within its assumptions. It is important to know where Omega is getting its information on your utility function. That said, since Omega implicitly knows everything you know (since it needs to know that in order to also know everything you don't know, and thus to be able to provide the problem at all), it implicitly knows your utility function already. Obviously, accepting a falsehood that perverts your utility function into something counter to your existing utility function just to maximize an easier target would be something of disutility to you as you are at present, and not something that you would accept if you were aware of it. Accordingly, it is a safe assumption that Omega has based its calculations off your utility before accepting the information, and for the purposes of this problem, that is exactly the case. This is your case (2); if a falsehood intrinsically conflicts with your utility function in whatever way, it generates disutility (and thus, is probably suboptimal). If your utility function is inherently hostile to such changes, this presents a limitation on the factset Omega can impose upon you. That said, your personal answer seems to place rather conservative bounds on the nature of what Omega can do to you. Omega has not presented bounds on its utilities; instead, it has advised you that they are maximized within fairly broad terms. Similarly, it has not assured you anything about the relative values of those utilities, but the structure of the problem as Omega presents it (which you know is correct, because Omega has already arbitrarily demonstrated its power and trustworthiness) means you are dealing with an outcome pump attached directly to your utility function. Since the structure of the problem gives it a great deal of room in which to operate, the only real limitation is the nature of your own utility function. Sure, it's entirely possible that your utility function could be laid out in such a wa
1mwengler
Wire-heading, drug-addiction, lobotomy, black-box, all seem similar morally to me. Heck, my own personal black box would need nothing more than to have me believe that the universe is just a little more absurd than I already believe, that the laws of physics and the progress of humanity are a fever-dream, an hallucination. From there I would lower my resistance to wire-heading, drug-addiction. Even if I still craved the "truth" (my utility function was largely unchanged), these new facts would lead me to believe there was less of a possibility of utility from pursuing that, and so the rather obvious utility of drug- or electronic-induced pleasure would win my not-quite-factual day. The white box and a Nazi colonel-dentist with his tools laid out, talking to me about what he was going to do to me until I chose the black box, are morally similar. I do not know why the Nazis/Omega want me to black-box it. I do not know the extent of the disutility the colonel-dentist will actually inflict upon me. I do know my fear is at minimum nearly overwhelming, and may indeed overwhelm me before the day is done. Being broken in the way that those who torture you for a result break you, and choosing the black box, are morally equivalent to me. Abandoning a long-term principle of commitment to the truth in favor of the short-term but very high utility of giving up, the short-term utility of totally abandoning myself into the control of an evil god to avoid his torture, is what I am being asked to do in choosing the black box. It's ALWAYS at least a little scary to choose reality over self-deception, over the euphoria of drugs and painkillers. The utility one derives from making this choice is much colder than the utility one derives from succumbing: it comes more, it seems, from the neo-cortex and less from the limbic system or lizard brain of fast fear responses. My utility AFTER I choose the white box may well be less than if I chose the black box. The scary thing in the white box might b
1fezziwig
Curious to know what you think of Michaelos' construction of the white-box.
0mwengler
Thank you for that link, reading it helped me clarify my answer.

The Bohr model of atomic structure is a falsehood which would have been of tremendous utility to a natural philosopher living a few hundred years ago.

That said, I feel like I'm fighting the hypothetical with that answer - the real question is, should we be willing to self-modify to make our map less accurate in exchange for utility? I don't think there's actually a clean decision-theoretic answer for this, that's what makes it compelling.

0Luke_A_Somers
If you're going that far, you might as well go as far as all modern physical theories, since we know they're incomplete - with the falsehood being wrong only in that it leaves out the bits that demonstrate that they're incomplete.
0Endovior
That is the real question, yes. That kind of self-modification is already cropping up, in certain fringe cases as mentioned; it will get more prevalent over time. You need a lot of information and resources in order to be able to generally self-modify like that, but once you can... should you? It's similar to the idea of wireheading, but deeper... instead of generalized pleasure, it can be 'whatever you want'... provided that there's anything you want more than truth.

For me, the falsehood is the obvious choice. I don't particularly value truth as an end (or rather, I do, but there are ends I value several orders of magnitude more). The main reason to seek to have true beliefs if truth is not the end goal is to ensure that you have accurate information regarding how well you're achieving your goal. By ensuring that the falsehood is high-utility, that problem is fairly well mitigated.

My beliefs are nowhere near true anyway. One more falsehood is unlikely to make a big difference, while there is a large class of psychologically harmful truths that can make a large (negative) difference.

Not really relevant, but

Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games

I idly wonder what such a proof would look like. E.g. is it actually possible to prove this to someone without presenting them an algorithm for superintelligence, sufficiently commented that the presentee can recognise it as such? (Perhaps I test it repeatedly until I am satisfied?) Can Omega ever prove its own trustworthiness to me if I don't already trust it? (This feels like a solid Gödelian "no".)

0Endovior
I don't have a valid proof for you. Omega is typically defined like that (arbitrarily powerful and completely trustworthy), but a number of the problems I've seen of this type tend to just say 'Omega appears' and assume that you know Omega is the defined entity simply because it self-identifies as Omega, so I felt the need to specify that in this instance, Omega has just proved itself. Theoretically, you could verify the trustworthiness of a superintelligence by examining its code... but even if we ignore the fact that you're probably not equipped to comprehend the code of a superintelligence (really, you'll probably need another completely trustworthy superintelligence to interpret the code for you, which rather defeats the point), there's still the problem that an untrustworthy superintelligence could provide you with a completely convincing forgery, which could potentially be designed in such a way that it would perform every action in the same way as the real one would (in that way being evaluated as 'trustworthy' under simulation)... except the one on which the untrustworthy superintelligence is choosing to deceive you. Accordingly, I think that even a superintelligence probably can't be sure about the trustworthiness of another superintelligence, regardless of evidence.

This doesn't sound that hypothetical to me: it sounds like the problem of which organizations to join. Rational-leaning organizations will give you true information you don't currently know, while anti-rational organizations will warp your mind to rationalize false things. The former, while not certain to be on net bad, will lead you to unpleasant truths, while people in anti-rational groups are often duped into a kind of happiness.

0Endovior
Sure, that's a valid way of looking at things. If you value happiness over truth, you might consider not expending a great deal of effort in digging into those unpleasant truths, and retain your pleasant illusions. Of course, the nature of the choice is such that you probably won't realize that it is such a choice until you've already made it.

I'm not sure this scenario even makes sense as a hypothetical. At least for me personally, I find it doubtful that my utility could be improved according to my current function by being made to accept a false belief that I would normally reject outright.

If such a thing is possible, then I'd pick the false belief, since utility is necessarily better than disutility and I'm in no position to second guess Omega's assurance about which option will bring more, and there's no meta-utility on the basis of which I can be persuaded to choose things that go against ... (read more)

6thomblake
Vaguely realistic example: You believe that the lottery is a good bet, and as a result win the lottery. Hollywood example: You believe that the train will leave at 11:10 instead of 10:50, and so miss the train, setting off an improbable-seeming sequence of life-changing events such as meeting your soulmate, getting the job of your dreams, and finding a cure for aging. Omega example: You believe that "hepaticocholangiocholecystenterostomies" refers to surgeries linking the gall bladder to the kidney. This subtly changes the connections in your brain such that over time you experience a great deal more joy in life, as well as curing your potential for Alzheimer's.
1Desrtopa
The first example sounds like something that Omega might actually be able to forecast, so I may have to revise my position on those grounds, but on the other hand that specific example would pretty much have to alter my entire epistemic landscape, so it's hard to measure the utility difference between the me who believes the lottery is a bad deal and the altered person who wins it. The second falls into the category I mentioned previously of things that increase my utility only as I find out they're wrong; when I arrive, I will find out that the train has already left. As for the third, I suspect that there isn't a neurological basis for such a thing to happen. If I believed differently, I would have a different position on the dilemma in the first place.
4thomblake
Regardless of whether the third one is plausible, I suspect Omega would know of some hack that is equally weird and unable to be anticipated.
0Endovior
A sensible thing to consider. You are effectively dealing with an outcome pump, after all; the problem leaves plenty of solution space available, and outcome pumps usually don't produce an answer you'd expect; they instead produce something that matches the criteria even better than anything you were aware of.
0thomblake
You can subtly change that example to eliminate that problem. Instead of actually missing the train, you just leave later and so run into someone who gives you a ride, and then you never go back and check when the train was.
0Desrtopa
The example fails the "that you would normally reject outright" criterion though, unless I already have well established knowledge of the actual train scheduling times.

Hm. If there is a strong causal relationship between knowing truths and utility, then it is conceivable that this is a trick: the truth, while optimized for disutility, might still present me with a net gain over the falsehood and the utility. But honestly, I am not sure I buy that: you can get utility from a false belief, if that belief happens to steer you in such a way that it adds utility. You can't normally count on that, but this is Omega we are talking about.

The 'other related falsehoods and rationalizing' part has me worried. The falsehood might ne... (read more)

0Endovior
That's why the problem specified 'long-term' utility. Omega is essentially saying 'I have here a lie that will improve your life as much as any lie possibly can, and a truth that will ruin your life as badly as any truth can; which would you prefer to believe?' Yes, believing a lie does imply that your map has gotten worse, and rationalizing your belief in the lie (which we're all prone to do to things we believe) will make it worse. Omega has specified that this lie has optimal utility among all lies that you, personally, might believe; being Omega, it is as correct in saying this as it is possible to be. On the other hand, the box containing the least optimal truth is a very scary box. Presume first that you are particularly strong emotionally and psychologically; there is no fact that will directly drive you to suicide. Even so, there are probably facts out there that will, if comprehended and internalized, corrupt your utility function, leading you to work directly against all you currently believe in. There's probably something even worse than that out there in the space of all possible facts, but the test is rated to your utility function when Omega first encountered you, so 'you change your ethical beliefs, and proceed to spend your life working to spread disutility, as you formerly defined it' is on the list of possibilities.
0asparisi
Interesting idea. That would imply that there is a fact out there that, once known, would change my ethical beliefs, which I take to be a large part of my utility function, AND would do so in such a way that afterward, I would assent to acting on the new utility function. But one of the things that Me(now) values is updating my beliefs based on information. If there is a fact that shows that my utility function is misconstrued, I want to know it. I don't expect such a fact to surface, but I don't have a problem imagining such a fact existing. I've actually lost things that Me(past) valued highly on the basis of this, so I have some evidence that I would rather update my knowledge than maintain my current utility function. Even if that knowledge causes me to update my utility function so as not to prefer knowledge over keeping my utility function. So I think I might still pick the truth. A more precise account for how much utility is lost or gained in each scenario might convince me otherwise, but I am still not sure that I am better off letting my map get corrupted as opposed to letting my values get corrupted, and I tend to pick truth over utility. (Which, in this scenario, might be suboptimal, but I am not sure it is.)

How one responds to this dilemma depends on how one values truth. I get the impression that while you value belief in truth, you can imagine that the maximum amount of long-term utility for belief in a falsehood is greater than the minimum amount of long-term utility for belief in a true fact. I would not be surprised to see that many others here feel the same way. After all, there's nothing inherently wrong with thinking this is so.

However, my value system is such that the value of knowing the truth greatly outweighs any possible gains you might have from... (read more)

6AlexMennen
I am skeptical. Do you spend literally all of your time and resources on increasing the accuracy of your beliefs, or do you also spend some on some other form of enjoyment?
1EricHerboso
Point taken. Yet I would maintain that belief in true facts, when paired with other things I value, is what I place high value on. If I pair those other things I value with belief in falsehoods, their overall value is much, much less. In this way, I maintain a very high value in belief of true facts while not committing myself to maximize accuracy like paper clips. (Note that I'm confabulating here; the above paragraph is my attempt to salvage my intuitive beliefs, and is not indicative of how I originally formulated them. Nevertheless, I'm warily submitting them as my updated beliefs after reading your comment.)
3Endovior
Okay, so if your utilities are configured that way, the false belief might be a belief you will encounter, struggle with, and get over in a few years, and be stronger for the experience. For that matter, the truth might be 'your world is, in fact, a simulation of your own design, to which you have (through carelessness) forgotten the control codes; you are thus trapped and will die here, accomplishing nothing in the real world'. Obviously an extreme example; but if it is true, you probably do not want to know it.
[-]Shmi00

SMBC comics has a relevant strip: would you take a pill to ease your suffering when such a suffering no longer serves any purpose? (The strip goes for the all-or-nothing approach, but anything milder than that can be gamed by a Murder-Gandhi slippery slope).

This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to still have those memories, taking the time and effort to come to terms with the trauma... but it would present much greater utility to be without them, and thus without the trauma altogether.

Deleting all proper memories of an event from the mind doesn't mean that you delete all of its traces.

An example from a physiology lecture I took at university: If you nearly get ... (read more)

I'll take the black box.

You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch.

Everett branches don't (necessarily) work like that. If 'you' are a person who systematically refuses to play such games then you just don't, no matter the branch. Sure, the Omega in a different branch may find a human-looking creature also called "Endovior" that plays such games but if it is a creature that has a fundamentally different decision algorithm then for the purpose of analyzing your decision alg... (read more)

0Endovior
Eh, that point probably was a bit weak. I probably could've just gotten away with saying 'you are required to choose a box'. Or, come to think of it, 'failure to open the white box and investigate its contents results in the automatic opening and deployment of the black box after X time'.
0wedrifid
Or, for that matter, just left it at "Omega will kill you outright". For flavor and some gratuitous additional disutility you could specify the means of execution as being beaten to death by adorable live puppies.
0mwengler
I observe that there have been many human utility functions where people would prefer to be killed rather than make the choices offered to them that would keep them alive. So if the intention in the problem is to get you to choose one of the boxes, offering the 3rd choice of being killed doesn't make sense.

It's a trivial example, but "the surprise ending to that movie you're about to see" is a truth that's generally considered to have disutility. ;)

0wedrifid
And a trivial falsehood that is likely to have positive utility: The price of shares in CompanyX will increase by exactly 350% in the next week. (When they will actually increase by 450%). Or lottery numbers that are 1 digit off.

Optimised for utility X sounds like something that would win in pretty much any circumstance. Optimised for disutility Y sounds like something that would lose in pretty much any circumstance. In combination, the answer is especially clear.

1mwengler
Given a choice between the largest utility less than 0 and the smallest utility greater than 1, would you still pick the largest? I think this is a trivial counterexample to your "in pretty much any circumstance" that turns it back into a live question for you.
-2Larks
It's not a counterexample: it's the reason I didn't say "in any circumstance." Nor does it turn it back into a live issue; it's equally obvious in the opposite direction.

By what sort of mechanism does a truth which will not be misleading in any way or cause me to lower the accuracy of any probability estimates nevertheless lead to a reduction in my utility? Is the external world unchanged, but my utility is lowered merely by knowing this brain-melting truth? Is the external world changed for the worse by differing actions of mine, and if so then why did I cause my actions to differ, given that my probability estimate for the false-and-I-already-disbelieved-it statement "these new actions will be more utility-optimal" did not become less accurate?

0Endovior
The problem is that truth and utility are not necessarily correlated. Knowing about a thing, and being able to more accurately assess reality because of it, may not lead you to the results you desire. Even if we ignore entirely the possibility of basilisks, which are not ruled out by the format of the question (eg: there exists an entity named Hastur, who goes to great lengths to torment all humans that know his name), there is also knowledge you/mankind are not ready for (plan for a free-energy device that works as advertised, but when distributed and reverse-engineered, leads to an extinction-causing physics disaster). Even if you yourself are not personally misled, you are dealing with an outcome pump that has taken your utility function into account. Among all possible universes, among all possible facts that fit the pattern, there has to be at least one truth that will have negative consequences for whatever you value, for you are not perfectly rational. The most benign possibilities are those that merely cause you to reevaluate your utility function, and act in ways that no longer maximize what you once valued; and among all possibilities, there could be knowledge which will do worse. You are not perfectly rational; you cannot perfectly foresee all outcomes. A being which has just proved to you that it is perfectly rational, and can perfectly foresee all outcomes, has advised you that the consequences of you knowing this information will be the maximum possible long-term disutility. By what grounds do you disbelieve it?
0roystgnr
"You are not perfectly rational" is certainly an understatement, and it does seem to be an excellent catch-all for ways in which a non-brain-melting truth might be dangerous to me... but by that token, a utility-improving falsehood might be quite dangerous to me too, no? It's unlikely that my current preferences can accurately be represented by a self-consistent utility function, and since my volition hasn't been professionally extrapolated yet, it's easy to imagine false utopias that might be an improvement by the metric of my current "utility function" but turn out to be dystopian upon actual experience. Suppose someone's been brainwashed to the point that their utility function is "I want to obey The Leader as best as I can" - do you think that after reflection they'd be better off with a utility-maximizing falsehood or with a current-utility-minimizing truth?
0Endovior
The problem does not concern itself with merely 'better off', since a metric like 'better off' instead of 'utility' implies 'better off' as defined by someone else. Since Omega knows everything you know and don't know (by the definition of the problem, since it's presenting (dis)optimal information based on its knowledge of your knowledge), it is in a position to extrapolate your utility function. Accordingly, it maximizes/minimizes for your current utility function, not its own, and certainly not some arbitrary utility function deemed to be optimal for humans by whomever. If your utility function is such that you hold the well-being of another above yourself (maybe you're a cultist of some kind, true... but maybe you're just a radically altruistic utilitarian), then the results of optimizing your utility will not necessarily leave you any better off. If you bind your utility function to the aggregate utility of all humanity, then maximizing that is something good for all humanity. If you bind it to one specific non-you person, then that person gets a maximized utility. Omega does not discriminate between the cases... but if it is trying to minimize your long-term utility, a handy way to do so is to get you to act against your current utility function. Accordingly, yes; a current-utility-minimizing truth could possibly be 'better' by most definitions for a cultist than a current-utility-maximizing falsehood. Beware, though; reversed stupidity is not intelligence. Being convinced to ruin Great Leader's life or even murder him outright might be better for you than blindly serving him and making him dictator of everything, but that hardly means there's nothing better you could be doing. The fact that there exists a class of perverse utility functions which have negative consequences for those adopting them (and which can thus be positively reversed) does not imply that it's a good idea to try inverting your utility function in general.

Can I inject myself with a poison that will kill me within a few minutes and THEN chose the falsehood?

0Endovior
Suicide is always an option. In fact, Omega already presented you with it as an option, the consequences for not choosing. If you would in general carry around such a poison with you, and inject it specifically in response to just such a problem, then Omega would already know about that, and the information it offers would take that into account. Omega is not going to give you the opportunity to go home and fetch your poison before choosing a box, though. EDIT: That said, I find it puzzling that you'd feel the need to poison yourself before choosing the falsehood, which has already been demonstrated to have positive consequences for you. Personally, I find it far easier to visualize a truth so terrible that it leaves suicide the preferable option.
0Armok_GoB
I never said I would do it, just curious.

I am worried about "a belief/fact in its class"; the class chosen could have an extreme effect on the outcome.

2Endovior
As presented, the 'class' involved is 'the class of facts which fits the stated criteria'. So, the only true facts which Omega is entitled to present to you are those which are demonstrably true, which are not misleading as specified, which Omega can find evidence to prove to you, and which you could verify yourself with a month's work. The only falsehoods Omega can inflict upon you are those which are demonstrably false (a simple test would show they are false), which you do not currently believe, and which you would disbelieve if presented openly. Those are fairly weak classes, so Omega has a lot of room to work with.
2Lapsed_Lurker
So, a choice between the worst possible thing a superintelligence can do to you by teaching you an easily-verifiable truth and the most wonderful possible thing by having you believe an untruth. That ought to be an easy choice, except maybe when there's no Omega and people are tempted to signal about how attached to the truth they are, or something.

I choose the truth.

Omega's assurances imply that I will not be in the valley of bad rationality mentioned later.

Out of curiosity, I also ask Omega to show me the falsehood, without the brain alteration, so I can see what I might have ended up believing.

0ArisKatsaris
I wonder if the mere use of Omega is tripping you up regarding this, or if perhaps it's the abstraction of "truth" vs "lie" rather than any concrete example. So here's an example, straight from a spy-sf thriller of your choice. You're a secret agent, conscripted against your will by a tyrannical dystopian government. Your agency frequently mind-scans you to see if you have revealed your true occupation to anyone, and then kills them to protect the secrecy of your work. They also kill anyone to whom you say that you can't reveal your true occupation, lest they become suspicious; the only allowed course of action is to lie plausibly. Your dear old mom asks "What kind of job did they assign you to, Richard?" Now, motivated purely by concern for her benefit, do you: a) tell her the truth, condemning her to die, or b) tell her a plausible lie, ensuring her continued survival?
-4Richard_Kennaway
I just don't find these extreme thought experiments useful. Parfit's Hitchhiker is a practical problem. Newcomb's Problem is an interesting conundrum. Omega experiments beyond that amount to saying "Suppose I push harder on this pan of the scales than you can push on the other. Which way will they go?" The question is trite and the answers nugatory. People talk about avoiding the hypothesis, but in these cases the hypothesis should be avoided. To accept it is already to have gone wrong.
0drethelin
If you're trying to find the important decision point in real situations, it can often be helpful to go to extremes just to establish that things are possible. I.e., if the best lie is preferred to the worst truth, that implies that some truths are worse than some lies, and you can start talking about how to figure out which. If you just start with the actual question, you get people who say "No, the truth is most important."
0Richard_Kennaway
Considering the extreme is only useful if the extreme is a realistic one -- if it is the least convenient possible world. (The meaning of "possible" in this sentence does not include "probability 1/3^^^3".) With extreme Omega-scenarios, the argument is nothing more than an outcome pump: you nail the conclusion you want to the ceiling of p=1 and confabulate whatever scenario is produced by conditioning on that hypothesis. The underlying structure is "Suppose X was the right thing to do -- would X be the right thing to do?", and the elaborate story is just a conjurer's misdirection. That's one problem.

A second problem with hypothetical scenarios, even realistic ones, is that they're a standard dark arts tool. A would-be burner of Dawkins' oeuvre presents a hypothetical scenario where suppressing a work would be the right thing to do, and triumphantly crows after you agree to it, "So, you do believe in censorship, we're just quibbling over which books to burn!" In real life, there's a good reason to be wary of hypotheticals: if you take them at face value, you're letting your opponent write the script, and you will never be the hero in it.
0ArisKatsaris
Harmful truths are not extreme hypotheticals, they're a commonly recognized part of everyday existence.

* You don't show photographs of your poop to a person who is eating - it would be harmful to the eater's appetite.
* You don't repeat to your children every little grievance you ever had with their other parent - it might be harmful to them.
* You don't need to tell your child that they're NOT your favourite child, either.

Knowledge tends to be useful, but there's no law in the universe that forces it to be always beneficial to you. You've not indicated any reason that it is so obliged to be in every single scenario.
0Richard_Kennaway
Since I have not claimed it to be so, it is completely appropriate that I have given no reasons for it to be so.
0wedrifid
Then you lose. You also maintain your ideological purity with respect to epistemic rationality. Well, perhaps not. You will likely end up with an overall worse map of the territory (given that the drastic loss of instrumental resources probably kills you outright rather than enabling your ability to seek out other "truth" indefinitely). We can instead say that at least you refrained from making a deontological violation against an epistemic rationality based moral system.
0Jay_Schweikert
Even if it's a basilisk? Omega says: "Surprise! You're in a simulation run by what you might as well consider evil demons, and anyone who learns of their existence will be tortured horrifically for 3^^^3 subjective years. Oh, and by the way, the falsehood was that the simulation is run by a dude named Kevin who will offer 3^^^3 years of eutopian bliss to anyone who believes he exists. I would have used outside-of-the-Matrix magic to make you believe that was true. The demons were presented with elaborate thought experiments when they studied philosophy in college, so they think it's funny to inflict these dilemmas on simulated creatures. Well, enjoy!"

If you want to say this is ridiculously silly and has no bearing on applied rationality, well, I agree. But that response pretty clearly meets the conditions of the original hypothetical, which is why I would trust Omega. If I somehow learned that knowing the truth could cause so much disutility, I would significantly revise upward my estimate that we live in a Lovecraftian horror-verse with basilisks floating around everywhere.
1wedrifid
3^^^3 units of simulated Kratom?
0Jay_Schweikert
Oops, meant to say "years." Fixed now. Thanks!
0wedrifid
I honestly didn't notice the missing word. I seem to have just read "units" as a default. My reference was to the long-time user by that name who does, in fact, deal in bliss of a certain kind.
0MixedNuts
Omega could create the demons when you open the box, or if that's too truth-twisting, before asking you.
-2Richard_Kennaway
That's the problem. The question is the rationalist equivalent of asking "Suppose God said he wanted you to kidnap children and torture them?" I'm telling Omega to just piss off.
0Endovior
The bearing this has on applied rationality is that this problem serves as a least convenient possible world for strict attachment to a model of epistemic rationality. Where the two conflict, you should probably prefer to do what is instrumentally rational over what is epistemically rational, because it's rational to win, not to complain that you're being punished for making the "right" choice. As with Newcomb's Problem, if you can predict in advance that the choice you've labelled "right" has less utility than a "wrong" choice, that implies that you have made an error in assessing the relative utilities of the two choices.

Sure, Omega's being a jerk. It does that. But that doesn't change the situation: you are being asked to choose between two options of differing utility, and you are being trapped into the option of lesser utility (indeed, vastly lesser utility) by nothing but your own "rationality". This implies a flaw in your system of rationality.
-3Richard_Kennaway
When the least convenient possible world is also the most impossible possible world, I find the exercise less than useful. It's like Pascal's Mugging. Sure, there can be things you're better off not knowing, but the thing to do is to level up your ability to handle it. The fact that however powerful you imagine yourself, you can imagine a more powerful Omega is like asking whether God can make a rock so heavy he can't lift it.
-1wedrifid
Leveling up is great, but I'm still not going to try to beat up an entire street gang just to steal their bling. I don't have that level of combat prowess right now, even though it is entirely possible to level up enough for that kind of activity to be possible and safe. It so happens that neither I nor any non-fictional human is at that level or likely to be soon. In the same way, there is a huge space of possible agents that would be able to calculate true information that it would be detrimental for me to have. For most humans, just another particularly manipulative human would be enough, and for all the rest, any old superintelligence would do.

No, this is a cop-out. Humans do encounter agents more powerful than themselves, including agents that are more intelligent and able to exploit human weaknesses. Just imagining yourself to be more powerful and more able to "handle the truth" isn't especially useful, and trying to dismiss all such scenarios as being like God combating his own omnipotence would be irresponsible.
-2Richard_Kennaway
Omega isn't showing up right now. No non-fictional Omega is at that level either.
-1wedrifid
Then it would seem you need to delegate your decision-theoretic considerations to those better suited to abstract analysis.

Does the utility calculation from the false belief include utility from the other beliefs I will have to overwrite? For example, suppose the false belief is "I can fly". At some point, clearly, I will have to rationalise away the pain of my broken legs from jumping off a cliff. Short of reprogramming my mind to really not feel the pain anymore - and then we're basically talking about wireheading - it seems hard to come up with any fact, true or false, that will provide enough utility to overcome that sort of thing.

I additionally note that the ma... (read more)

1AlexMennen
"I can fly" doesn't sound like a particularly high-utility false belief. It sounds like you are attacking a straw man. I'd assume that if the false information is a package of pieces of false information, then the entire package is optimized for being high-utility.
0RolfAndreassen
True, but that's part of my point: The problem does not specify that the false belief has high utility, only that it has the highest possible utility. No lower bound. Additionally, any false belief will bring you into conflict with reality eventually. "I can fly" just illustrates this dramatically.
6AlexMennen
Of course there will be negative-utility results of most false beliefs. This does not prove that all false beliefs will be net negative utility. The vastness of the space of possible beliefs should suggest that there are likely to be many approximately harmless false ones, and some very beneficial ones, despite the tendency for false beliefs to be negative utility. In fact, Kindly gives an example of each here. In the example of believing some sufficiently hard to factor composite to be prime, you would not naturally be able to cause a conflict anyway, since it is too hard to show that it is not prime. In the FAI example, it might have to keep you in the dark for a while and then fool you into thinking that someone else had created an FAI separately so you wouldn't have to know that your game was actually an FAI. The negative utility from this conflict resolution would be negligible compared to the benefits. The negative utility arising from belief conflict resolution in your example of "I can fly" does not even come close to generalizing to all possible false beliefs.
0Endovior
As written, the utility calculation explicitly specifies 'long-term' utility; it is not a narrow calculation. This is Omega we're dealing with; it's entirely possible that it mapped your utility function by scanning your brain, checked all possible universes forward in time from the addition of each possible fact to your mind, and took the worst and best true/false combination. Accordingly, a false belief that will lead you to your death or maiming is almost certainly non-optimal. No, this is the one false thing that has the best long-term consequences for you, as you value such things, out of all the false things you could possibly believe.

True, the maximum utility/disutility has no lower bound. This is intentional. If you really believe that your position is such that no true information can hurt you, and/or no false information can benefit you, then you could take the truth. This is explicitly the truth with the worst possible long-term consequences for whatever it is you value. Yes, it's pretty much defined as a sucker bet, implying that Omega is attempting to punish people for believing that there is no harmful true information and no advantageous false information. If you did, in fact, believe that you couldn't possibly gain by believing a falsehood, or suffer from learning a truth, this is the least convenient possible world.

The parallels with Newcomb's Paradox are obvious, and the moral is the same. If you aren't prepared to sacrifice a convenient axiom for greater utility, you're not really rational. In the case of Newcomb's Paradox, that axiom is Dominance. In this case, that axiom is True Knowledge Is Better Than False Knowledge.

In this instance, go for the falsehood.