According to Eliezer, there are two types of rationality. There is epistemic rationality, the process of updating your beliefs based on evidence so that they correspond to the truth (or reality) as closely as possible. And there is instrumental rationality, the process of making choices so as to maximize your future utility. These two slightly conflicting definitions work together most of the time, since obtaining the truth is the rationalist's ultimate goal and thus yields the maximum utility. But are there ever times when the truth is not in a rationalist's best interest? Are there scenarios in which a rationalist should actively avoid the truth in order to maximize their possible utility? I have been struggling with these questions for a while. Let me propose a scenario to illustrate the conundrum.
Suppose Omega, a supercomputer, comes down to Earth to offer you a choice. Option 1 is to live in a simulated world where you have infinite utility (in this world there is no pain, suffering, or death; it's basically a perfect world) and you are unaware you are living in a simulation. Option 2 is that Omega will truthfully answer one question on absolutely any subject pertaining to our universe, with no strings attached. You can ask about the laws governing the universe, the meaning of life, the origin of time and space, whatever, and Omega will give you an absolutely truthful, knowledgeable answer. Now, assuming all of these hypotheticals are true, which option would you pick? Which option should a perfect rationalist pick? Does the potential of asking a question whose answer could greatly improve humanity's knowledge of our universe outweigh the benefits of living in a perfect simulated world with unlimited utility?

There are probably a lot of people who would object outright to living in a simulation because it's not reality or the truth. Well, let's consider the simulation in my hypothetical conundrum for a second. It's a perfect reality with unlimited utility potential, and you are completely unaware you are in a simulation while in that world. Aside from the unlimited utility part, that sounds a lot like our reality. There are no signs of our reality being a simulation, and all (or most) of humanity is convinced that our reality is not a simulation. Therefore, the only difference that really matters between the simulation in Option 1 and our reality is the unlimited utility potential that Option 1 offers. If there is no evidence that a simulation is not reality, then the simulation is reality for the people inside it. That is what I believe, and that is why I would choose Option 1. The infinite utility of living in a perfect reality outweighs almost any increase in utility I could contribute to humanity.
I am very interested in which option the Less Wrong community would choose (I know Option 2 is kind of arbitrary; I just needed an option for people who wouldn't want to live in a simulation). As this is my first post, any feedback or criticism is appreciated. Also, any further reading on the topic of truth vs. utility would be very helpful. Feel free to downvote me to oblivion if this post was stupid, didn't make sense, etc. It was simply an idea that I found interesting and wanted to put into writing. Thank you for reading.
Here is my attempt to rephrase Vladimir's comment:
Consider a possible world W that someone could simulate, but which, in fact, no one ever will simulate. An agent A can still care about what happens in W. The agent could even try to influence what happens in W acausally.
A natural rejoinder is: how is A going to influence W unless A itself simulates W? How else can A play out the acausal consequences of its choices?

The reply is that A can have some idea of what happens in W without reasoning about W in so fine-grained a way as to deserve the word "simulation". Coarse-grained reasoning could still suffice for A to influence W.
For example, recall Vladimir's counterfactual mugging:
Now consider a variant in which, in the counterfactual heads world, instead of giving you $10,000, Omega would have given you an all-expenses-paid month-long vacation to the destination of your choice.
You don't need to simulate all the details of how that vacation would have played out. You don't even need to simulate where you would have chosen to go. (And let us assume that Omega also never simulates any of these things.) Even if no such simulations ever run, you might still find the prospect of counterfactual-you getting that vacation so enticing that you give Omega the $100 in the actual tails world.
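To make that concrete, here is a minimal sketch (not from the original comment; the dollar figures for the vacation's worth are purely illustrative assumptions) showing how a rough range for the counterfactual vacation's value, with no simulation of the trip at all, is already enough to compare the "pay" policy against the "refuse" policy:

```python
# A minimal sketch of coarse-grained reasoning in the vacation variant of
# counterfactual mugging. All dollar figures are illustrative assumptions;
# the point is only that a rough range, not a simulation, settles the choice.

P_HEADS = 0.5            # Omega's fair coin
COST_OF_PAYING = 100     # what you hand over in the actual tails world

# Coarse estimate of the counterfactual vacation's worth: you never work out
# where you would have gone or how the month would have played out, you just
# bound its value somewhere between "nice" and "amazing".
VACATION_LOW, VACATION_HIGH = 3_000, 15_000


def expected_value_of_policy(pays: bool, vacation_value: float) -> float:
    """Expected value of committing, before the coin flip, to pay or refuse."""
    if pays:
        # Heads: counterfactual-you gets the vacation (because Omega predicts
        # you would pay). Tails: you actually hand over the $100.
        return P_HEADS * vacation_value - (1 - P_HEADS) * COST_OF_PAYING
    # Refusing gains and costs nothing in either branch.
    return 0.0


for value in (VACATION_LOW, VACATION_HIGH):
    print(value,
          expected_value_of_policy(True, value),
          expected_value_of_policy(False, value))
# Even at the low end of the range, paying comes out ahead of refusing
# (1450 vs. 0), so the coarse estimate is enough to fix the decision.
```

Under these assumed numbers, any value in the estimated range leaves the "pay" policy ahead, which is the sense in which coarse-grained reasoning suffices to influence the counterfactual world.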