According to Eliezer, there are two types of rationality. There is epistemic rationality, the process of updating your beliefs based on evidence so that they correspond to the truth (or reality) as closely as possible. And there is instrumental rationality, the process of making choices in order to maximize your future utility. These two slightly conflicting definitions work together most of the time, since obtaining the truth is the rationalist's ultimate goal and thus usually yields the maximum utility. Are there ever times when the truth is not in a rationalist's best interest? Are there scenarios in which a rationalist should actively try to avoid the truth to maximize their possible utility? I have been mentally struggling with these questions for a while. Let me propose a scenario to illustrate the conundrum.
Suppose Omega, a supercomputer, comes down to Earth to offer you a choice. Option 1 is to live in a simulated world where you have infinite utility (in this world there is no pain, suffering, or death; it's basically a perfect world) and you are unaware you are living in a simulation. Option 2 is that Omega will truthfully answer one question about absolutely any subject pertaining to our universe, with no strings attached. You can ask about the laws governing the universe, the meaning of life, the origin of time and space, whatever, and Omega will give you an absolutely truthful, knowledgeable answer. Now, assuming all of these hypotheticals are true, which option would you pick? Which option should a perfect rationalist pick? Does the potential of asking a question whose answer could greatly improve humanity's knowledge of our universe outweigh the benefits of living in a perfect simulated world with unlimited utility?

There are probably a lot of people who would object outright to living in a simulation because it's not reality or the truth. Well, let's consider the simulation in my hypothetical conundrum for a second. It's a perfect reality with unlimited utility potential, and you are completely unaware you are in a simulation in this world. Aside from the unlimited utility part, that sounds a lot like our reality. There are no signs of our reality being a simulation, and all (most) of humanity is convinced that our reality is not a simulation. Therefore, the only difference that really matters between the simulation in Option 1 and our reality is the unlimited utility potential that Option 1 offers. If there is no evidence that a simulation is not reality, then the simulation is reality for the people inside the simulation. That is what I believe, and that is why I would choose Option 1. The infinite utility of living in a perfect reality outweighs almost any utility increase I could contribute to humanity.
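To put that last comparison in rough expected-utility terms (just a sketch, and it leans entirely on the hypothetical's stipulation of unlimited utility):

$$U(\text{Option 1}) = \infty \qquad U(\text{Option 2}) = p \cdot \Delta U_{\text{humanity}},$$

where $p$ is the chance the answer actually gets put to good use and $\Delta U_{\text{humanity}}$ is whatever finite utility boost it provides; under that stipulation, Option 1 dominates any finite contribution.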
I am very interested in which option the Less Wrong community would choose (I know Option 2 is kind of arbitrary; I just needed an option for people who wouldn't want to live in a simulation). As this is my first post, any feedback or criticism is appreciated. Also, any more information on the topic of truth vs. utility would be very helpful. Feel free to downvote me to oblivion if this post was stupid, didn't make sense, etc. It was simply an idea that I found interesting that I wanted to put into writing. Thank you for reading.
A consequentialist agent makes decisions based on the effects they have, as depicted in its map. Different agents may use different maps that describe different worlds, or rather more abstract considerations about worlds that don't pin down any particular world. Which worlds appear on an agent's map determines which worlds matter to it, so it seems natural to consider the relevance of such worlds an aspect of the agent's preference.
The role played by these worlds in an idealized agent's decision-making doesn't require them to be "real", simulated in a "real" world, or even logically consistent. Anything would do for an agent with the appropriate preference; the properties of an impossible world may well matter more than what happens in the real world.
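To make the abstraction concrete, here is a toy sketch (nothing from the original comment; the names, weights, and payoffs are made up) of an agent whose map, and the weights it assigns to the worlds on it, functions as part of its preference:

```python
# Toy consequentialist agent: it evaluates an action by its effects across
# whatever worlds appear on its map, weighted however its preference says.
# Nothing here requires those worlds to be "real" or even consistent.

def choose(actions, map_of_worlds, utility):
    """Return the action with the highest map-weighted utility."""
    def value(action):
        return sum(weight * utility(world, action)
                   for world, weight in map_of_worlds.items())
    return max(actions, key=value)

# Illustrative example: a map that puts some weight on a simulated world.
map_of_worlds = {"real": 0.9, "simulated": 0.1}
payoffs = {"real": {"a": 1, "b": 0}, "simulated": {"a": 0, "b": 5}}
utility = lambda world, action: payoffs[world][action]

print(choose(["a", "b"], map_of_worlds, utility))  # -> "a"
```

Which worlds get nonzero weight, and how much, is exactly the "which worlds matter" question from the paragraph above.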
You called attention to the idea that a choice apparently between an effect on the real world and an effect on a simulated world may instead be a choice between effects in two simulated worlds. Why is it relevant whether a certain world is "real" or simulated? In many situations that come up in thought experiments, simulated worlds matter less because they have less measure, in the same way that an outcome predicated on a thousand coins all falling the same way matters less than what happens in all the other cases combined. Following reasoning similar to expected utility considerations, you would be primarily concerned with the outcomes other than the thousand-tails one; and for the choice between influence in a world that might be simulated as a result of an unlikely collection of events and influence in the real world, you would be primarily concerned with influence in the real world. So finding out that the choice is instead between two simulated worlds may matter a great deal, shifting the focus of attention from the real world (now unavailable, not influenced by your decisions) to both of the simulated worlds, a priori expected to be similarly valuable.
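As a rough illustration of the measure point (a sketch, assuming a simulated world's weight is simply the probability of the events that bring it about): if a world only gets simulated when a thousand fair coins all land tails, the relevant expected utility splits as

$$2^{-1000} \cdot U(\text{thousand-tails world}) + \left(1 - 2^{-1000}\right) \cdot U(\text{all other outcomes}),$$

so the thousand-tails term is negligible unless its utility is astronomically larger than everything else's.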
My point was that the next step in this direction is to note that being simulated in an unlikely manner, as opposed to not even being simulated, is not obviously an important distinction. At some point the estimate of moral relevance may fail to remain completely determined by how a world (as a theoretical construct giving semantics to the agent's map, or the agent's preference) relates to some "real" world. At that point, discussing contrived mechanisms that give rise to the simulation may become useless as an argument about which worlds have how much moral relevance, even if we grant that the worlds closer to the real world in their origin are much more important in human preference.