What must a sane person[1] think about religion? The naive first approximation is "religion is crap". But consider the following:
Humans are imperfectly rational creatures. Among our faults, we are not psychologically capable of acting fully in accordance with our values. We can, for example, suffer burnout if we push ourselves too hard.
It is thus important to consider which psychological habits and choices help us work as diligently for our values as we want to (while remaining mentally healthy). It is a theoretical possibility, a hypothesis that could be studied experimentally, that the optimal[2] psychological choices include embracing some form of Faith, i.e. beliefs not resting on logical proof or material evidence.
In other words, our values might imply that Occam's Razor should be rejected in some cases, since embracing it could cost us opportunities to psychologically shape ourselves into more of what we want to be.
To a person aware of the Simulation Argument, the above suggests interesting corollaries:
- Running ancestor simulations is the ultimate tool for finding out what form of Faith (if any) is most conducive to our living according to our values.
- If there is a Creator and we are in fact currently in a simulation being run by that Creator, it would have been rather humorous of them to create our world such that the above method would yield "knowledge" of their existence.
[1]: Actually, what I've written here assumes we are talking about humans. Persons-in-general may be psychologically different, and theoretically capable of perfect rationality.
[2]: At least for some individuals, not necessarily all.
I didn't downvote this post, but I can't say I endorse seeing more posts like it. The concept of this post is one of the least interesting in a huge conceptspace of decision theory problems, especially decision theory problems in an ensemble universe. To focus on 'having faith' and 'rationality' in particular might seem clever, but it fails to illuminate in the same way that e.g. Nesov's counterfactual mugging does. When you start thinking about various things simulators might do, you're probably wrong about how much measure is going to be taken up by any given set of simulations. Especially so once you consider that a superintelligence is extremely likely to occur before brain emulations and that a superintelligence is almost assuredly not going to be running simulations of the kind you specify.
Instead of thinking "What kind of scenario involving simulations could I post to Less Wrong and still be relevant?", as it seems to me you did, it would be much better to ask the more purely curious question "What is the relative power of optimization processes that would cause universes that include agents in my observer moment reference class to find themselves in a universe that looks like this one instead of some other universe?" Asking this question has led me to some interesting insights, and I imagine it would interest others as well.
Actually, I didn't try to be relevant or interesting to the LW community. I'm just currently genuinely very interested in the kinds of questions this post was about, and selfishly thought I'd get very useful criticism and comments if I posted here like this (as indeed happened).
Getting downvoted so much is something I for some reason enjoy very much :) It probably has to do with my thinking that, while there are very valid points on which my post and my decision to post it can be criticized, I suspect that instead of thinking of those valid reasons ...