Response to: Forcing Anthropics: Boltzmann Brains by Eliezer Yudkowsky
There is an argument that goes like this: for all you know, you are a brain in a jar, fed artificial sensory input by some experimenter, and nothing in your experience could ever prove otherwise.
This argument has been reformulated many times. For example, here is the "Future Simulation" version of the argument: a sufficiently advanced future civilization could run vast numbers of simulations of minds like yours, so simulated minds would far outnumber physical ones, and you are almost certainly one of the simulations.
Here is the "Boltzmann Brain" version of the argument: in a universe that lasts long enough, random fluctuations will assemble far more short-lived brains with your exact memories than evolution ever will, so you are almost certainly one of those momentary fluctuations.
All of these are the same possibility. And you know what? All of them are potentially true. I could be a brain in a jar, or a simulation, or a Boltzmann brain. And I have no way of calculating the probability of any of this, because it involves priors that I can't even begin to guess.
So how am I still functioning?
My optimization algorithm follows a very simple rule: when considering possible states of the universe, if my choice of action makes no difference to my utility in a given state S, then I can safely ignore the possibility of S when deciding what to do.
For example, suppose I am on a runaway train that is about to go over a cliff. I have a button marked "eject" and a button marked "self-destruct painfully". An omniscient, omnitruthful being named Omega tells me: "With 50% probability, both buttons are fake and you're going to go over the cliff and die no matter what you do." I can safely ignore this possibility because, if it were true, I would have no way to optimize for it.
Suppose Omega tells me there's actually a 99% probability that both buttons are fake. Maybe I'm pretty sad about this, but the 99% of worlds where the buttons are fake contribute equally to every choice I could make, so the comparison comes down to the remaining 1%: the "eject" button is still good for my utility and the "self-destruct" button is still bad.
Suppose Omega now tells me there's some chance the buttons are fake, but I can't estimate the probability, because it depends on my prior assumptions about the nature of the universe. Still don't care! Still pushing the eject button!
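To make the pruning rule concrete, here is a minimal sketch in Python (the state names and utility numbers are made up for illustration). States in which my choice makes no difference add the same amount to every action's expected utility, so dropping them can never change which action comes out on top:

```python
def best_action(actions, states, prob, utility):
    """Pick the action with the highest expected utility over the given states."""
    return max(actions, key=lambda a: sum(prob[s] * utility[s][a] for s in states))

actions = ["eject", "self_destruct"]

# Omega's 99% scenario: one state where the buttons work, one where both are
# fake and I go over the cliff no matter what I press.
prob = {"buttons_work": 0.01, "buttons_fake": 0.99}
utility = {
    "buttons_work": {"eject": 100,  "self_destruct": -100},
    "buttons_fake": {"eject": -100, "self_destruct": -100},  # my choice is irrelevant here
}

# Decide with the "fake buttons" state included...
with_fake = best_action(actions, prob.keys(), prob, utility)

# ...and with every action-irrelevant state pruned away.
relevant = [s for s in prob if len({utility[s][a] for a in actions}) > 1]
without_fake = best_action(actions, relevant, prob, utility)

print(with_fake, without_fake)  # both print "eject": pruning changed nothing
```

Notice that the pruned comparison doesn't even need Omega's 99% figure; it only has to rank my actions within the states where the ranking can differ.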
That is how I feel about the brain-in-a-jar problem.
The good news is that this pruning heuristic will probably be a part of any AI we build. In fact, early forms of AI will probably need to use much stronger versions of this heuristic if we want to keep them focused on the task at hand. So there is no danger of AIs having existential Boltzmann crises. (Although, ironically, they actually are brains-in-a-jar, for certain definitions of that term...)