To those who say "Nothing is real," I once replied, "That's great, but how does the nothing work?"
Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
Devastating news, to be sure—and no, I am not telling you this in real life. But suppose I did tell it to you. Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place. Suppose I proved that all utilities equaled zero.
I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.
Would you still tip cabdrivers? Would you cheat on your Significant Other? If a child lay fainted on the train tracks, would you still drag them off?
Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?
Would you wear black and write gloomy poetry and denounce all altruists as fools? But there's no reason you should do that—it's just a cached thought.
Would you stay in bed because there was no reason to get up? What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?
Would you go on reading Overcoming Bias, and if not, what would you read instead? Would you still try to be rational, and if not, what would you think instead?
Close your eyes, take as long as necessary to answer:
What would you do, if nothing were right?
Michael Vassar, I read that and laughed and said, "Oh, great, now I've got to run the thought experiment again in this new version."
Albeit I would postulate that, on every occasion, the FAI underwent the water-flowing-downhill automatic shutdown that was engineered into it, with the stop code "desirability differentials vanished".
The responses that occurred to me - and yes, I had to think about it for a while - would be as follows:
*) Peek at the code. Figure out what happened. Go on from there.
Assuming we don't allow that (and it's not in the spirit of the thought experiment), then:
*) Try running the FAI at simpler extrapolations until it preserves desirability; stop worrying about anything that was in the desirability-killing extrapolations. So if being "more the people we wished we were" was the desirability-killer, then I would stop worrying about that, and update my morality accordingly.
*) Transform myself to something with a coherent morality.
*) Proceed as before, but with a shorter-term focus on when my life's goals are to be achieved, thinking less about the far future - as if you had told me that, no matter what, I had to die before a thousand years were up.