To those who say "Nothing is real," I once replied, "That's great, but how does the nothing work?"
Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.
Devastating news, to be sure—and no, I am not telling you this in real life. But suppose I did tell it to you. Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place. Suppose I proved that all utilities equaled zero.
I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.
Would you still tip cabdrivers? Would you cheat on your Significant Other? If a child lay fainted on the train tracks, would you still drag them off?
Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?
Would you wear black and write gloomy poetry and denounce all altruists as fools? But there's no reason you should do that—it's just a cached thought.
Would you stay in bed because there was no reason to get up? What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?
Would you go on reading Overcoming Bias, and if not, what would you read instead? Would you still try to be rational, and if not, what would you think instead?
Close your eyes, take as long as necessary to answer:
What would you do, if nothing were right?
I wonder whether Eliezer is planning to say that morality is just an extrapolation of our own desires. If so, then my morality would be an extrapolation of my desires, and your morality would be an extrapolation of yours. This is disturbing, because if our extrapolated desires don't turn out to be EXACTLY the same, something might be immoral for me to do yet moral for you, or moral for me and immoral for you.
If so, then if I programmed an AI, I would be morally obligated to program it to extrapolate my personal desires, i.e. my own desires, not the desires of the human race. So Eliezer would be deceiving us about FAI: his intention is to extrapolate his personal desires, since he is morally obligated to do so. Maybe someone should stop him before it's too late?