(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don't take it too seriously.)
Meditate on this: A wizard has turned you into a whale. Is this awesome?
"Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that's not a whale.
"Actually, a whale seems kind of specific, and I'd be suprised if that was the best thing the wizard can do. Can I have something else? Eternal happiness maybe?"
Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?
...
"Kindof... That's pretty lame actually. On second thought I'd rather be the whale; at least that way I could explore the ocean for a while.
"Let's try again. Wizard: maximize awesomeness."
Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and Jupiter brains and friendship, but only if they are awesome enough. Is this awesome?
...
"Well, yes, that is awesome."
What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.
(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can't quite figure out. I'll tell you right now that those terms are not on the exam.)
"Wait a minute!" you cry, "What is this awesomeness stuff? I thought ethics was about what is good and right."
I'm glad you asked. I think "awesomeness" is what we should be talking about when we talk about morality. Why do I think this?
- "Awesome" is not a philosophical landmine. If someone encounters the word "right", all sorts of bad philosophy and connotations send them spinning off into the void. "Awesome", on the other hand, has no philosophical respectability, hence no philosophical baggage.

- "Awesome" is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.

- If you do manage to actually implement "awesomeness" as a maximization criterion, the results will be actually good. That is, "awesome" already refers to the same things "good" is supposed to refer to.

- "Awesome" does not refer to anything else. You think you can just redefine words, but you can't, and this causes all sorts of trouble for people who overload "happiness", "utility", etc.

- You already know that you know how to compute "awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyway. You are already able to take joy in the merely awesome.

- "Awesome" is implicitly consequentialist. "Is this awesome?" engages you to think of the value of a possible world, as opposed to "Is this right?", which engages you to think of virtues and rules. (Those things can be awesome sometimes, though.)
I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.
I would append the additional facts that if you wrote it out, the procedure to compute awesomeness would be hellishly complex, and that right now, it is only implicitly encoded in human brains, and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.
Also, it's important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.
If we still insist on being confused, or if we're just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), then we can see the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read, if you read only one) is "Joy in the Merely Good".
Don't we have to do it (lying to people) because we value other people being happy? I'd rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, I'm not about to wirehead you, though.)
Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can't imagine intense achievement; if I just got the surge of warmth I normally get, it would feel wrong, removed from flying a spaceship. Yet that doesn't mean I don't have an achievement slider to max; it just means I can't imagine what maxing it indefinitely would feel like. Maxing the slider leading to hallucinations about performing activities related to achievement seems too roundabout; really, that's the only thing I can say: it feels like it won't work that way. Can the pill satisfy terminal values without making me think I satisfied them? I think this question shows that the sentence before it is just me being confused. Yet I can't imagine how an awesomeness pill would feel, hence I can't dispel this annoying confusion.
[EDIT] Maybe a pill that simply maxes the sliders would make me feel achievement, but without flying a spaceship, making it incomplete and forcing the AI to include a spaceship hallucinator. I think I am (or was) making it needlessly complicated. In any case, the general idea is that if we are all opposed to just feeling intense pleasure without all the other stuff we value, then a pill that gives us only intense pleasure is flawed and would not even be given as an option.
Regarding the first bit... well, we have a few basic choices:
If I'm understanding your scenario properly, we don't want to do the first because it leaves more people worse off, and we don't want to do the last because it leaves us worse off. (Why our valuing other people being happy should be more important than their valuing actually helping people, I don't know, but I'll accept that it is.)
But why, on your view, ought we lie to them, rather than change them?