What's so bad about morality being a mere human construct - in other words, the notion that there is no "stone tablet" of morals? In fact, I think the notion that morality exists objectively, like some looming Platonic form, raises more questions than it answers.
I think the best way to construe this "morality" is just to say that it has a quasi-mathematical existence: it's axiomatic, and it's augmented by empirical and logical reasoning.
Why accept it, why be moral? I feel the same way about this question as I do about the question of why somebody who believes "if A, then B," and also believes that A, should also believe that B.
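To make the analogy concrete, here's a minimal sketch of that very inference in Lean (my own illustrative formalization, not anything from the thread). Once you accept the premises, the conclusion isn't optional; the proof checker simply won't let you refuse it.

```lean
-- Modus ponens as a one-line proof: given h : A → B and a : A,
-- the conclusion B follows mechanically by applying h to a.
example (A B : Prop) (h : A → B) (a : A) : B := h a
```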
"Torture is a relative morality, as such, when a subculture like an intelligence agency tortures a terrorist, then it is allowed and it is moral. Any moral 'critique' of the torture is tantamount to a universal moralist rule: Torture is universally bad."
Torture is universally bad, except when overridden by hierarchically superior imperatives.
"On the other hand, if morality is defined as "the way people make decisions", then of course everybody is moral and morality exists."
It's more like "the way people ought to make (certain sorts of) decisions". Morality doesn't describe the way people do act; it prescribes the way they should act (in situations with certain variables).
"I hope Eliezer is trying to demonstrate the absurdity of believing in objective morality, if so, then good luck!"
Perhaps. I think he believes in a sort of "objective morality" - that is, a morality which is distinct from arbitrary beliefs and preferences. That's different from saying that morality really exists, that we can find it somewhere, that it's divine, or part of the natural universe. It's not real, in that sense. It's a human construct - but that doesn't mean it's not objective. Math is a human construct, but that's not to say it's arbitrary, or that it's not objective.
To Eliezer's query: I would want to be able to live forever, but only for as long as I chose. (I would have to retain the power to end it.)
I think what you've done here is examine one horn of the Euthyphro dilemma (a classic challenge to Divine Command Theory: is it right because God commands it, or does God command it because it's right?).
If it's "right" because God commands it, then conceivably he could command that killing a baby is right (and apparently did so in the Bible). The devout either have to bite this bullet (say that infanticide really becomes moral if God commands it) or dodge it - "God is good, he would never command such a thing" (but in doing so they concede that God is adhering to a standard outside of himself).
If it did come about that I needed to kill a baby - morally needed - then I would. But while God could pick and choose any moral rules he wants, killing a baby is something My-Moral-Theory is unlikely ever to require.
One theory (The Matrix spawned a lot of philosophy talk, and even books) was that, unbeknownst to the machines themselves, they couldn't simply kill off the humans - for ethical reasons. There are obviously more efficient ways to generate energy, but the robots couldn't bring themselves to kill off their creators, so they came up with this elaborate scheme of harvesting energy from human bodies, never thinking much about how the process was actually a net loss of energy.