Unknowns comments on Strong moral realism, meta-ethics and pseudo-questions. - Less Wrong

18 [deleted] 31 January 2010 08:20PM


Comments (172)


Comment author: Eliezer_Yudkowsky 01 February 2010 01:26:08AM *  1 point

...sounds mostly good so far. Except that there's plenty of justification for thinking about morality besides "it's something we happen to think about". They're just... well... there's no other way to put this... perfectly valid, moving, compelling, heartwarming, moral justifications. They're actually better justifications than being compelled by some sort of ineffable transcendent compellingness stuff - if I've got to respond to something, those are just the sort of (logical) facts I'd want to respond to! (I think this may be the part Roko still doesn't get.) Also, the "lucky causal history" isn't luck at all, of course.

It's also quite possible that human beings, from time to time, are talking about different subject matters when they have what looks like a moral disagreement; but this is a rather drastic assumption to make in our current state of ignorance, and I feel that a sort of courtesy should be extended, to the extent of hearing out each other's arguments and proceeding on the assumption that we actually are disagreeing about something.

Comment author: Unknowns 01 February 2010 07:28:36AM 7 points

Eliezer, I don't understand how you can say that the "lucky causal history" wasn't luck, unless you also say "if humans had evolved to eat babies, babyeating would have been right."

If it wouldn't have been right even in that event, then it took a stupendous amount of luck for us to evolve in just such a way that we care about things that are right, instead of other things.

Either that or there is a shadowy figure.

Comment author: aleksiL 01 February 2010 04:43:14PM 2 points

As I understand Eliezer's position, when babyeater-humans say "right", they actually mean babyeating. They'd need a word like "babysaving" to refer to what's right.

Morality is what we call the output of a particular algorithm instantiated in human brains. If we instantiated a different algorithm, we'd have a word for its output instead.

I think Eliezer sees translating the babyeater word for babyeating as "right" as an error similar to translating their word for babyeaters as "human".

Comment author: Unknowns 01 February 2010 05:04:36PM 3 points

Precisely. So it was luck that we instantiated this algorithm, instead of a different one.