Eliezer_Yudkowsky comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

Post author: MichaelGR 11 November 2009 03:00AM

Comment author: Eliezer_Yudkowsky 16 November 2009 08:01:48PM 2 points

> If rerunning the clock produces radically different moralities each time, the relativists would be considered correct.
>
> If rerunning the clock produces highly similar moralities, then the moral objectivists would be able to declare victory.

Why should we care about this mere physical fact of which you speak? What has this mere "is" to do with whether "should" is "objective", whatever that last word means (and why should we care about that)?

Comment author: Tyrrell_McAllister 16 November 2009 08:23:35PM 1 point

> Why should we care about this mere physical fact of which you speak?

Where did Tim say that we should?

Comment author: Eliezer_Yudkowsky 16 November 2009 08:27:48PM 0 points

If it's got nothing to do with shouldness, then how does it determine the truth-value of "moral objectivism"?

Comment author: timtyler 16 November 2009 09:20:10PM 0 points

Hi, Eli! I'm not sure I can answer directly - here's my closest shot:

If there's a kind of universal moral attractor, then the chances seem pretty good that either our civilisation is en route to it - or else we will be obliterated or assimilated by aliens or other agents as they home in on it.

If it's us who are en route to it, then we (or at least our descendants) will probably be sympathetic to the ideas it represents - since they will have evolved from our own moral systems.

If we get obliterated at the hands of some other agents, then there may not necessarily be much of a link between our values and the ones represented by the universal moral attractor.

Our values might be seen as OK by the rest of the universe - and we might fail for other reasons.

Or our morals might not be favoured by the universe - we could be a kind of early negative moral mutation - in which case we would fail because our moral values would prevent us from succeeding.

Comment author: Eliezer_Yudkowsky 16 November 2009 09:26:08PM 2 points

Maybe it turns out that nearly all biological organisms except us prefer to be orgasmium - to bliss out on pure positive reinforcement, as much of it as possible, caretaken by external AIs, until the end. Let this be a fact in some inconvenient possible world. Why does this fact say anything about morality in that inconvenient possible world? Why is it a universal moral attractor? Why not just call it a sad but true attractor in the evolutionary psychology of most aliens?

Comment author: timtyler 16 November 2009 09:34:57PM 0 points

It's a fact about morality in that world - if we are talking about morality as values, or the study of values - since that's what a whole bunch of creatures value.

Why is it a universal moral attractor? I don't know - this is your hypothetical world, and you haven't told me enough about it to answer questions like that.

Call it other names if you prefer.

Comment author: Eliezer_Yudkowsky 16 November 2009 09:55:14PM 1 point

What do you mean by "morality"? It obviously has nothing to do with the function I try to compute to figure out what I should be doing.

Comment author: timtyler 16 November 2009 10:13:49PM 1 point

Definitions 1, 2, and 3 on http://en.wikipedia.org/wiki/Morality all seem OK to me.

I would classify the mapping you use between possible and actual actions as one type of moral system.