Furcas comments on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions

Post author: MichaelGR 11 November 2009 03:00AM

Comment author: Furcas 16 November 2009 07:36:25PM

"Universal values" presumably refers to values the universe will converge on, once living systems have engulfed most of it.

If rerunning the clock produces radically different moralities each time, the relativists would be considered to be correct.

If rerunning the clock produces highly similar moralities, then the moral objectivists will be able to declare victory.

Yeah, but Stefan's post was about AI, not about minds that evolved in our universe.

Also, there is a difference between moral universalism and moral objectivism. What your last sentence describes is universalism, while Stefan is talking about objectivism:

"My claim is that compassion is a universal rational moral value. Meaning any sufficiently rational mind will recognize it as such."

"The idea that physics makes no mention of morality seems totally and utterly irrelevant to me. Physics makes no mention of convection, diffusion-limited aggregation, or fractal drainage patterns either, yet those things are all universal."

Agreed.

Comment author: timtyler 16 November 2009 07:45:10PM

Assuming that I'm right about this:

http://alife.co.uk/essays/engineered_future/

...it seems likely that most future agents will be engineered. So, I think we are pretty much talking about the same thing.

Re: universalism vs. objectivism, note that he does use the "u" word.