MugaSofer comments on The Fallacy of Gray - Less Wrong

97 Post author: Eliezer_Yudkowsky 07 January 2008 06:24AM


Comment author: non-expert 10 January 2013 06:45:44AM 1 point

Perspectivism holds that all truth is subjective, but in practice this characterization is irrelevant to the extent that there is agreement on any particular truth. For example, "Murder is wrong," even if a subjective truth, is not so in practice, because there is collective agreement that murder is wrong. That is all I meant, but I agree that it was not clear.

Comment author: MugaSofer 10 January 2013 10:37:35AM * -2 points

Wait, does this "truth is relative" stuff only apply to moral questions? Because if it does, then while I personally disagree with you, there's a sizable minority here who won't.

Comment author: non-expert 14 January 2013 08:01:43AM 0 points

What do you disagree with? That "truth is relative" applies only to moral questions, or that it applies to more than moral questions?

If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read...even EY :)

Comment author: MugaSofer 14 January 2013 09:38:56AM * 1 point

My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective".

Of course, an AI with limited reasoning capacity might judge wrongly, but then, humans do likewise - see, e.g., the Nazis.

EDIT: Regarding EY's writings on the subject, he wrote a whole Metaethics Sequence, much of which leads up to or directly discusses this exact topic. Unfortunately, I'm having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for "metaethics sequence".

Comment author: non-expert 14 January 2013 06:04:03PM 0 points

We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective".

I don't dispute the possibility that your conclusion may be correct; I'm wondering about the basis on which you believe your position to be correct. Put another way: why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith into the conversation -- I'm comfortable with that as the reason, but I wonder whether you are, or whether you have a different basis for the position.

From my view, moral truths may NOT be relative, but I have no basis on which to know that, so I've chosen to operate as if they are relative, because (i) if moral truths exist but I don't know what they are, I'm in the same position as if they did not exist or were relative, and (ii) moral truths may not exist. This doesn't mean you don't use morality in your life; it's just that you need to believe, without substantiation, that the morals you subscribe to conform to universal morals, if they exist.

OK, I'll try to search for those EY writings, thanks.

Comment author: brianmts 28 May 2013 06:37:57PM * 0 points

Comment author: MugaSofer 29 May 2013 10:53:55AM * -1 points

I, ah ... I'm not seeing anything here. Have you accidentally posted just a space or something?