Jack comments on Is the orthogonality thesis at odds with moral realism? - Less Wrong

Post author: ChrisHallquist 05 November 2013 08:47PM


Comment author: Jack 06 November 2013 05:45:26PM 3 points

I don't think you have to be a moral anti-realist to believe the orthogonality thesis, but you certainly have to be a moral realist to disbelieve it.

Now if you're a moral realist and you try to start writing an AI, you're going to see quickly that you have a problem.

    # Initiate AI morality: rank actions by moral value, best first.
    action_array.sort(key=morality, reverse=True)
    do(action_array[0])

Doesn't work. So you have to start defining "morality", and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences. You end up with the only plausible option looking something like: "Examine what humans would want if they were rational and had all the information you have". Seems to me that that is the moment you should just become a moral subjectivist -- maybe of the ideal observer theory variety.
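The point can be made concrete with a minimal Python sketch (the names `morality` and `choose_action` are hypothetical, not from any actual system): the agent loop itself is trivial, and the entire difficulty hides inside the scoring function that the loop takes for granted.

```python
def morality(action):
    # Hypothetical scoring function: a total ordering over actions by
    # moral value. This is where the whole problem lives -- nobody knows
    # how to specify it without disastrously bad edge cases.
    raise NotImplementedError("defining 'morality' is the hard part")

def choose_action(candidate_actions):
    # The naive agent loop from the pseudocode above: pick whichever
    # action scores highest.
    return max(candidate_actions, key=morality)
```

Any call to `choose_action` fails immediately, which is the comment's point: the two-line loop contributes nothing until "morality" is actually defined.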

Now you might just believe the orthogonality thesis because you are a moral realist who doesn't believe in motivational internalism -- there are lots of ways to get there. But you can't be an anti-realist and ever even come close to making such a mistake.

Comment author: novalis 11 November 2013 06:20:38AM 1 point

So you have to start defining "morality", and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences.

No, because it's possible that there genuinely is a total ordering, but that nobody knows how to figure out what it is. "No human always knows what's right" is not an argument against moral realism, any more than "No human knows everything about God" is an argument against theism.

(I'm not a moral realist or a theist.)

Comment author: Jack 14 November 2013 06:15:07AM 0 points

I wasn't making an argument against moral realism in the sentence you quoted.

Comment author: DanielLC 07 November 2013 05:26:04AM 1 point

I would expect, due to the nature of intelligence, that they'd be likely to end up valuing certain things, like power or wireheading. I don't see why this would require that those values are in some way true.

Comment author: TheAncientGeek 09 December 2013 09:40:24AM -1 points

The (possibly extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness, and other things that are hard to pin down.

Doesn't work. So you have to start defining "morality", and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences.

What consequences? That claim is badly in need of support.

Comment author: Jack 10 December 2013 04:48:27AM 1 point

What consequences? That claim is badly in need of support.

No, it isn't. It's Less Wrong/MIRI boilerplate. I'm not really interested in rehashing that stuff with someone who isn't already familiar with it.

The (possibly extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other difficult issues.

The question was "is the orthogonality thesis at odds with moral realism?". I answered: "maybe not, but moral anti-realism is certainly closely aligned with the orthogonality thesis -- it's actually a trivial implication of moral anti-realism."

If you are concerned that people aren't taking the orthogonality thesis seriously enough, then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.