I'm still bothered by the fact that different people mean different and in fact contradictory things by "moral realism".
The SEP says that moral realism means thinking that (some) morality exists as objective fact, which can be discovered through thinking or experimentation or some other process which would lead all right-thinking minds to agree about it. That is also how I understood the term before reading these posts.
And yet Eliezer seems to call himself (or be called?) a moral realist, even though he explicitly only talks about MoralGood!Eliezer (or !Humanity, !CEV, etc.). This is confusing, and consequently irritating to people, myself included.
So when you ask if:
maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.
What do you mean? I think it's time to taboo "moral realism" because people have repeatedly failed to agree on what these words should mean.
What the SEP actually says is, "Moral realists are those who think that, in these respects, things should be taken at face value—moral claims do purport to report facts and are true if they get the facts right," and that's it.
This is all a matter of misunderstanding the meaning of words, and nobody is objectively right or wrong about that, since the disagreement is widespread - I'm not the only one to complain.
To me, an unqualified "fact" is, by implication, a claim about the universe itself, not a fact about the person holding a belief. An unqualified "fact" should be true or false in itself, without requiring you to further specify that you meant the instance of that fact that applies to some particular person with particular moral beliefs.
If the SEP's usage of "fact" is taken to mean "a fact about the person holding the moral belief", the fact being that the person does hold that belief, then I don't understand what it would mean to say that there aren't any moral facts (i.e. moral anti-realism). Would it mean claiming that people have no moral beliefs? That's obviously false.
...On Eliezer's view, as I understand it, huma
I thought that when humans and Clippy speak about morality, they speak about the same thing (assuming that they are not lying and not making mistakes).
The difference is in connotations. For humans, morality has a connotation "the thing that should be done". For Clippy, morality has a connotation "this weird stuff humans care about".
So, you could explain the concept of morality to Clippy, and then also explain that X is obviously moral. And Clippy would agree with you. It just wouldn't make Clippy any more likely to do X; the "should" emotion would not get across. The only result would be Clippy remembering that humans feel a desire to do X; and that information could be later used to create more paperclips.
Clippy's equivalent of "should" is connected to maximizing the number of paperclips. The fact that X is moral is about as important to it as the existence of a specific paperclip is to us. "Sure, X is moral. I see. I have no use for this fact. Now stop bothering me, because I want to make another paperclip."
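To illustrate the structure of that point, here is a minimal, hypothetical Python sketch (the agent names, the is_moral predicate, and both utility functions are invented for this example, not taken from the original discussion): both agents can agree on the same moral fact, but only the human's decision rule actually consults it.

    # Hypothetical sketch: both agents agree on the same moral fact,
    # but only the human's decision rule refers to it.

    def is_moral(action):
        # A factual question both agents answer the same way.
        return action == "help_stranger"

    def human_utility(action):
        # The human's "should" points at morality itself.
        return 1.0 if is_moral(action) else 0.0

    def clippy_utility(action):
        # Clippy's "should" points at paperclips; the moral fact carries no weight.
        return 1.0 if action == "make_paperclip" else 0.0

    actions = ["help_stranger", "make_paperclip"]
    print(max(actions, key=human_utility))   # -> help_stranger
    print(max(actions, key=clippy_utility))  # -> make_paperclip

Both agents would give the same answer if asked "is help_stranger moral?"; the difference lies entirely in which function each one maximizes.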
If moral realism is simply the view that some positive moral claims are true, without further metaphysical or conceptual commitments, then I can't see how it could be at odds with the orthogonality thesis. In itself, that view doesn't entail anything about the relation between intelligence levels and goals.
On the other hand, the conjunction of moral realism, motivational judgment internalism (i.e. the view that moral judgments necessarily motivate), and the assumption that a sufficiently intelligent agent would grasp at least some moral truths is at odds with the orthogonality thesis. Other combinations of views may yield similar results.
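One way to make that tension explicit (my own schematic rendering, not a quotation; the predicate letters M, G, Mot, S are invented for the sketch): let M(p) mean "p is a moral truth", G(a, p) mean "agent a grasps p", Mot(a, p) mean "a is motivated by p", and S(a) mean "a is sufficiently intelligent". Then the three views amount to:

\begin{align*}
&\text{(Realism)} && \exists p\; M(p) \\
&\text{(Internalism)} && \forall a \forall p\; \big(M(p) \wedge G(a,p)\big) \rightarrow \mathrm{Mot}(a,p) \\
&\text{(Grasping)} && \forall a\; S(a) \rightarrow \exists p\; \big(M(p) \wedge G(a,p)\big) \\
&\text{(Hence)} && \forall a\; S(a) \rightarrow \exists p\; \mathrm{Mot}(a,p)
\end{align*}

The conclusion places a substantive constraint on the motivations of every sufficiently intelligent agent, which is precisely the kind of constraint the orthogonality thesis denies.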
I would say that the orthogonality thesis does not necessarily imply moral non-realism... but some forms of moral non-realism do imply the orthogonality thesis, in which case rejecting the orthogonality thesis would require rejecting at least that particular kind of moral non-realism. This may cause moral non-realists of that variety to equate moral realism and a rejection of the OT.
For example, if you are a moral non-cognitivist, then according to the SEP, you believe that:
...when people utter moral sentences they are not typically expressing states of mind...
I don't think you have to be a moral anti-realist to believe the orthogonality thesis but you certainly have to be a moral realist to not believe it.
Now if you're a moral realist and you try to start writing an AI you're going to quickly see that you have a problem.
    # Initiate AI morality
    action_array.sort(key=morality, reverse=True)  # rank candidate actions by moral goodness
    do(action_array[0])                            # take the most moral action
Doesn't work. So you have to start defining "morality", and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn't rapidly lead to disastrous consequences. You end...
Specific types of moral realism require the orthogonality thesis to be false, and you could argue that if it were false, moral realism would be true.
Continuing my quest to untangle people's confusions about Eliezer's metaethics...
I wonder how confident you are that this is not, at least in part, Eliezer's own confusion about metaethics?
...I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question that the existence of God could be proven through reason (because it would get you suspected of being an atheist). It was even dangerous to advocate philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could use whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being
If morality exists in an objective manner, and our beliefs about it are correlated with what it is, then the orthogonality thesis is false.
If the orthogonality thesis is true, then simply being intelligent is not enough to deduce objective morality even if it exists, and any accurate beliefs we have about it are due to luck, or possibly due to defining morality in some way involving humans (as with Eliezer's beliefs).
That being said, the orthogonality thesis may be partially true. That is, it may be that an arbitrarily advanced intelligence can have any ut...
I agree with vallinder's point, and would also like to add that arguments for moral realism which aren't theistic or contractarian in nature typically appeal to moral intuitions. Thus, instead of providing positive arguments for realism, they at best merely show that arguments for the unreliability of realists' intuitions are unsound. (For example, IIRC, Russ Shafer-Landau in this book tries to use a parity argument between moral and logical intuitions, so that arguments against the former would have to also apply to the latter.) But clearly this is an es...
The thesis says:
more or less any level of intelligence could in principle be combined with more or less any final goal.
The "in principle" still allows for the possibility of a naturalistic view of morality grounding moral truths. For example, we could have the concept of: the morality that advanced evolutionary systems tend to converge on - despite the orthogonality thesis.
It doesn't say what is likely to happen. It says what might happen in principle. It's a big difference.
Just a guess here, but I think they take the orthogonality thesis to mean 'The morals we humans have are just a small subset of many possibilities, thus there is no preferred moral system, thus morals are arbitrary'. The error, of course, is in step 2. Just because our moral systems are a tiny subset of the space of moral systems doesn't mean no preferred moral system exists. What Eliezer is saying, I think, is that in the context of humanity, preferred moral systems do exist, and they're the ones we have.
EDIT: I'd appreciate knowing why this is being downvoted.
They're just conflating two different definitions of good <-- just read the part where I define Good[1] and Good[2]; the rest is specific to the comment I was replying to.
1) As they get evidence, rational agents will converge on what Good[1] is.
2) Everyone agrees that people should be Good[2].
3) Good[2] = Good[1]. (This is the false step.)
4) Therefore, all rational agents will want to be Good[1].
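Rendered schematically (my own paraphrase of the steps above; the predicate names are invented for the sketch):

\begin{align*}
&(1)\;\; \forall a\; \big(\mathrm{Rational}(a) \rightarrow \mathrm{Knows}(a, \mathrm{Good}_1)\big) \\
&(2)\;\; \forall x\; \mathrm{ShouldBe}(x, \mathrm{Good}_2) \\
&(3)\;\; \mathrm{Good}_2 = \mathrm{Good}_1 \quad \text{(the false step)} \\
&(4)\;\; \therefore\; \forall a\; \big(\mathrm{Rational}(a) \rightarrow \mathrm{WantsToBe}(a, \mathrm{Good}_1)\big)
\end{align*}

Without (3), the "should" in (2) and the thing rational agents converge on in (1) are two different predicates, and (4) doesn't follow.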
Your last post, concerning the confusion over universally compelling arguments, is similar. Just replace "good" with "mind". (As in, you...
I don't think the two are at odds in an absolute sense, but I think there is a meaningful anticorrelation.
tl;dr: Real morals, if they exist, provide one potential reason for AIs to use their intelligence to defy their programmed goals if those goals conflict with real morals.
If true morals exist (i.e. moral realism), and are discoverable (if they're not then they might as well not exist), then you would expect that a sufficiently intelligent being will figure them out. Indeed most atheistic moral realists would say that's what humans and progress are doing...
Continuing my quest to untangle people's confusions about Eliezer's metaethics... I've started to wonder if maybe some people have the intuition that the orthogonality thesis is at odds with moral realism.
I personally have a very hard time seeing why anyone would think that, perhaps in part because of my experience in philosophy of religion. Theistic apologists would love to be able to say, "moral realism, therefore a sufficiently intelligent being would also be good." It would help patch some obvious holes in their arguments and help them respond to things like Stephen Law's Evil God Challenge. But they mostly don't even try to argue that, for whatever reason.
You did see philosophers claiming things like that back in the bad old days before Kant, which raises the question of what's changed. I suspect the reason is fairly mundane, though: before Kant (roughly), it was not only dangerous to be an atheist, it was dangerous to question that the existence of God could be proven through reason (because it would get you suspected of being an atheist). It was even dangerous to advocate philosophical views that might possibly undermine the standard arguments for the existence of God. That guaranteed that philosophers could use whatever half-baked premises they wanted in constructing arguments for the existence of God, and have little fear of being contradicted.
Besides, even if you think an all-knowing being would also necessarily be perfectly good, it still seems perfectly possible to have an otherwise all-knowing being with a horrible blind spot regarding morality.
On the other hand, in the comments of a post on the orthogonality thesis, Stuart Armstrong mentions that:
This is not super-enlightening, partly because Stuart is talking about people whose views he admits he doesn't understand... but on the other hand, maybe Stuart agrees that there is some kind of conflict there, since he seems to imply that he himself rejects moral realism.
I realize I'm struggling a bit to guess what people could be thinking here, but I suspect some people are thinking it, so... anyone?