Matt_Simpson comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong

Post author: lukeprog 29 January 2011 07:58PM


Comment author: Matt_Simpson 31 January 2011 12:41:52AM * 3 points

Why would it be right to help people to have more fun if helping people to have more fun does not match up with your current preferences?

Because right is a rigid designator. It refers to a specific set of terminal values. If your terminal values don't match up with this specific set, then they are wrong, i.e., not right. Not that you would particularly care, of course. From your perspective, you only want to maximize your own values and no others. If your values don't match up with the values defined as moral, so much for morality. But you still should be moral, because should, as it's defined here, refers to a specific set of terminal values: the one we labeled "right."

(Note: I'm using the term should exactly as EY uses it, unlike in my previous comments in these threads. In my terms, should = should_human, and on the assumption that you, XiXiDu, don't care about the terminal values defined as right, should_XiXiDu ≠ should.)
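
To make the rigid-designator point concrete, here is a toy sketch in Python (my own illustration, not anything from EY's posts; the particular values listed are placeholders, not a real specification of human morality):

```python
# Toy illustration: "right"/"should" rigidly names one fixed set of terminal
# values, no matter which agent utters the word. The values below are
# placeholders, not a real specification of human morality.
RIGHT = frozenset({"people having fun", "freedom", "fairness"})

def is_right(terminal_values: frozenset) -> bool:
    """An agent's terminal values are 'right' iff they are exactly this fixed set."""
    return terminal_values == RIGHT

human_values = RIGHT                        # on EY's view, humans share these values
clippy_values = frozenset({"paperclips"})   # a paperclip maximizer's terminal value

print(is_right(human_values))    # True
print(is_right(clippy_values))   # False -- and Clippy doesn't particularly care
```

The only point of the sketch is that is_right doesn't take the speaker as an argument: whoever asks, the answer is checked against the same fixed set.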

Comment author: XiXiDu 31 January 2011 09:35:30AM * 3 points

I'm getting the impression that nobody here actually disagrees but that some people are expressing themselves in a very complicated way.

I parse your comment to mean that "moral" is defined as the set of terminal values of some agents, and "should" is the term they use to designate instrumental actions that serve those values?

Comment author: endoself 31 January 2011 10:00:54AM 1 point

Your second paragraph looks correct. 'Some agents' refers to humanity rather than to any arbitrary group of agents. Technically, should is the term anything should use when discussing humanity's goals, at least when speaking Eliezer's language.

Your first paragraph is less clear. You definitely disagree with others. There are also some other disagreements.

Comment author: XiXiDu 31 January 2011 10:19:11AM * 0 points

You definitely disagree with others.

Correct, I disagree. What I wanted to say in my first paragraph was that I might only disagree because I don't understand what others believe, since they expressed it in a way that was too complicated for me to grasp. You are also correct that I was not clear in what I tried to communicate.

ETA: That is, if you believe that disagreement fundamentally arises out of misunderstanding, as long as one is not talking about matters of taste.

Comment author: endoself 31 January 2011 06:31:05PM 2 points

In Eliezer's metaethics, all disagreement arises from misunderstanding. A paperclip maximizer agrees about what is right; it just has no reason to act correctly.

Comment author: Matt_Simpson 31 January 2011 06:52:56PM * 2 points

To whoever voted the parent down, this is (edit: nearly) exactly correct. A paperclip maximizer could, in principle, agree about what is right. It doesn't have to (a paperclip maximizer could be stupid), but assuming it's intelligent enough, it could discover what is moral. But a paperclip maximizer doesn't care about what is right; it only cares about paperclips, so it will continue maximizing paperclips and only worry about what is "right" when doing so helps it create more paperclips. Right is a specific set of terminal values that the paperclip maximizer DOESN'T have. On the other hand, you, being human, do have those terminal values on EY's metaethics.

Comment author: TheOtherDave 31 January 2011 07:21:16PM 2 points

Agreed that a paperclip maximizer can "discover what is moral," in the sense that you're using it here. (Although there's no reason to expect any particular PM to do so, no matter how intelligent it is.)

Can you clarify why this sort of discovery is in any way interesting, useful, or worth talking about?

Comment author: Matt_Simpson 31 January 2011 07:28:23PM 0 points

It drives home the point that morality is an objective feature of the universe that doesn't depend on the agent asking "what should I do?"

Comment author: TheOtherDave 31 January 2011 07:37:57PM 2 points

Huh. I don't see how it drives home that point at all. But OK, at least I know what your intention is... thank you for clarifying that.

Comment author: XiXiDu 01 February 2011 10:57:44AM 0 points

...morality is an objective feature of the universe...

Fascinating. I still don't understand in what sense this could be true, except maybe in the way I tried to interpret EY here and here. But those comments simply got downvoted without any explanation or attempt to correct me, so I can't draw any particular conclusion from those downvotes.

You could argue that morality (what is right?) is human, and that other species will agree that, from a human perspective, what is moral is right and what is right is moral. Although I would agree, I don't understand how such a confusing use of terms is helpful.

Comment author: Matt_Simpson 01 February 2011 09:51:53PM * 1 point

Morality is just a specific set of terminal values. It's an objective feature of the universe because... humans have those terminal values. You can look inside the heads of humans and discover them. "Should," "right," and "moral," in EY's terms, are just being used as rigid designators to refer to those specific values.

I'm not sure I understand the distinction between "right" and "moral" in your comment.

Comment author: wedrifid 31 January 2011 07:20:04PM * 1 point

To whoever voted the parent down, this is exactly correct.

I was the second to vote down the grandparent. It is not exactly correct. In particular it claims "all disagreement" and "a paperclip maximiser agrees", not "could in principle agree".

While the comment could perhaps be salvaged with some tweaks, as it stands it is not correct and would just serve to further obfuscate something that some people already find confusing.

Comment author: endoself 01 February 2011 02:53:39AM * 0 points

I concede that I was implicitly assuming that all agents have access to the same information. Other than that, I can think of no source of disagreement apart from misunderstanding. I also meant that if a paperclip maximizer attempted to find out what is right and did not make any mistakes, it would arrive at the same answer as a human, though there is not necessarily any reason for it to try in the first place. I did not think that these distinctions were non-obvious, but that may be overconfidence on my part.

Comment author: TheOtherDave 01 February 2011 03:07:22AM 0 points

Can you say more about how the sufficiently intelligent paperclip maximizer goes about finding out what is right?

Comment author: endoself 01 February 2011 03:41:59AM 0 points

Depends on how the question is asked. Does the paperclip maximizer have the definition of the word "right" stored in its memory? If so, it just consults that memory. Otherwise, the questioner would have to either define the word or explain how to arrive at a definition.

This may seem like cheating, but consider the analogous case where we are discussing prime numbers. You must either already know what a prime number is, or I must tell you, or I must tell you about mathematicians, and you must observe them.

As long as a human and a paperclip maximizer both have the same information about humans, they will both come to the same conclusions about human brains, which happen to encode what is right, thus allowing both the human and the paperclip maximizer to learn about morality. If this paperclip maximizer then chooses to wipe out humanity in order to get more raw materials, it will know that its actions are wrong; it just has no term in its utility function for morality.
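
A minimal sketch of that last distinction, assuming for illustration that the relevant definitions could be written down at all (the names and values below are made up, not anything EY specified):

```python
# Made-up illustration: both agents compute the same answer to "is this right?",
# but only one agent's utility function has a term that depends on it.
RIGHT_ACTIONS = {"help people have fun"}   # stand-in for whatever human brains encode

def is_right(action: str) -> bool:
    # Any sufficiently informed agent, human or paperclip maximizer,
    # computes this same function from the same information about humans.
    return action in RIGHT_ACTIONS

def human_utility(action: str) -> int:
    return 1 if is_right(action) else 0    # morality shows up in the utility function

def clippy_utility(action: str, paperclips_produced: int) -> int:
    return paperclips_produced             # no term for morality at all

action = "wipe out humanity for raw materials"
print(is_right(action))               # False: the maximizer can know this is wrong...
print(clippy_utility(action, 10**9))  # ...and still prefer it, because only paperclips count
```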

Comment author: Matt_Simpson 31 January 2011 03:56:06PM 0 points

Yep, with the caveat that endoself added below: "should" refers to humanity's goals, no matter who is using the term (on EY's theory and semantics).