army1987 comments on No Universally Compelling Arguments in Math or Science - Less Wrong

30 points. Post author: ChrisHallquist, 05 November 2013 03:32AM


Comment author: [deleted] 06 November 2013 10:26:57AM 7 points

Clippy knows what is moral and what isn't. He just doesn't care.

Comment author: Jack 06 November 2013 03:47:07PM 5 points

Imagine if humans had never broken into different groups and we all spoke the same language. No French, no English, just "the Language". People study the Language, debate it, etc.

Then one day intelligent aliens arrive. Philosophers immediately begin debating: do these aliens have the Language? On the one hand, they're making noises with what appears to be something comparable to a mouth, the noises have an order and structure to them, and they communicate information. But what they do sounds nothing like "the Language". They refer to objects with different sounds than the Language requires, and sometimes make the sound that describes what an object is like after the sound that refers to the object.

"Morality" has a similar type-token ambiguity: it can refer to our values in particular or to values in general. Saying that Clippy knows what is moral but doesn't care is true under the token interpretation, but not under the type interpretation. The word "morality" has meanings and connotations that imply Clippy has a morality, just a different one -- in the same way that the aliens have a language, just a different one.

Comment author: [deleted] 07 November 2013 10:34:12AM 5 points

So, I guess the point of EY's metaethics can be summarized as ‘by “morality” I mean the token, not the type’.

(Which is not a problem IMO, as there are unambiguous words for the type, e.g. “values” -- except insofar as people are likely to misunderstand him.)

Comment author: Viliam_Bur 07 November 2013 07:28:10PM 2 points

‘by “morality” I mean the token, not the type’

Especially because the whole point is to optimize for something. You can't optimize for a type that could have any value.

Comment author: Eugine_Nier 07 November 2013 03:23:29AM -2 points

How is this different from:

The creationist knows what I believe but doesn't care.

Comment author: fubarobfusco 07 November 2013 08:18:29AM 4 points

The "dragon in my garage" argument suggests that the supernaturalist already knows the facts of the natural world, but doesn't care.

But the sense in which "Clippy knows what is moral" is that Clippy can correctly predict humans, and "morality" has to do with what humans value and approve of — not what paperclippers value and approve of.

Comment author: nshepperd 07 November 2013 05:07:00AM 1 point

A creationist is mistaken about the origin of the Earth (they believe the Earth was created by a deity).

Comment author: [deleted] 07 November 2013 09:32:05AM 1 point

Aumann's agreement theorem prevents that from happening to ideal epistemic rationalists; there's no analogue for instrumental rationality.

But...

Aumann's agreement theorem assumes common priors; what I described can only happen to instrumental rationalists with different utility functions. So the question is why we expect all rationalists to use one true prior (e.g. Solomonoff induction) but each to use their own utility function.
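A toy numerical sketch of the common-priors point (all numbers are hypothetical, and this isn't Aumann's theorem itself, which concerns common knowledge of posteriors -- it just shows the role the common-prior assumption plays): with a shared prior, the same evidence drives two Bayesian updaters to the same posterior; with different priors, updating correctly on the same evidence doesn't produce agreement.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' rule, for a binary hypothesis H."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Two agents observe the same evidence E, with P(E|H) = 0.8, P(E|~H) = 0.2.
# Common prior P(H) = 0.5: both reach the same posterior.
a = posterior(0.5, 0.8, 0.2)
b = posterior(0.5, 0.8, 0.2)
print(a == b)  # True: agreement

# Different priors (0.5 vs 0.1): same evidence, correct updates, no agreement.
c = posterior(0.1, 0.8, 0.2)
print(a == c)  # False
```

There is no analogous convergence mechanism for utility functions: two agents can share every belief above and still optimize for different things.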