Today's post, The Meaning of Right, was originally published on 29 July 2008. A summary (taken from the LW wiki):

 

Eliezer's long-awaited theory of meta-ethics.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Setting Up Metaethics, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


Attempt at a four-sentence summary for practicing ethical agents:

You decide just how right some event is by approximating an ideal computation. This is why, if you think about it longer, you sometimes change your mind about how right an event was. This solves the problem of metaethics. However, most of the work for object-level ethicologists remains open, e.g., specifying the ideal computation we approximate when we decide how right some event is.
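A minimal sketch of the first sentence, assuming (purely for illustration, none of this is from the post) that the ideal computation weighs a list of moral considerations and that deliberating longer means folding in more of them; the function name and the numbers are hypothetical:

```python
# Toy illustration: "rightness" as an approximation of an idealized computation.
# The full computation is too expensive to run, so an agent runs it for a limited
# number of deliberation steps; running more steps can change the verdict.

def approx_rightness(event, deliberation_steps):
    """Approximate the idealized rightness of an event by folding in the first
    `deliberation_steps` considerations. Purely illustrative."""
    considerations = event["considerations"][:deliberation_steps]
    return sum(considerations)

event = {"considerations": [+3, -1, -4, +1]}  # hypothetical moral considerations

quick_verdict = approx_rightness(event, deliberation_steps=2)   # +2
slower_verdict = approx_rightness(event, deliberation_steps=4)  # -1

print(quick_verdict, slower_verdict)
```

Running the two calls gives +2 and then -1: the longer deliberation reverses the quick verdict, which is the "change your mind after thinking longer" phenomenon in the summary.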

[anonymous]

Suppose there is a switch, currently set to OFF, and it is morally desirable for this switch to be flipped to ON.

Let A equal B.

It seems that—all else being equal, and assuming no other consequences or exceptional conditions which were not specified—value flows backward along arrows of causality.

B equals A.

"Let A equal B" followed by "B equals A" is not a convincing argument.
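For concreteness, here is a minimal sketch of what the quoted claim that "value flows backward along arrows of causality" amounts to, under an illustrative assumption (not from the post) of a deterministic two-node causal chain with a hand-assigned terminal value:

```python
# Illustrative sketch: terminal value assigned to an outcome propagates backward
# to the action that causes it, so the action inherits instrumental value.

terminal_value = {"switch_ON": 1.0}       # the outcome we directly care about
causes = {"flip_switch": "switch_ON"}     # flip_switch -> switch_ON

def instrumental_value(action):
    """Value of an action = value of the outcome it (deterministically) causes."""
    outcome = causes[action]
    return terminal_value.get(outcome, 0.0)

print(instrumental_value("flip_switch"))  # 1.0: value flowed backward to the act
```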

... how do you know that the procedure, 'Do whatever Emperor Ming says' is not the entirety of should-ness?

Let Trevor equal Emperor Ming and this holds for egoism.

By representing right-ness as an attribute of objects, you can recruit a whole previously evolved system that reasons about the attributes of objects.

http://tinyurl.com/kpsourcesofknowledge1979

... is a 748-word excerpt from a 1979 speech by Karl Popper that has insights into sources of knowledge (including morals), the role of tradition, the lack of a blank slate, why we should not over-clarify, and much more.

Suppose there is a switch, currently set to OFF, and it is morally desirable for this switch to be flipped to ON.

Let A equal B.

It seems that—all else being equal, and assuming no other consequences or exceptional conditions which were not specified—value flows backward along arrows of causality.

B equals A.

Sorry, what precisely are you calling A and B here?

Let Trevor equal Emperor Ming and this holds for egoism.

What is the "this" that holds for egoism?

Shmi

So, is morality anything more than a set of precomputed shortcuts, some learned, some inherited, elevated into ethics by conveniently lost purposes?

Well, evolution was never very good at "purpose" anyhow.

There's no purpose to purpose, but there's still plenty of purpose in the object level.