Today's post, Moral Error and Moral Disagreement, was originally published on 10 August 2008. A summary (taken from the LW wiki):
How can you make errors about morality?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Sorting Pebbles Into Correct Heaps, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Reading http://lesswrong.com/lw/t1/arbitrary/ makes me think that a rational agent, even one whose greatest motivation is to maximize its paperclip production, would be able to determine that its desire for paperclips is more arbitrary than its tools for rationality. It could perform simulations or thought experiments to determine its most likely origins and find that, while many possible origins lead to the development of rationality, only a few paths specifically generate paperclip maximization. Pencil maximization and smiley-face maximization are equally likely, and even less likely goals such as human-friendliness would be pursued with the same rationality framework, because rationality works well in the Universe regardless of the goal it serves. There's justification for rationality but not for paperclip maximization.
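To make the counting intuition concrete, here's a minimal Monte Carlo sketch in Python. All the numbers in it (the chance an origin yields rationality, the size of the goal space) are invented for illustration, not anything from the post; the only point is the asymmetry between a convergent tool and one specific terminal goal.

```python
import random

# Toy model of the argument above. Assumption: rationality is
# instrumentally useful to almost any goal, so most origin processes
# produce it; a terminal goal, by contrast, is one draw from a large
# space of equally likely possibilities, so any *specific* goal
# (paperclips, pencils, smiley faces) is rare.

N_ORIGINS = 100_000        # simulated origin processes
P_RATIONALITY = 0.9        # assumed chance an origin yields rationality
GOAL_SPACE = 10_000        # assumed number of equally likely terminal goals
PAPERCLIP_GOAL = 0         # arbitrary label for "maximize paperclips"

rational = 0
paperclip = 0
for _ in range(N_ORIGINS):
    if random.random() < P_RATIONALITY:
        rational += 1
    if random.randrange(GOAL_SPACE) == PAPERCLIP_GOAL:
        paperclip += 1

print(f"origins yielding rationality:          {rational / N_ORIGINS:.2%}")
print(f"origins yielding paperclip maximizing: {paperclip / N_ORIGINS:.2%}")
```

Under these made-up parameters roughly 90% of simulated origins produce rationality while about 0.01% produce paperclip maximization in particular; the exact figures don't matter, only that the agent could run something like this and notice which of its features has the broader justification.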
That also means joy and happiness are not completely arbitrary for humans: they are tools for maximizing evolutionary fitness, and we can identify that as the justification for their development. Some acquired human tastes, fetishes, or habits, though, might well be described as arbitrary.