As someone who does value happiness alone, I'd like to say that it's still not that simple (there's no known way to calculate the happiness of a given system), and that I understand full well that maximizing it would be the end of all life as we know it. Whatever we end up with will be very, very happy, and that's good enough for me, even if it isn't anything besides happy (such as remotely intelligent).
Today's post, Fake Utility Functions, was originally published on 06 December 2007. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Uncritical Supercriticality, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.