Today's post, Not for the Sake of Happiness (Alone), was originally published on 22 November 2007. A summary (taken from the LW wiki):
Tackles the Hollywood Rationality trope that "rational" preferences must reduce to selfish hedonism - caring strictly about personally experienced pleasure. An ideal Bayesian agent - implementing strict Bayesian decision theory - can have a utility function that ranges over anything, not just internal subjective experiences.
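To make the summary's claim concrete, here is a minimal sketch (not from the post itself) of an expected-utility calculation in Python, where the utility function assigns value to external world states the agent may never personally experience. All state names, probabilities, and utility values below are hypothetical illustrations.

```python
from typing import Dict

def expected_utility(outcome_probs: Dict[str, float],
                     utility: Dict[str, float]) -> float:
    """Standard expected utility: sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * utility[outcome] for outcome, p in outcome_probs.items())

# The utility function ranges over facts about the world, including facts
# the agent never experiences (e.g. a friend flourishing after its death).
utility = {
    "friend_flourishes": 10.0,    # valued even if never observed
    "friend_suffers": -10.0,
    "agent_feels_pleasure": 1.0,  # pleasure can count, but needn't dominate
}

action_a = {"friend_flourishes": 0.9, "friend_suffers": 0.1}
action_b = {"agent_feels_pleasure": 1.0}

print(expected_utility(action_a, utility))  # 8.0
print(expected_utility(action_b, utility))  # 1.0 -> the agent prefers action_a
```

Nothing in the expected-utility formalism requires the outcomes to be internal subjective experiences; the agent above coherently prefers an outcome it will never feel.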
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Truly Part of You, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
For what it's worth, I value happiness alone (though not my happiness in particular).
Are you sure about that? You might be, but let's say I told you that in five years you would become demented. This dementia would not make you unhappy; in fact, it would make you slightly happier, and your condition would not make anyone else unhappier. A very artificial situation, but still: would you consider it a good thing that you would become demented?