Today's post, Fake Utility Functions, was originally published on 06 December 2007. A summary (taken from the LW wiki):

 

Describes the seeming fascination that many have with trying to compress morality down to a single principle. The sequence leading up to this post tries to explain the cognitive twists whereby people smuggle all of their complicated other preferences into their choice of exactly which acts they try to justify using their single principle; but if they were really following only that single principle, they would choose other acts to justify.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Uncritical Supercriticality, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[This comment is no longer endorsed by its author]

How did you even do this?

I commented, and then I realized that I posted it on the wrong tab so I edited the comment and removed all the text.

What impresses me is that there are no vote buttons. Why on earth would the Reddit devs special-case that, I find myself wondering.

I also retracted the statement and then removed all the text.

[This comment is no longer endorsed by its author]

As someone who does value happiness alone, I'd like to say that it's still not that simple (there's no known way to calculate the happiness of a given system), and that I understand full well that maximizing it would be the end of all life as we know it. What we end up with will be very, very happy, and that's good enough for me, even if it isn't really anything besides happy (such as remotely intelligent).

Sadly, I still find many, many people arguing that "altruism" or "selflessness" just can't exist, that everyone is purely selfish, and that helping others is only done either because you expect them to pay you back later on (IPD-like) or because it makes you feel good to do so.

I tried many arguments, from opportunity cost (yes, giving to charity may give a "warm fuzzy," as Eliezer says, but spending the same money on a video game, going to a concert, or eating yummy food can easily give more happiness) to exceptional situations (an atheist (so you can't invoke fear of hell) in the Resistance withstanding torture rather than betray his friends), but they always manage to dodge the issue and find pseudo-arguments like "they still do it only because of fear of shame".

So I ended up trying to imagine hypothetical situations like "Imagine aliens kidnap you and offer you a choice between pressing a Blue Button or a Red Button. If you press the Blue Button, you'll forget everything about the aliens, wake up the next day in perfect health, and find a winning lottery ticket in your mailbox, but the aliens will destroy the Earth once you die a natural death. If you press the Red Button, the aliens will offer Earth a cure for cancer, AIDS, ... but they'll torture you for months and then kill you. Do you claim no human would ever press the Red Button?" But I guess that even then they'll say "but the shame felt while pressing the Blue Button will be too high".

And anyway, "escalating" the conflict at this point doesn't feel like a "clean" answer to me. So I'm still looking for a cleverer way of making people understand that humans are more complicated and you can't explain all of "altruism" by just guilt feelings and warm fuzzies.

...humans are more complicated and you can't explain all of "altruism" by just guilt feelings and warm fuzzies.

Chimpanzees also engage in altruism, even interspecies altruism. Humans tend to go a step further by using moral language and formalizing their morality. But how much of it is done for the purpose of signaling and rationalization compared to the altruism we share with chimpanzees and other animals?

But how much of it is done for the purpose of signaling and rationalization

What do you mean by "purpose" in this context? A "purpose" is a property of an optimization process, so the answer will depend on which optimization process you're talking about. Are you asking about evolution or our conscious thought process?

Wasn't there a different article about this?

...but spending the same money on a video game, going to a concert, or eating yummy food can easily give more happiness

Not to mention donating to a less efficient charity that generates more warm fuzzies.

"but the shame felt while pressing the Blue Button will be too high"

Don't forget to add that they'll immediately wipe your memory the moment you press the button.

I'd suggest bringing up time discounting. Favoring your earlier self isn't technically altruistic, but it's still something other than caring about your total happiness. Also, addictions.