Good point Bacon. I've been wondering where the implicit assumption that rational agents have an altruistic agenda came from. The assumption seems to permeate a rather large number of posts.
When Omega offers to save lives, why do I care? To be perfectly honest, my own utility function suggests that those extra billions are a liability to my interests.
When I realise that my altruistic notions conflict with my instinctive drive for status and influence, why do I "need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness"? If anything, it seems somewhat more rational to acknowledge the drive for status and self-interest as the key component and satisfy those criteria more effectively.
This isn't to say I don't have an altruistic agenda that I pursue. It is just that I don't see that agenda itself as 'rational' at all; it sits somewhere between merely arbitrary and 'slightly irrational'.
With that caveat, this summary and plenty of the posts contained within are damn useful!
"With that caveat, this summary and plenty of the posts contained within are damn useful!"
I resoundingly agree.
That said, Eliezer is attempting to leverage the sentiments we now call "altruistic" into efficient other-optimizing. But what if all people are really after is warm fuzzies? Mightn't they then shrink from the prospect of optimally helping others?
Hobbes gives us several possible reasons for altruism, none of which seems conducive to effective helping:
"When the transferring of right is not mutual, but one of the parties tr...
This sequence ran from March to April of 2009 and dealt with the topic of building rationalist communities that could systematically improve on the art, craft, and science of human rationality. This is a highly forward-looking sequence - not so much an immediately complete recipe, as a list of action items and warnings for anyone setting out in the future to build a craft and a community.