Comment author: Cameron_Taylor 27 April 2009 03:32:59PM 0 points

Good point, Bacon. I've been wondering where the implicit assumption that rational agents have an altruistic agenda came from. The assumption seems to permeate a rather large number of posts.

When Omega offers to save lives, why do I care? To be perfectly honest, my own utility function suggests that those extra billions are a liability to my interests.

When I realise that my altruistic notions are in conflict with my instinctive drive for status and influence, why do I "need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness"? If anything it seems somewhat more rational to acknowledge the drive for status and self-interest as the key component and satisfy those criteria more effectively.

This isn't to say I don't have an altruistic agenda that I pursue. It is just that I don't see that agenda itself as 'rational' at all. It is somewhere between merely arbitrary and 'slightly irrational'.

With that caveat, this summary and plenty of the posts contained within are damn useful!

Comment author: SirBacon 27 April 2009 07:47:32PM 1 point

"With that caveat, this summary and plenty of the posts contained within are damn useful!"

I resoundingly agree.

That said, Eliezer is attempting to leverage the sentiments we now call "altruistic" into efficient other-optimizing. What if all people are really after is warm fuzzies? Mightn't they then shrink from the prospect of optimally helping others?

Hobbes gives us several possible reasons for altruism, none of which seem to be conducive to effective helping:

"When the transferring of right is not mutual, but one of the parties transferreth in hope to gain thereby friendship or service from another, or from his friends; or in hope to gain the reputation of charity, or magnanimity; or to deliver his mind from the pain of compassion [self-haters give more?]; or in hope of reward in heaven; this is not contract, but gift, free gift, grace: which words signify one and the same thing."

There is also the problem of epistemic limitations around other-optimizing. Charity might remove more utilons from the giver than it bestows upon the receiver, if only because it's difficult to know what other people need and far easier to know what one needs oneself.

Comment author: SirBacon 26 April 2009 08:03:33PM 9 points

"...then there's the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win..."

Why should rationalists necessarily care a lot about other people? If we are to avoid circular altruism and the nefarious effects of other-optimizing, the best amount of caring might be less than "a lot."

Additionally, caring about other people in the sense of seeking emotional gratification primarily in tribe-like social rituals may be truly inimical to dedicating one's life to theoretical physics, math, or any other far-thinking discipline.

Caring about other people may entail involvement in politics, and local politics can be just as mind-killing as national politics.

Comment author: SirBacon 20 April 2009 11:34:07PM 0 points

It might be prudent to avoid associating rationality with particular people or social institutions.

There's always the risk that particular instances of rationality will result in disaster, or that Bad Guys will be painstakingly rational. In the early stages, we wouldn't want to suffer the fate of religions, which often take reputation hits when their followers do nasty things.

Rationality could be advertised as a morally neutral instrumental value, i.e., Better Living Through Rationality.

On the other hand, we could sell rationality as a tool for atheists, drug policy activists, and stockbrokers, and publicly associate with their successes.

Comment author: SirBacon 17 April 2009 05:03:33AM 2 points

I would venture that emotivism can be a way of setting up short-run incentives for the achievement of sub-goals. If we think "Bayesian insights are good," we can derive some psychological satisfaction from things which, in themselves, do not have direct personal consequences.

By attaching "goodness" to things too far outside our feedback loops, like "ending hunger," we get things like counterproductive aid spending. By attaching "goodness" too strongly to subgoals close to individual feedback loops, like "publishing papers," we get a flood of inconsequential academic articles at the expense of general knowledge.
