Roko comments on Can Humanism Match Religion's Output? - Less Wrong

Post author: Eliezer_Yudkowsky 27 March 2009 11:32AM 45 points


Comment deleted 27 March 2009 05:31:06PM
Comment author: sketerpot 27 March 2009 07:29:23PM 6 points

I don't want to become a "cleaning up this world"-bot. I have my own goals and aims in life, and they are distinct from the goal of "producing as much positive utility for humanity as possible". I'd rather spend £99 out of every £100 on myself than give it to a random poor person in the third world, because I am more important than s/he is (more important in the subjective, antirealist sense).

Hey, that's fine. You certainly don't have to try to justify your basic utility function. But for people who want to do more to help the rest of the world (even if we prioritize ourselves first), it can be hard just to get ourselves to act rationally in pursuit of this goal. That's the issue at hand.

Comment author: Eliezer_Yudkowsky 27 March 2009 09:50:34PM 5 points

It would seem that Greene has deconverted you away from objective morality along different lines than I was trying for myself.

Anyway, your comment suggests that FAI should take its funding primarily from the most selfish of rationalists who still have a trace of altruism in them, since FAI would be the only project where expected utilons can be purchased so cheaply as to move them; and leave more altruistic funding to more mundane projects.

Now, what are the odds that would work in real life? I would think very low. FAI is likely to actually need those rare folk who can continue supporting it without a lot of in-person support and encouragement, or immediately visible concrete results, leaving the others to those projects which are more intuitively encouraging to a human brain.

It seems to me that no matter what people claim about their selfishness or altruism, the real line is between those who can bring themselves to do something about it under conditions X and those who can't - and that the actual payoff in expected utilons matters little, but the reinforcing conditions matter a lot.

But perhaps I am mistaken.

Comment author: MichaelVassar 27 March 2009 10:27:12PM 13 points

Shhh.
No saying the F-acronym yet.