Eliezer_Yudkowsky comments on Can Humanism Match Religion's Output? - Less Wrong

Post author: Eliezer_Yudkowsky 27 March 2009 11:32AM




Comment author: Eliezer_Yudkowsky 27 March 2009 09:50:34PM 5 points

It would seem that Greene has deconverted you away from objective morality along different lines than I was trying for myself.

Anyway, your comment suggests that FAI should take its funding primarily from the most selfish of rationalists who still have a trace of altruism in them, since FAI would be the only project where expected utilons can be purchased so cheaply as to move them; and leave more altruistic funding to more mundane projects.

Now, what are the odds that would work in real life? I would think very low. FAI is likely to actually need those rare folk who can continue supporting it without a lot of in-person support and encouragement and immediately visible concrete results, leaving the others to those projects which are more intuitively encouraging to a human brain.

It seems to me that no matter what people claim about their selfishness or altruism, the real line is between those who can bring themselves to do something about it under conditions X and those who can't - and that the actual payoff in expected utilons matters little, but the reinforcing conditions matter a lot.

But perhaps I am mistaken.

Comment author: MichaelVassar 27 March 2009 10:27:12PM 13 points

Shhh.
No saying the F-acronym yet.