Eliezer_Yudkowsky comments on Can Humanism Match Religion's Output? - Less Wrong
It would seem that Greene has deconverted you from objective morality along different lines than the ones I was attempting myself.
Anyway, your comment suggests that FAI should take its funding primarily from the most selfish of rationalists who still have a trace of altruism in them, since FAI would be the only project where expected utilons can be purchased cheaply enough to move them; and leave the more altruistic funders to more mundane projects.
Now, what are the odds that this would work in real life? Very low, I would think. FAI is likely to actually need those rare folk who can continue supporting it without much in-person encouragement or immediately visible concrete results, leaving the others to projects that are more intuitively encouraging to a human brain.
It seems to me that no matter what people claim about their selfishness or altruism, the real line is between those who can bring themselves to act under conditions X and those who can't - and that the actual payoff in expected utilons matters little, while the reinforcing conditions matter a lot.
But perhaps I am mistaken.
Shhh.
No saying the F-acronym yet.