When I was a graduate student at the University of Notre Dame, I received a monthly living stipend of roughly $1,600. I committed to giving ~10% of it to charity, and I had read in Peter Singer's book The Life You Can Save that one of the most cost-effective charities out there was Population Services International (PSI). Singer reported that GiveWell, a leading charity evaluator, had estimated that PSI's efforts saved lives at a cost of $650-$1000 each (pp. 88-89). So I set up a recurring monthly donation of $160 to PSI and kept it up for 15 months, for a total donation of $2,400.
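For anyone who wants to check the arithmetic, here is a quick sketch of the figures above (all inputs come from the numbers in this post; the lives-saved range simply divides the total by GiveWell's $650-$1000 cost-per-life estimate, so it is a rough back-of-the-envelope bound, not a precise impact claim):

```python
# Back-of-the-envelope check of the donation figures quoted above.
stipend = 1600                        # monthly stipend in dollars
monthly_gift = round(stipend * 0.10)  # ~10% commitment -> $160/month
months = 15
total = monthly_gift * months         # total given over 15 months

# GiveWell's estimate (as reported by Singer): $650-$1000 per life saved.
lives_low = total / 1000              # pessimistic end of the range
lives_high = total / 650              # optimistic end of the range

print(f"total donated: ${total}")
print(f"estimated lives saved: {lives_low:.1f} to {lives_high:.1f}")
```

Running this confirms the $2,400 total and suggests the donations fell somewhere in the range of roughly two to three and a half lives saved, under GiveWell's estimate.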
I've been meaning to post the above information publicly for a while, but was finally pushed over the edge by reading one of Eliezer's posts from a couple of years back, Why Our Kind Can't Cooperate:
Since Eliezer's post is about rationalists, he stresses the issue of which arguments people voice. However, we know that simply telling other people that you've given to charity makes them more likely to give, a point that Singer himself has emphasized.
I propose a thread for people to publicize their charitable donations. In light of the above, I'm especially interested to hear from people who've donated to the Singularity Institute for Artificial Intelligence. Once I acquire a regular source of income again in March, I intend to continue to primarily direct my charitable giving towards PSI, but maybe someone in this thread will persuade me to start giving to the Singularity Institute.