His points about risk aversion are confused. If you make choices consistently, you are maximizing the expected value of some function, which we call "utility" (this is the von Neumann-Morgenstern theorem). Utility may grow sublinearly with respect to some other real-world variable like money or the number of happy babies, but utility itself cannot have diminishing marginal utility, and you cannot be risk-averse with respect to your utility. The distinction between one big bet and many small bets is also irrelevant: when you optimize your decision over one big bet, you either maximize expected utility or exhibit circular preferences.
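A minimal numerical sketch of this point (the payoffs and the log utility function are my illustrative assumptions, not anything from the original discussion): an agent that is "risk-averse" in money can still be a straightforward expected-utility maximizer, because the risk aversion lives entirely in the concavity of the utility function.

```python
import math

def expected_utility(outcomes, u):
    """Expected utility of a lottery given as (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in outcomes)

u = math.log  # concave utility: diminishing marginal utility of *money*

gamble = [(0.5, 100.0), (0.5, 10000.0)]  # expected money = 5050
sure_thing = [(1.0, 2000.0)]             # certain 2000, less than 5050

# The agent prefers the sure 2000 despite its lower expected *money*...
assert expected_utility(sure_thing, u) > expected_utility(gamble, u)

# ...yet it is already maximizing expected utility; there is no further
# "risk aversion with respect to utility" left to apply.
certainty_equivalent = math.exp(expected_utility(gamble, u))
print(certainty_equivalent)  # 1000.0, the geometric mean of 100 and 10000
```

The certainty equivalent (1000) falling below the expected payoff (5050) is exactly what "risk-averse in money" means here, and it is fully captured by maximizing E[u].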
If you make choices consistently, you are maximizing the expected value of some function, which we call "utility".
Unfortunately, in real life many important choices are made just once, drawn from a set of options that is not well delineated (because we don't have time to enumerate them), in a situation where we lack the resources to rank all of them. In these cases the hypotheses of the von Neumann-Morgenstern utility theorem don't apply: the set of choices is unknown, and so is the ordering, even on the elements we know are members of the set.
John Baez's This Week's Finds (Week 311) [Part 1; added for convenience following Nancy Lebovitz's comment]
John Baez's This Week's Finds (Week 312)
John Baez's This Week's Finds (Week 313)
I really like Eliezer's response to John Baez's last question in Week 313 about environmentalism vs. AI risks. I think it satisfactorily deflects much of the concern that I had when I wrote The Importance of Self-Doubt.
Eliezer says
This is true as stated, but it ignores an important issue: there is feedback between mundane current events and the eventual potential extinction of the human race. For example, the United States' involvement in Libya has a (small) influence on existential risk (I don't have an opinion as to what sort). Any impact on human society due to global warming likewise has some influence on existential risk.
Eliezer's points about comparative advantage, and about existential risk in principle dominating all other considerations, are valid, important, and well made, but passing from principle to practice is very murky in the complex human world that we live in.
Note also the points that I make in Friendly AI Research and Taskification.