Peter_de_Blanc comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think it would be helpful to talk about exactly what quantities one is risk averse about. If we can agree on a toy example, it should be easy to resolve the argument using math.
For instance, I am (reflectively) somewhat risk averse about the amount of money I have. I am not, on top of that, risk averse about the amount of money I gain or lose from a particular action.
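This distinction is easy to make precise. A minimal sketch, assuming log utility over total wealth as a stand-in for any concave (risk-averse) curve: a fair gamble over total wealth is unattractive, even though the same gamble valued linearly in gains and losses is exactly neutral.

```python
import math

def log_utility(wealth):
    """Concave (risk-averse) utility over total wealth."""
    return math.log(wealth)

wealth = 10_000.0  # hypothetical starting wealth
# A fair coin flip: win or lose $1,000.
win, lose = wealth + 1_000, wealth - 1_000

eu_gamble = 0.5 * log_utility(win) + 0.5 * log_utility(lose)
eu_decline = log_utility(wealth)

# Risk averse about total wealth: decline the fair gamble.
assert eu_gamble < eu_decline

# But valued linearly in the gain/loss itself, the gamble is neutral:
expected_gain = 0.5 * 1_000 + 0.5 * (-1_000)
assert expected_gain == 0
```

So being risk averse about total wealth does not require being risk averse, on top of that, about the gain or loss from any single action.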
Now how about human lives?
I'm not sure if I am risk averse about the amount of human life in all of spacetime.
I think I am risk averse about the number of humans living at once; if you added a second Earth to the solar system, complete with 6.7 billion humans, I don't think that would make the universe twice as good.
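A quick sketch of that intuition, using a square-root curve as a purely hypothetical stand-in for any concave valuation of population size: doubling the population raises the value by well under a factor of two.

```python
import math

def population_value(n):
    """Concave valuation of the number of humans alive at once.
    sqrt is an illustrative choice, not a claimed true utility function."""
    return math.sqrt(n)

one_earth = 6.7e9
# Doubling the population multiplies value by sqrt(2), about 1.41, not 2.
ratio = population_value(2 * one_earth) / population_value(one_earth)
assert 1.41 < ratio < 1.42
```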
I think death events might be even worse than you would predict from the reduction in human capital, but I am not risk averse about them: 400 deaths sound about twice as bad as 200 deaths if there are 6.7 billion people total.
Nor am I risk averse about the size of my personal contribution to preventing deaths. If I personally save 400 people, that is about twice as good as if I save 200 people.
I'd like to hear how you (and other commenters) feel about each of these measures.
I'm an ethical egoist - so my opinions here are likely to be off topic. Perhaps that makes non-utilitarian preferences seem less unreasonable to me, though.
If someone prefers saving 9 lives at p = 0.1 to 1 life with certainty - well, maybe they just want to make sure that somewhere in the multiverse is well-populated. It doesn't necessarily mean they don't care - just that they don't care in a strictly utilitarian way.
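The arithmetic behind that preference is worth spelling out, since the gamble's expected number of lives saved (0.9) is strictly less than the sure thing (1). A minimal sketch, assuming an illustrative convex utility over lives saved to model "wanting somewhere to be well-populated":

```python
def expected_utility(lottery, utility):
    """Expected utility of a lottery given as (probability, lives) pairs."""
    return sum(p * utility(n) for p, n in lottery)

gamble = [(0.1, 9), (0.9, 0)]   # save 9 lives with probability 0.1
certain = [(1.0, 1)]            # save 1 life with certainty

linear = lambda n: n            # risk-neutral in lives
convex = lambda n: n ** 2       # hypothetical: values populous outcomes superlinearly

# Risk-neutral: the certain life wins (expected lives 1.0 vs 0.9).
assert expected_utility(certain, linear) > expected_utility(gamble, linear)

# Convex utility: the gamble wins (0.1 * 81 = 8.1 vs 1.0).
assert expected_utility(gamble, convex) > expected_utility(certain, convex)
```

So preferring the gamble is consistent with caring about lives; it just isn't consistent with valuing them linearly.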
If you are risk-neutral, I agree that there is no reason to diversify.