Will_Newsome comments on Is Rationality Maximization of Expected Value? - Less Wrong
The whole nonlinear utility thing makes this specific point wrong, but:
It seems like the main counter-intuitive part of expected utility theory (or counter-expected utility theory part of intuition) is just this type of question. See: Pascal's Mugging.
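As a rough illustration of the mugging arithmetic (the payoff and probability below are made-up stand-ins, not from anyone's actual argument): a naive expected-utility calculation is dominated by even an absurdly small probability of an astronomically large payoff.

```python
from fractions import Fraction

# Hypothetical numbers for a Pascal's Mugging scenario: the mugger
# promises an enormous payoff, and we assign him a tiny credence.
payoff = 10**1000                 # utility if the mugger is honest (made up)
p_honest = Fraction(1, 10**100)   # credence that he is (made up)

expected_utility = p_honest * payoff  # = 10**900, still astronomically large
print(expected_utility > 5)           # True: naive EU says hand over the $5
```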
Humans tend to be loath to trade off high probabilities of small benefits for low probabilities of big benefits, even in cases where linearity is very plausible, such as the number of people saved.
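A toy version of the trade-off meant here, assuming utility linear in lives saved (the specific scenario is illustrative, not from the original comment):

```python
# Compare a sure thing against a gamble with higher expected value.

def expected_lives_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * n for p, n in outcomes)

sure_thing = [(1.0, 400)]        # save 400 people with certainty
gamble = [(0.9, 500), (0.1, 0)]  # 90% chance of saving 500, else nobody

print(expected_lives_saved(sure_thing))  # 400.0
print(expected_lives_saved(gamble))      # 450.0 -- higher EV, yet many prefer the sure thing
```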
But people seem to just as often make the opposite mistake about various scary risks.
Are people just bad at dealing with small probabilities?
What does that mean for coming to a reflective equilibrium about ethics?
Are you talking about CEV? Civilization as we know it will end long before people agree about metaethics.
Before CEV, we have to make a rough estimate of our personal extrapolated volition so we know what to do. One way to do this is to extrapolate our volition as far as we can see by, e.g., thinking about ethics.
I intuitively feel that X is good and Y is bad. I believe morality will mostly fit my intuitions. I believe morality will be simple. I know my intuitions, in this case, are pretty stupid. I can't find a simple system that fits my intuitions here. What should I do? How much should I suck it up and take the counterintuitiveness? How much should I suck it up and take complex morality?
These are difficult questions.