D_Malik comments on Open thread, Mar. 2 - Mar. 8, 2015 - Less Wrong Discussion
In Pascal's Mugging, the problem seems to be the use of expected values, which can be highly distorted by even a single outlier.
The post led to a huge number of proposed solutions. Most of them seem pretty bad, and many don't even address the problem itself, just the specific thought experiment. Others, like bounding the utility function, are okay, but not really elegant. We don't really want to disregard high-utility futures; we just don't want them to heavily distort our decision process. But if we make decisions based on expected utility, they inevitably do.
So why is it taken as a given that we should decide based on expected utility? Why not "median utility"? That is, look at the space of all possible outcomes and select the point where exactly 50% of them are better and exactly 50% are worse. Then choose actions so that this median future is the best one available.
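To make the contrast concrete, here is a minimal sketch (all probabilities and utilities are invented for illustration) of how a mugging-style outlier dominates the mean of a utility distribution while leaving its median untouched:

```python
# Illustrative only: a toy outcome distribution with one huge,
# astronomically unlikely payoff, as in Pascal's Mugging.

outcomes = [
    # (probability, utility) -- made-up numbers
    (0.5,       10),      # ordinary good outcome
    (0.4999999, -10),     # ordinary bad outcome
    (1e-7,      1e12),    # the mugger's promised payoff
]

expected_utility = sum(p * u for p, u in outcomes)

def median_utility(outcomes):
    """Return the utility at the 50th percentile of the distribution."""
    cumulative = 0.0
    for p, u in sorted(outcomes, key=lambda pu: pu[1]):
        cumulative += p
        if cumulative >= 0.5:
            return u

print(expected_utility)          # ~100000 -- dominated by the outlier
print(median_utility(outcomes))  # 10 -- the outlier gets zero weight
```

A single one-in-ten-million outcome moves the expectation from roughly zero to about 100,000, but it cannot move the median at all, which is exactly the behavior being proposed.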
I'm not certain that this would generate consistent behavior, although you could possibly fix that by making it self-referential: predetermine your future actions now so that they lead to the future you desire, or modify your decision-making algorithm to the same effect.
I'm more concerned that there are also weird edge cases where this doesn't line up with how we actually make decisions. It solves the outlier problem by giving outliers absolutely zero weight. If you had the choice to buy a one-dollar lottery ticket with a 20% chance of winning millions, you would pass it up. (Although, if you expect to encounter many such opportunities in the future, you would predetermine yourself to take them, but only up to a certain point; and this intuitively seems like the sort of reasoning humans use when they choose to obey expected utility calculations.) The same goes for avoiding large risks.
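A rough sketch of that lottery edge case under the median rule (the 20% chance and million-dollar prize come from the comment; everything else is an assumption):

```python
# Toy comparison of the expected-utility rule and the median rule
# on a $1 ticket with a 20% chance of a $1,000,000 prize.

p_win, prize, cost = 0.20, 1_000_000, 1

# Expected-utility rule: clearly buy the ticket.
ev_one_ticket = p_win * prize - cost   # ~ $199,999

# Median rule, one ticket: you lose 80% of the time, so the
# median outcome is -$1 and you pass.

# Median rule after precommitting to n tickets: once the chance of
# winning at least once exceeds 50%, the median future contains a
# prize. (Simplification: assumes at most one win sits at the 50th
# percentile, which holds for these small n.)
def median_n_tickets(n):
    p_no_win = (1 - p_win) ** n
    return -n * cost if p_no_win > 0.5 else prize - n * cost

for n in (1, 2, 3, 4):
    print(n, median_n_tickets(n))
# n = 1..3: median is -$n, so a median agent passes.
# n = 4: 0.8**4 = 0.4096 < 0.5, so the median future now includes
# a win, and precommitting to four tickets suddenly looks good.
```

This matches the intuition in the parenthetical: a median agent rejects each gamble in isolation but would precommit to a bundle of them.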
But not all is lost: there wasn't any a priori reason to believe expected utility was the ideal human decision algorithm either. There are infinitely many possible algorithms for converting a distribution into a single value. Granted, most of them aren't as elegant as these two, but who says humans are elegant?
We should expect this from evolution. Not just because evolution is messy, but because any creature that actually followed expected utility calculations in extreme cases would almost certainly die. The best strategy would be to follow them in everyday circumstances but break from them in the extremes.
The point is just that the utility function isn't the only thing we need to worry about. I think that refusing to pay the Mugger, or declining to worship the Christian God, are perfectly valid options, even if you really do have a boundless utility function and non-balancing priors. And most likely we will be fine if we do that.
VNM utility is basically defined as "that function whose expectation we maximize". There exists such a function as long as you obey some very unobjectionable axioms. So instead of saying "I do not want to maximize the expectation of my utility function U", you should say "U is not my utility function".
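For reference, the theorem being invoked is the von Neumann–Morgenstern representation theorem: if a preference relation over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function U, unique up to positive affine transformation, such that

```latex
L \preceq M \iff \mathbb{E}[U(L)] \le \mathbb{E}[U(M)]
\quad \text{for all lotteries } L, M.
```

So "maximize expected utility" is not an extra assumption layered on top of U; it is how U is constructed from the preferences in the first place.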
The problem with this argument is that it boils down to: if we accept intuitive axioms X, we get counter-intuitive result Y. But why is ~Y any less worthy of being an axiom than X?
You miss my point. I am objecting to those axioms. I don't want to change my utility function. If God is real, perhaps he really could offer infinite reward or infinite punishment. You might really think murdering 3^^^^3 people is just that bad.
However, these events have such a low probability that I can safely choose to ignore them, and that's a perfectly valid choice. Maximizing expected utility means you will almost certainly do worse in the real world than an agent that doesn't.
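A quick Monte Carlo of that "almost certainly do worse" claim, with every number invented for illustration: an agent who pays each $5 mugger versus one who always refuses.

```python
import random

# Invented-for-illustration parameters.
P_MUGGER_HONEST = 1e-12   # assumed chance the mugger actually pays
PAYOUT = 1e15             # assumed promised reward
COST = 5                  # price of complying
N_MUGGINGS = 10_000       # muggings per simulated lifetime
N_RUNS = 1_000            # simulated lifetimes

def lifetime_wealth_if_paying():
    wealth = 0.0
    for _ in range(N_MUGGINGS):
        wealth -= COST
        if random.random() < P_MUGGER_HONEST:
            wealth += PAYOUT
    return wealth

runs = [lifetime_wealth_if_paying() for _ in range(N_RUNS)]

# Per mugging, the payer's expected value is positive:
# -5 + 1e-12 * 1e15 = +$995. But the chance of even one payout in
# 10,000 muggings is about 1e-8, so essentially every simulated
# lifetime ends at -$50,000, while the refuser stays at $0.
print(sum(w < 0 for w in runs), "of", N_RUNS, "paying lifetimes end in the red")
```

The expectation-maximizer "wins" on average only because of lifetimes too rare to ever be observed.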
Which axiom do you reject?
Continuity, I would say.
That makes no sense in context, since continuity is equivalent to saying (roughly) 'If you prefer staying on this side of the street to dying, but prefer something on the other side of the street to staying here, there exists some probability of death which is small enough to make you prefer crossing the street.'
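Formally, continuity is usually stated as follows (here A = dying, B = staying put, C = the thing across the street in the example above):

```latex
A \preceq B \preceq C \implies
\exists\, p \in [0, 1] :\;
p\,A + (1 - p)\,C \sim B
```

In the street-crossing reading: some mixture of death and the reward across the street is exactly as good as staying put, i.e., some probability of death is small enough to accept.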
This sounds almost exactly like what Houshalter is arguing in the great-grandparent ("these events have such low probability that I can safely choose to ignore them"), so it can't be the axiom s/he objects to.
I could see objecting to Completeness, since in fact our preferences may be ill-defined for some choices. I don't know if rejecting this axiom could produce the desired result in Pascal's Mugging, though, and I'd half expect it to cause all sorts of trouble elsewhere.
That sounds right, actually.
That for any bet with an arbitrarily small value of p, there is a value of u high enough that I would take it.
That's not one of the axioms. In fact, none of the axioms mention u at all.