If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
In Pascal's Mugging, the problem seems to be the use of expected values, which can be highly distorted by even a single outlier.
The post led to a huge number of proposed solutions. Most of them seem pretty bad, and they don't address the underlying problem, just the specific thought experiment. Others, like bounding the utility function, are okay, but not really elegant. We don't really want to disregard high-utility futures; we just don't want them to wildly distort our decision process. But if we make decisions based on expected utility, they inevitably do.
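To make the distortion concrete, here's a toy calculation (every number below is invented for illustration): a single tiny-probability, astronomically-large-utility outcome dominates the expected value of paying the mugger.

```python
# Hypothetical payoff table for "pay the mugger $5" (all numbers invented).
p_threat_real = 1e-10        # assumed tiny probability the threat is genuine
utility_at_stake = 1e15      # assumed astronomical utility if it is
cost_of_paying = 5.0

ev_pay = p_threat_real * utility_at_stake - cost_of_paying
ev_refuse = 0.0

# The single outlier term (1e-10 * 1e15 = 1e5) swamps everything else,
# so naive expected utility says: pay.
assert ev_pay > ev_refuse
print(ev_pay)
```

However small you make the probability, the mugger can always name a utility large enough to flip the calculation, which is exactly the outlier problem described above.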
So why is it taken as a given that we should decide based on expected utility? Why not "median utility"? That is, look at the space of all possible outcomes and select the point where exactly 50% of the probability mass is better and exactly 50% is worse. Choose actions so that this median future is as good as possible.
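A minimal sketch of that rule, assuming outcomes are given as (probability, utility) pairs (the gamble itself is hypothetical):

```python
def expected(dist):
    """Expected utility of a list of (probability, utility) pairs."""
    return sum(p * u for p, u in dist)

def median_outcome(dist):
    """Utility at which 50% of the probability mass lies at or below.

    dist: list of (probability, utility) pairs whose probabilities sum to 1.
    """
    total = 0.0
    for p, u in sorted(dist, key=lambda pair: pair[1]):
        total += p
        if total >= 0.5:
            return u

# Hypothetical gamble: 80% chance of losing $1, 20% chance of winning $1,000,000.
lottery = [(0.8, -1.0), (0.2, 1_000_000.0)]
refuse = [(1.0, 0.0)]

# Expected utility is enormous (~200k), but the median outcome is a $1 loss,
# so the median rule refuses the gamble while the expected-value rule takes it.
assert expected(lottery) > expected(refuse)
assert median_outcome(lottery) < median_outcome(refuse)
```

The same code applied to a Pascal's Mugging payoff table gives the mugger zero leverage: the astronomical branch never contains the median.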
I'm not certain that this would generate consistent behavior, although you could possibly fix that by making it self-referencing: predetermine your future actions now so that they lead to the future you desire, or modify your decision-making algorithm to the same effect.
I'm more concerned that there are also weird edge cases where this doesn't line up with our intuitive decision making. It solves the outlier problem by giving outliers absolutely zero weight. If you had the chance to buy a one-dollar lottery ticket with a 20% chance of paying out millions, you would pass it up. (Although, if you expect to encounter many such opportunities in the future, you would precommit to taking them, but only up to a certain point. And this intuitively seems to me like the sort of reasoning humans use when they choose to obey expected utility calculations.) The same goes for avoiding large risks.
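The precommitment point can be made concrete. Under a hypothetical repeated version of the same lottery (a $1 ticket winning $1,000,000 with probability 0.2), the number of wins across n independent tickets is Binomial(n, 0.2), and the median of that distribution is within one of n * 0.2. So the median aggregate outcome flips from a loss to a huge gain as n grows, which is why a median-maximizer would precommit to taking repeated gambles it would refuse one at a time:

```python
import math

def median_payoff(n, p=0.2, prize=1_000_000, cost=1):
    """Sketch of the median aggregate payoff of n independent tickets.

    Uses floor(n * p) as the median win count; the true binomial median
    is within 1 of n * p, so this is an approximation.
    """
    median_wins = math.floor(n * p)
    return median_wins * prize - n * cost

# One ticket: the median outcome is a $1 loss, so the median rule refuses.
assert median_payoff(1) == -1
# Ten tickets: the median outcome is a large win, so precommitting pays.
assert median_payoff(10) == 1_999_990
```

(For n = 1, floor(n * p) = 0 is exactly the binomial median, since losing has probability 0.8.)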
But not all is lost: there wasn't any a priori reason to believe expected utility was the ideal human decision algorithm either. There are infinitely many possible algorithms for converting a distribution into a single value. Granted, most of them aren't elegant like these two, but who says humans are?
We should expect this from evolution. Not just because evolution is messy, but because any creature that actually followed expected utility calculations in extreme cases would almost certainly die. The best strategy is to follow them in everyday circumstances but break from them in the extremes.
The point is just that the utility function isn't the only thing we need to worry about. I think that not paying the mugger, or not worshiping the Christian God, are perfectly valid options, even if you really do have an unbounded utility function and non-balancing priors. And most likely we will be fine if we do that.
This might sound silly, but it's deeper than it looks: the reason we use the expected value of utility (i.e. the mean) to rank a set of gambles is that utility is defined as the thing whose expected value you maximize.
The nice thing about VNM utility is that it's mathematically consistent: we can't come up with a scenario where VNM utility generates silly outputs from sensible inputs. Of course we can give VNM silly inputs and get silly outputs back, scenarios like ...