
Slider comments on Open thread, Mar. 2 - Mar. 8, 2015 - Less Wrong Discussion

4 Post author: MrMind 02 March 2015 08:19AM


Comment author: Houshalter 02 March 2015 11:00:47AM 0 points [-]

In Pascal's Mugging, the problem seems to be the use of expected values, which are highly distorted by even a single outlier.

The post led to a huge number of proposed solutions. Most of them seem pretty bad, and don't even address the problem itself, just the specific thought experiment. Others, like bounding the utility function, are okay, but not really elegant. We don't really want to disregard high-utility futures; we just don't want them to highly distort our decision process. But if we make decisions based on expected utility, they inevitably do.

So why is it taken as a given that we should decide based on expected utility? Why not "median utility"? That is, look at the space of all possible outcomes and select the point where exactly 50% of them are better and exactly 50% are worse. Choose actions so that this median future is as good as possible.
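To make the contrast concrete, here's a minimal Python sketch. The numbers are purely hypothetical (a $5 cost and one astronomical payoff standing in for the mugger's offer); the point is only how the two summary statistics react to a single outlier:

```python
import statistics

# Hypothetical outcome distribution for paying a Pascal's mugger:
# a million equally likely outcomes, almost all losing $5,
# plus one astronomically good outlier.
outcomes = [-5.0] * 999_999 + [3e12]

mean_utility = statistics.fmean(outcomes)
median_utility = statistics.median(outcomes)

print(mean_utility)    # dominated by the single outlier: roughly 3 million
print(median_utility)  # -5.0: the outlier gets zero weight
```

The mean is dragged up by the one-in-a-million payoff, so an expected-utility maximizer pays the mugger; the median ignores it entirely.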

I'm not certain that this would generate consistent behavior, although you could possibly fix that by making it self-referencing: predetermine your future actions now so that they lead to the future you desire, or modify your decision-making algorithm to the same effect.

I'm more concerned that there are also weird edge cases where this doesn't line up with our actual decision making. It solves the outlier problem by giving outliers absolutely zero weight. If you could buy a one-dollar lottery ticket with a 20% chance of winning millions, you would pass it up. (Although, if you expect to encounter many such opportunities in the future, you would precommit to taking them, but only up to a certain point. This intuitively seems like the sort of reasoning humans use when they choose to obey expected utility calculations.) The same goes for avoiding large risks.
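The repeated-play intuition can be checked exactly. Using the hypothetical numbers above ($1 tickets, a 20% win probability, a $1,000,000 prize), a sketch of the median payoff as a function of how many tickets you precommit to buying:

```python
from math import comb

p, prize, cost = 0.2, 1_000_000, 1

def median_payoff(n_tickets):
    """Exact median net payoff of buying n_tickets independent tickets."""
    # Median number of wins m: smallest m with P(wins <= m) >= 0.5,
    # computed from the binomial CDF.
    cdf = 0.0
    for m in range(n_tickets + 1):
        cdf += comb(n_tickets, m) * p**m * (1 - p)**(n_tickets - m)
        if cdf >= 0.5:
            return m * prize - n_tickets * cost

print(median_payoff(1))  # -1: the median outcome of one ticket is a loss
print(median_payoff(4))  # 999996: with four tickets the median outcome includes a win
```

A median-utility agent refuses each ticket in isolation, but once it bundles four or more plays into one decision, the median future contains a win, so the precommitted bundle looks good.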

But not all is lost: there wasn't any a priori reason to believe expected utility was the ideal human decision algorithm either. There are infinitely many possible algorithms for converting a distribution into a single value. Granted, most of them aren't as elegant as these, but who says humans are?

We should expect this from evolution. Not just because evolution is messy, but because any creature that actually followed expected utility calculations in extreme cases would almost certainly die. The best strategy is to follow them in everyday circumstances but break from them in the extremes.

The point is just that the utility function isn't the only thing we need to worry about. I think that not paying the mugger, and not worshiping the Christian God, are perfectly valid options, even if you really do have an unbounded utility function and non-balancing priors. And most likely we will be fine if we do that.

Comment author: Slider 02 March 2015 02:58:29PM 1 point [-]

There is reason to believe that "expected number of reproductions" is more aligned with natural selection than most other candidates. However, organisms can't directly decide to prosper; they have to do it via specific means. That is why a surrogate is expected. You can't say that utility maximization would be a bad surrogate, as it is almost by definition the best surrogate. Now, that doesn't mean that what your cognitive ritual calls utility corresponds to actual utility, but that doesn't destroy the concept.

Comment author: Houshalter 02 March 2015 07:30:55PM 0 points [-]

In an infinite world, expected reproductions would be a good thing to maximize. An organism that had 3^^^^3 babies would vastly increase the spread of its genes, so it would be worth taking very, very low probability bets. But in a finite world all such bets will lose in the vast majority of worlds, leaving behind only the organisms which don't take them.
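A toy simulation of this point, with entirely made-up parameters: a "gambler" lineage takes a tiny-probability bet on an enormous brood, while a "steady" lineage always has two offspring. The gambler's expected offspring count is far higher, yet across sampled finite worlds it almost always ends up with zero:

```python
import random

random.seed(0)
WORLDS = 10_000
P_JACKPOT = 1e-6   # gambler: tiny chance of an enormous brood
JACKPOT = 10**9    # stands in for 3^^^^3, which fits in no finite world
STEADY = 2         # steady lineage always has 2 offspring

gambler_wins = 0
for _ in range(WORLDS):
    gambler = JACKPOT if random.random() < P_JACKPOT else 0
    if gambler > STEADY:
        gambler_wins += 1

# Expected offspring: gambler 1e-6 * 1e9 = 1000, versus 2 for steady.
# Yet in nearly every sampled world the gambler lineage dies out.
print(f"gambler outreproduces steady in {gambler_wins} of {WORLDS} worlds")
```

Selection acts world by world, not on the cross-world average, which is why the expected-value-maximizing bet is bred out.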

Comment author: Lumifer 02 March 2015 07:41:05PM *  1 point [-]

An organism that had 3^^^^3 babies would vastly increase the spread of its genes

Not quite: such an organism is likely to devastate its ecosystem in one generation and die out soon after.

Comment author: Slider 04 March 2015 03:05:02PM 0 points [-]

A reason why any amount of sustainable growth is preferable to a large one-shot.

Comment author: Slider 04 March 2015 03:25:16PM 0 points [-]

Your argument seems to use the expected number of copies to argue in favour of forgetting about the expected number of copies. In a way this is illustrative: an organism that only cares about sex but not about defence is more naive than one that sometimes forgoes sex to meet defence needs. But in a way the defence option provides for more copies. In this sense sex isn't choosing to make more copies; it is only one strategy path to that goal, and one that might fail.

Arguing about finiteness is like claiming to know the maximum size of the bets the universe can offer. But how can one be sure about that limit? There is also an argument that a species that has lived for a finite time will have only a finite amount of evidence, and thus a limit on the certainty it can achieve. Some propositions might exceed this limit. However, using any probability analysis to tune your behaviour toward such propositions would be arbitrary. That is, there is no way to calculate unexpected utility, and expected utility doesn't take a stance on the grounds on which you expect that utility to take place.