Vaniver comments on Open thread, Mar. 2 - Mar. 8, 2015 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Where "right" is defined as "maximizing expected utility", then yes. It's just a tautology, "maximizing expected utility maximizes expected utility".
My point is that if you actually asked the average person, even if you explained all this to them, they would still not agree that it was the right decision.
There is no law written into the universe that says you have to maximize expected utility. I don't think that's what humans really want. If we choose to follow it, in many situations it will lead to undesirable outcomes. And it's quite possible that those situations are actually common.
It may mean life becomes more complicated than making simple EU calculations, but you can still be perfectly consistent (see further down).
You could express it as a limit trivially (e.g. a hypothesis that in heaven you will collect 3^^^3 utilons per second for an unending amount of time).
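Spelled out as a limit (assuming, as my own simplification, that utility is linear in time spent collecting utilons):

$$U(T) = (3\uparrow\uparrow\uparrow 3)\cdot T, \qquad \lim_{T \to \infty} U(T) = \infty,$$

so any nonzero prior $p$ on the heaven hypothesis makes the expected-utility contribution $p \cdot U(T)$ diverge as well, swamping every finite consideration.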
Sounds reasonable, but it breaks down in extreme cases, where you end up spending almost all of your probability mass in exchange for a single good future with arbitrarily low probability.
Here's a thought experiment. Omega offers you tickets, each worth 2 extra lifetimes, in exchange for a 1% chance of dying each time you buy one. If you maximize expected utility, you are forced to just keep buying tickets until you finally die.
Maybe you object that you discount extra years of life by some function, so just modify the thought experiment so the reward increases factorially per ticket bought, or something like that.
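To make the modified version concrete, here's a small sketch. The factorial reward schedule and the convention that dying is worth zero utility are my own illustrative assumptions; the point is just that a myopic expected-utility maximizer with utility linear in lifespan judges every next ticket worth buying, even as its survival probability heads toward zero.

```python
import math

P_DEATH = 0.01  # chance of dying on each ticket purchase


def reward(k):
    """Extra lifetimes granted by the k-th ticket (factorial growth, per the
    modification above; the exact schedule is an illustrative assumption)."""
    return math.factorial(k)


def eu_maximizer_keeps_buying(max_tickets=20):
    """Myopic check: at each step, compare the expected utility of buying one
    more ticket against walking away with the lifespan accumulated so far."""
    life = 1.0     # remaining lifetimes, conditional on having survived so far
    p_alive = 1.0  # probability of having survived every purchase so far
    for k in range(1, max_tickets + 1):
        eu_buy = (1 - P_DEATH) * (life + reward(k))  # dying counts as 0 utility
        if eu_buy <= life:
            print(f"EU says stop at ticket {k}")
            return
        life += reward(k)
        p_alive *= 1 - P_DEATH
        print(f"ticket {k:2d}: EU says buy; P(alive) = {p_alive:.3f}")
    print("never stops within the horizon; P(alive) -> 0 as purchases continue")


if __name__ == "__main__":
    eu_maximizer_keeps_buying()
```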
Fortunately we don't have to deal with these situations much, because we happen to live in a universe where there aren't powerful agents offering us very-high-utility lotteries. But these situations occur all the time once you deal with hypotheses instead of lotteries. The only reason we don't notice is that we ignore, or refuse to assign probability estimates to, very unlikely hypotheses. An AI might not, and so it's very important to consider this issue.
My method isn't vulnerable to money pumps, and neither is an infinite number of other algorithms in the same class. See my comment here for details.
You don't even need the stuff I wrote about predetermining actions; that just minimizes regret. Even a naive implementation of expected median utility should not be money-pumpable.
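For contrast, here's a rough sketch of what I mean by that, under the interpretation (mine) that "expected median utility" ranks a policy by the median of its outcome-utility distribution, using the same illustrative ticket payoffs as above. The expected-utility ranking keeps climbing with more tickets, while the median ranking drops to zero once the cumulative death probability passes 50%, around 69 tickets.

```python
import math


def outcome_distribution(n_tickets, reward=math.factorial):
    """Outcomes of committing in advance to buy n tickets, as (probability,
    utility) pairs.  Dying is treated as utility 0; surviving yields the
    original lifetime plus every ticket's payoff."""
    p_survive = 0.99 ** n_tickets
    payoff = 1 + sum(reward(k) for k in range(1, n_tickets + 1))
    return [(1 - p_survive, 0.0), (p_survive, payoff)]


def median_utility(dist):
    """Median of a discrete (probability, utility) distribution."""
    total = 0.0
    for p, u in sorted(dist, key=lambda pu: pu[1]):
        total += p
        if total >= 0.5:
            return u
    return max(u for _, u in dist)


def expected_utility(dist):
    return sum(p * u for p, u in dist)


# Expected utility keeps rising with more tickets (factorial payoffs swamp the
# shrinking survival probability), but the median falls to 0 once the chance
# of dying crosses 50%, which happens at 69 tickets (0.99**69 < 0.5).
for n in (0, 10, 68, 69, 100):
    d = outcome_distribution(n)
    print(n, median_utility(d), expected_utility(d))
```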
The method by which you assign probabilities should be unrelated to the method by which you assign utilities to outcomes. That is, you can't just say you don't like the outcome EU gives you and so assign it a lower probability; that's a horrible violation of Bayesian principles.
I don't know what the correct method of assigning probabilities is, but even if you discount complex hypotheses factorially or something, you still get the same problem.
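For instance (my own back-of-the-envelope arithmetic, just to illustrate): even with a factorial penalty $P(H_n) \propto 1/n!$ on hypotheses of complexity $n$, a hypothesis statable in, say, $n = 100$ symbols that promises $3\uparrow\uparrow\uparrow 3$ utilons still contributes on the order of $3\uparrow\uparrow\uparrow 3 / 100!$ to the expected utility. Since $100! \approx 10^{158}$ while $3\uparrow\uparrow\uparrow 3$ is a power tower of 3s trillions of levels high, the penalized term still dwarfs every ordinary consideration.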
I certainly think these scenarios have reasonable prior probability. God could exist, we could be in the Matrix, etc. I give them such low probability that I don't typically think about them, but for this issue that is irrelevant.
This suggests that buying tickets takes finite time per ticket, and that the offer is perpetually open. It seems like you could get a solid win out of this by living your life, buying one ticket every time you start running out of life. You keep as much of your probability mass alive as possible for as long as possible, and your probability of being alive at any given time after the end of the first "lifetime" is greater than it would've been if you hadn't bought tickets. Yeah, Omega has to follow you around while you go about your business, but that's no more obnoxious than saying you have to stand next to Omega wasting decades on mashing the ticket-buying button.
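A quick sanity check of that strategy (my framing, using the original terms of a 1% death risk per ticket and 2 extra lifetimes per ticket): survival probability decays geometrically in the number of tickets bought so far, but it stays positive at every finite time, whereas never buying gives zero probability of being alive past the first lifetime.

```python
import math


def p_alive(t, ticket_payoff=2.0, p_death=0.01):
    """Probability of being alive at time t (in lifetimes) under the
    buy-as-needed strategy: ticket k is bought just as lifetime
    1 + (k - 1) * ticket_payoff runs out, so being alive at time t means
    having survived every purchase made before t.  Never buying instead
    gives probability 0 for any t > 1."""
    if t <= 1.0:
        return 1.0  # still living off the original lifetime
    tickets_bought = math.ceil((t - 1.0) / ticket_payoff)
    return (1.0 - p_death) ** tickets_bought


for t in (0.5, 1.5, 10.0, 100.0, 1000.0):
    print(t, p_alive(t))
```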
OK, change it so the ticket booth closes if you leave.