
Epictetus comments on Open thread, Mar. 2 - Mar. 8, 2015 - Less Wrong Discussion

4 Post author: MrMind 02 March 2015 08:19AM




Comment author: Houshalter 03 March 2015 03:21:33AM 0 points

I don't agree. Utility is a separate concept from expected value maximization. Utility is a way of ordering and comparing different outcomes based on how desirable they are. You can say that one outcome is more desirable than another, or even quantify how many times more desirable it is. This is a useful and general concept.

Expected utility maximization does have some nice properties, such as being completely consistent. However, I argued above that consistency isn't a necessary property. It adds complexity, sure, but if you self-modify your decision-making algorithm or precommit to your actions, you can force your future self to be consistent with your present self's desires.

Expected utility maximization is perfectly rational as the number of "bets" you take goes to infinity: in the limit, the wins and losses average out, so almost any agent would choose to follow EU regardless of its decision-making algorithm. But with a finite number of bets, it's much less obvious that this is the most desirable strategy.
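The long-run claim is just the law of large numbers; here's a minimal simulation sketch with made-up numbers (a bet paying +1 with probability 0.6 and -1 otherwise, so EV = +0.2). Over many repetitions the average payoff per bet converges on the expected value, while a handful of bets can land almost anywhere:

```python
import random

def average_payoff(p_win, win, lose, n_bets, seed=0):
    """Average payoff per bet over n_bets repetitions of the same gamble."""
    rng = random.Random(seed)
    total = sum(win if rng.random() < p_win else lose for _ in range(n_bets))
    return total / n_bets

# A bet paying +1 with probability 0.6 and -1 otherwise: EV = +0.2.
ev = 0.6 * 1 + 0.4 * (-1)

few = average_payoff(0.6, 1, -1, n_bets=10)        # can be far from 0.2
many = average_payoff(0.6, 1, -1, n_bets=100_000)  # very close to 0.2
```

With 100,000 bets the sample average is pinned near the expectation; with 10 bets it isn't, which is exactly the finite-bets worry.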

That means we can't come up with a scenario where VNM utility generates silly outputs with sensible inputs. Of course we can give VNM silly inputs and get silly outputs back--scenarios like Pascal's Mugging are the equivalent of "suppose something really weird happens; wouldn't that be weird?" to which the answer is "well, yes."

Pascal's Mugging isn't "weird"; it's perfectly typical. There are probably an infinite number of Pascal's-mugging-type situations: hypotheses with exceedingly low probability but enormous utility.
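To see how such a hypothesis dominates the calculation, here's a sketch with entirely hypothetical numbers: a mugging-style hypothesis at probability 10^-20 with utility 10^30 outweighs a near-certain mundane outcome by eight orders of magnitude.

```python
# All numbers are made up, chosen only to illustrate the shape of the problem.
mundane = {"probability": 0.99, "utility": 100}     # likely, modest payoff
mugging = {"probability": 1e-20, "utility": 1e30}   # absurdly unlikely, astronomical payoff

def expected_utility(h):
    return h["probability"] * h["utility"]

# The mugging hypothesis wins the EU comparison despite its negligible
# probability: 1e10 versus 99.
best = max([mundane, mugging], key=expected_utility)
```

No matter how small the probability, some claimed utility is large enough to win the comparison, which is why these hypotheses can crowd out ordinary decisions.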

If we built an AI today based on pure expected utility maximization, it would most likely fail spectacularly. These low-probability hypotheses would come to totally dominate its decisions. Perhaps it would start to worship various gods, practice rituals, and obey superstitions. Or do something far more absurd that we haven't even thought of.

And if you really believe in EU, you can't say that this behavior is wrong or undesirable. This is what you should be doing, if you could, and you are losing a huge amount of EU by not doing it. You should want, more than anything in existence, the ability to calculate these hypotheses exactly so you can collect that EU.

I don't want that, though. I want a decision rule such that I am very likely to end up with a good outcome. Not one where I will most likely end up with a very suboptimal outcome, with only an infinitesimal chance of winning the infinite-utility lottery.
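One way to formalize "very likely to end up with a good outcome" (a sketch of my own, with made-up lotteries, not anything standard) is to rank gambles by a quantile of their payoff distribution instead of by the mean. The two rules then disagree exactly as described:

```python
def mean_payoff(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

def quantile_payoff(lottery, q):
    """Smallest payoff x with P(payoff <= x) >= q (q=0.5 is the median)."""
    ordered = sorted(lottery, key=lambda pair: pair[1])
    acc = 0.0
    for p, x in ordered:
        acc += p
        if acc >= q:
            return x
    return ordered[-1][1]

# A lottery-ticket gamble: almost surely nothing, tiny chance of a huge prize.
ticket = [(0.999999, 0), (0.000001, 10_000_000)]
# A safe gamble: a guaranteed modest payoff.
safe = [(1.0, 5)]

# Mean maximization prefers the ticket (10 vs. 5); median maximization
# prefers the safe payoff (5 vs. 0).
```

The median rule picks the outcome you actually, probably, get; the mean rule is happy to trade near-certain mediocrity for an infinitesimal shot at the jackpot.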

Comment author: Epictetus 04 March 2015 05:09:24AM 3 points

Expected utility is convenient and makes for a nice mathematical theory.

It also makes a lot of assumptions. One assumes that the expectation does, in fact, exist. It need not. For example, in a game where two players toss a fair coin, we expect that in the long run the number of heads will equal the number of tails at some point; it turns out that the expected waiting time for this to happen is infinite. Then there's the classic St. Petersburg paradox.
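The St. Petersburg game makes the nonexistent expectation concrete: it pays 2^k if the first head lands on toss k (probability 2^-k), so each possible stopping round contributes exactly 1 to the expectation. A quick sketch of the capped sum:

```python
def st_petersburg_capped_ev(max_rounds):
    """Expected payoff of the St. Petersburg game capped at max_rounds tosses.

    Pays 2**k if the first head appears on toss k (probability 2**-k);
    each term contributes 2**-k * 2**k = 1, so the capped EV equals
    max_rounds and grows without bound as the cap is lifted.
    """
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_rounds + 1))
```

Every finite cap gives a finite answer, but no finite answer is *the* answer: the full series diverges, so the expectation simply does not exist as a finite number.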

There are examples of "fair" bets (i.e., the expected gain is zero) that are nevertheless unfavorable, in the sense that you're almost certain to sustain a net loss over time.
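A standard instance of such a bet (a sketch with hypothetical numbers, in the spirit of the Kelly-criterion literature): each round multiplies your wealth by 1.6 or 0.4 with equal probability. The expected multiplier is exactly 1, so the bet is "fair", but the expected log-growth rate is negative, so repeated play drives wealth toward zero:

```python
import math

up, down, p = 1.6, 0.4, 0.5

# Per-round expected multiplier is exactly 1: the bet is "fair".
expected_multiplier = p * up + (1 - p) * down

# But the expected log-growth rate is negative, so by the law of large
# numbers log-wealth drifts downward and wealth -> 0 almost surely.
expected_log_growth = p * math.log(up) + (1 - p) * math.log(down)

# Typical outcome after n rounds (half ups, half downs): (1.6 * 0.4)**50,
# i.e. 0.64**50, which is vanishingly close to zero.
n = 100
typical_wealth = (up ** (n // 2)) * (down ** (n // 2))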

Expected utility is a model of reality that does a good job in many circumstances but has some key drawbacks: naive application can lead to unrealistic decisions. The map is not the territory, after all.