All of Mathilde's Comments + Replies

You're absolutely right. I was starting to get at this idea from another of the comments, but you've laid out where I've gone wrong very clearly. Thank you.

  1. Thank you, this is good to know. I'll have to think about this some more.

  2. Hm, I was working under the assumption that the "utility" with paperclips was just the number of paperclips. A universe with X - 10n + 3^^^^3 paperclips is better than a universe with just X paperclips by 3^^^^3 - 10n. Is this not a proper utility function?

  3. The casino version evolved from repeated alterations to Pascal's Mugging, so it retained the 3^^^^3 from there. I had written a paragraph where I mentioned that for one-shot problems, even a more realistic probability could

... (read more)

Very interesting, thank you!

I think "maximising" still makes sense in one-shot problems. 2>1 and 1000>1, but it's also the case that 1000>2, even without expected utility. The way I see it, EU is a method of comparing choices based on their average utility, but the "average" turns out to be a less useful metric when you only have one chance.

So for cases where an outcome is not a constant number of paperclips, we need more rules than just a specification of what the object of attention is. A paperclip maximiser is actually underspecified.

If this is true, it would imp

... (read more)
Slider
Many times, opinions about how to handle uncertainty get baked into the utility function. That is, a standard naive construction is to say "be risk neutral" and value paperclips linearly in their amount. But I could imagine a policy for which more paperclips is always better, yet from a default position of 100% 2 paperclips it wouldn't choose an option of 0.1% 1 paperclip, 49.9% 2 paperclips and 50% 3 paperclips. One can construct a "risk averse" function which can then simply be optimised. But does that really mean the new function is not a paperclip maximisation function?
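A minimal sketch of that construction (the particular risk-averse function below is an arbitrary choice, not something specified above): u is strictly increasing in paperclips, yet an expected-utility maximiser using it keeps the certain 2 paperclips rather than take the 0.1%/49.9%/50% option, whereas a risk-neutral maximiser would take it (the option's expected paperclip count is 2.499 > 2).

```python
# Sketch: a strictly increasing but extremely risk-averse paperclip utility.
# Gains shrink so fast that losing a paperclip hurts far more than gaining
# one helps.
def u(n: int) -> float:
    return -(1000.0 ** -n)        # u(1) < u(2) < u(3) < ...

certain = u(2)                                      # 100% chance of 2 paperclips
gamble = 0.001 * u(1) + 0.499 * u(2) + 0.5 * u(3)   # 0.1% / 49.9% / 50% option

print(certain > gamble)   # True: this maximiser declines the gamble
```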

Thank you for your response!

Are you rejecting Pascal’s mugging because of the prospect of relying on uncertain models that you do not expect to confirm?

My intuition is that in a one-shot problem, gambling everything on an extremely low probability event is a bad idea, even when the reward from that low probability event is very high, because you are effectively certain to lose. This is the basis for me not paying up in Pascal's Mugging and in the casino problem in the post.

I'm trying to keep my reasoning simple, so in my examples I always assume that t

... (read more)
Gurkenglas
In a market of bettors who draw the line of how much risk to take at different points, the early game will be dominated by the most risk-taking folks, and as the game grows older, the line chosen by the current winners moves. Perhaps your intuition is merely the product of evolution playing this game for as long as it took for the line to reach its current point?
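One way to picture that claim is a toy simulation (all parameters below are made up for illustration, not anything specified in the comment): a population of bettors each repeatedly stakes a fixed fraction of their wealth on a favourable even-money coin flip. After a handful of rounds the single richest bettor is almost always one of the most aggressive; after many rounds the leaderboard is headed by a much more moderate fraction.

```python
# Toy simulation of a "market of bettors" with different risk lines.
import random

random.seed(0)
P_WIN = 0.6                                 # favourable even-money coin flip
FRACTIONS = [0.1, 0.25, 0.5, 0.75, 0.99]    # where each "line" of risk is drawn
BETTORS_PER_LINE = 1000

def run(rounds):
    """Return the risk fraction of the single richest bettor after `rounds`."""
    best_fraction, best_wealth = None, -1.0
    for f in FRACTIONS:
        for _ in range(BETTORS_PER_LINE):
            wealth = 1.0
            for _ in range(rounds):
                stake = wealth * f
                wealth += stake if random.random() < P_WIN else -stake
            if wealth > best_wealth:
                best_fraction, best_wealth = f, wealth
    return best_fraction

print("richest bettor after    5 rounds staked", run(5))     # usually 0.99
print("richest bettor after 1000 rounds staked", run(1000))  # usually 0.25
```

As the game lengthens, the winning line moves from the near-certain-ruin end toward something close to the Kelly fraction.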