Liron comments on VNM expected utility theory: uses, abuses, and interpretation - Less Wrong

21 points · Post author: Academian 17 April 2010 08:23PM




Comment author: Academian 18 April 2010 06:48:31PM · 1 point

better to just flap its arms so that the butterfly effect will increase the chance of A by 1/googol

Heh, yeah, that's roughly how I feel when noting the Archimedeanity of my values. But then I wonder... maybe I wouldn't "flap my arms just so" to increase P(A), because I'm running on hostile hardware that makes my belief probabilities coarse-grained... i.e., maybe I'm forced to treat 1/googol like 0, and open the box with B in it. I certainly feel that way.
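The coarse-graining point can be made concrete with floating-point arithmetic (my own illustration, not from the comment): a double-precision probability literally cannot register a shift on the order of 1/googol.

```python
# Illustration of coarse-grained belief probabilities: in IEEE 754
# double precision, adding a ~1/googol shift to a probability leaves
# it bit-for-bit unchanged, so the agent is forced to treat it as 0.
p = 0.5
shift = 1e-100        # roughly "1/googol" in spirit
assert p + shift == p # the increase rounds away entirely
```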

Such reflection leads me to think that humans aren't precise enough for the difference between VNM utility and Hausner utility to manifest decisively. When would a human ever be convinced that EU_big(X) precisely equals EU_big(Y), so as to start optimizing EU_small? It seems like the difference between VNM and Hausner utility arises only in a measure-0 class of scenarios that humans couldn't practically detect anyway. ETA: except maybe when there's a time limit...

This is actually one reason I posted on Hausner utility: if you like it, then note that it's 0% likely to give you different answers from VNM utility, and then just use VNM, because you're not precise enough to know the difference :)

Comment author: Liron 18 April 2010 09:04:00PM · 0 points

It's not really an issue of impracticality. It's just that any time you have a higher-level class of utility, the lower-level classes of utility stop being relevant to your decisions, no matter how precise the algorithm is. That's why I say it's extra complexity with no optimization benefit. Since the extra structure doesn't even map better to my intuition about preference, I just Occam-shave it away.

Comment author: Academian 18 April 2010 09:43:13PM · 0 points

any time you have a higher-level class of utility, the lower-level classes of utility stop being relevant to your decisions

Wait... surely, if you lexicographically value (brightness, redness) of a light, and somehow manage to be in a scenario where you can't make the light any brighter, and somehow manage to know that, then the redness value becomes relevant.

What I mean is that the environment itself makes such precise situations rare (a non-practical issue), and an imprecise algorithm makes it hard to detect when, if ever, they occur (a practical issue).
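The lexicographic preference in the example above can be sketched in a few lines (a hypothetical illustration; the light tuples and values are mine): Python tuples already compare lexicographically, so redness only ever breaks exact ties in brightness.

```python
# Sketch of lexicographic (Hausner-style) preferences over
# (brightness, redness) pairs. Tuple comparison in Python is
# lexicographic: the second component matters only on exact ties.

def preferred(light_a, light_b):
    """Return the preferred light under lexicographic (brightness, redness)."""
    return max(light_a, light_b)

dim_red = (0.5, 1.0)      # dimmer, maximally red
bright_blue = (0.9, 0.0)  # brighter, not red at all
bright_red = (0.9, 1.0)   # equally bright as bright_blue, and red

# Brightness dominates: any brightness gain outweighs any redness gain.
assert preferred(dim_red, bright_blue) == bright_blue
# Only when brightness is exactly tied does redness become decision-relevant.
assert preferred(bright_blue, bright_red) == bright_red
```

This also shows why such situations are rare in practice: the second coordinate matters only on a measure-0 set of exact ties.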