wedrifid comments on True numbers and fake numbers - Less Wrong

19 Post author: cousin_it 06 February 2014 12:29PM


Comment author: wedrifid 08 February 2014 04:03:24AM 0 points

"Accepting the VNM axioms requires you to assume that everything can be reduced to a unitary "utility"." (Which is to say, if you accept the axioms, you will be forced to conclude this; and also, assuming this leads you to the VNM axioms.)

With the minor erratum that 'assume' would best be replaced with 'conclude', 'believe', or 'accept', this revision seems accurate. For someone taking your position, the most interesting thing about the VNM theorem is that it prompts you to work out just which of the axioms you reject. One man's modus ponens is another man's modus tollens. The theorem doesn't care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.

If you find that reducing everything to a unitary utility then fails to describe your preferences over outcomes, you have a problem.

Entirely agree. Humans, for example, are not remotely VNM coherent.

This line ... is indeed a misstatement (as it stands it is indeed incorrect for the reasons you state).

I have retracted my criticism via edit. One misstatement does not unfamiliarity make, so even prior to your revision I suspect my criticism was overstated. Pardon me.

Comment author: SaidAchmiz 08 February 2014 04:30:10AM 1 point

Thank you, and no offense taken.

Entirely agree. Humans, for example, are not remotely VNM coherent.

Right. And the thing is, that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)

One man's modus ponens is another man's modus tollens. The theory doesn't care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.

Indeed. Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don't quote me on that; I have to get my copy of Rational Choice in an Uncertain World out of storage (as I recall, said book explains the implications of the VNM axioms quite well, and I distinctly recall that my objections to VNM arose when reading it).

Comment author: wedrifid 08 February 2014 04:56:49AM -1 points

Right. And the thing is, that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)

I tentatively agree. The decision system I tend toward when modelling an idealised me contains an extra level of abstraction, generalising the VNM axioms and utility-maximisation decision theory to something that does allow the kind of system you are advocating (and which I don't consider intrinsically irrational).

Simply put: if instead of having preferences over world-histories you have preferences over probability distributions of world-histories, then doing the same math and reasoning gives you an entirely different but still clearly defined and abstractly consequentialist way of interacting with lotteries. The agent is then doing something other than maximising the mean of utility; it could, in effect, be maximising the mean subject to satisficing on the probability of utility falling below some threshold.

It's how an inherently and coherently risk-averse agent (and similar non-mean optimisers) would work.
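To make the contrast concrete, here is a minimal sketch (all names, thresholds, and example lotteries are illustrative assumptions, not anything from the original discussion) of a standard expected-utility maximiser alongside an agent that maximises mean utility subject to satisficing on the probability of a bad outcome:

```python
# Hypothetical sketch: lotteries are lists of (probability, utility) pairs.
# The thresholds `floor` and `max_risk` are illustrative assumptions.

def expected_utility(lottery):
    """Mean utility of a lottery given as [(probability, utility), ...]."""
    return sum(p * u for p, u in lottery)

def prob_below(lottery, floor):
    """Probability that the lottery's utility falls below `floor`."""
    return sum(p for p, u in lottery if u < floor)

def pick_vnm(lotteries):
    """Classic VNM-style agent: choose the lottery with the highest mean."""
    return max(lotteries, key=expected_utility)

def pick_risk_averse(lotteries, floor=0.0, max_risk=0.1):
    """Maximise the mean among lotteries whose chance of utility below
    `floor` is at most `max_risk`; if none qualify, minimise that chance."""
    acceptable = [l for l in lotteries if prob_below(l, floor) <= max_risk]
    if acceptable:
        return max(acceptable, key=expected_utility)
    return min(lotteries, key=lambda l: prob_below(l, floor))

safe_bet = [(1.0, 10.0)]                  # certain modest utility, mean 10
gamble = [(0.8, 50.0), (0.2, -100.0)]     # mean 20, but 20% chance of disaster

print(pick_vnm([safe_bet, gamble]) is gamble)            # True: 20 > 10
print(pick_risk_averse([safe_bet, gamble]) is safe_bet)  # True: 20% risk > 10% cap
```

Both agents are coherent in the sense discussed above; they simply optimise different functionals of the same distribution, so the gamble's higher mean wins for one agent while its 20% chance of disaster disqualifies it for the other.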

Such agents are coherent. It doesn't matter much whether we call them irrational or not. If that is what they want to do then so be it.

Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don't quote me on that

That does seem to be the most likely axiom to reject. At least, that has been my intuition when I've considered how plausible non-'expected'-utility maximisers seem to think.