nyan_sandwich comments on We Don't Have a Utility Function - Less Wrong
Stanovich's paper on why humans are apparently worse at following the VNM axioms than some animals has some interesting things to say, although I don't like the way it says them. I quit halfway through the paper out of frustration, but what I got out of the paper (which may not be what the paper itself was trying to say) is more or less the following: humans model the world at different levels of complexity at different times, and at each of those levels different considerations come into play for making decisions. An agent behaving in this way can appear to be behaving VNM-irrationally when really it is just trying to efficiently use cognitive resources by not modeling the world at the maximum level of complexity all the time. Non-human animals may model the world at more similar levels of complexity over time, so they behave more VNM-rationally even if they have less overall optimization power than humans.
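To make that concrete, here's a toy sketch (my own, not from the paper): an agent that only pays for a detailed model of a gamble when the stakes look high enough, and otherwise uses a cheap heuristic. Seen from outside, its pattern of choices violates dominance, even though each individual choice is a sensible use of limited computation. The threshold and all the numbers are made up for illustration.

```python
# Toy illustration of level-switching: a cheap model for low stakes,
# an expensive (expected-value) model only when the stakes seem to warrant it.

def coarse_value(gamble):
    # Cheap heuristic: ignore probabilities, just look at the best outcome.
    return max(payoff for payoff, _ in gamble)

def fine_value(gamble):
    # Expensive model: full expected value.
    return sum(payoff * prob for payoff, prob in gamble)

def choose(a, b, stakes_threshold=10.0):
    # Only pay for the fine model when the coarse values look large.
    if max(coarse_value(a), coarse_value(b)) >= stakes_threshold:
        value = fine_value
    else:
        value = coarse_value
    return "A" if value(a) >= value(b) else "B"

# Gambles as lists of (payoff, probability) pairs.
small_sure  = [(4.0, 1.0)]
small_risky = [(6.0, 0.3), (0.0, 0.7)]
big_risky   = [(12.0, 0.3), (0.0, 0.7)]

print(choose(small_sure, small_risky))  # coarse model -> "B", the risky 6
print(choose(small_sure, big_risky))    # fine model kicks in -> "A", the sure 4
# The 12-gamble dominates the 6-gamble, yet it's ranked below the sure thing
# that the 6-gamble beat: an apparent VNM violation from resource-saving.
```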
A related consideration, which is more about the methodology of studies claiming to measure human irrationality, is that the problem you think a test subject is solving is not necessarily the problem they're actually solving. A well-known example is when you ask people to play a one-shot prisoner's dilemma but in their heads they're really playing the iterated prisoner's dilemma.
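For instance, a subject who mentally treats the one-shot game as repeated can rationally cooperate. A rough sketch with a standard payoff matrix (T=5, R=3, P=1, S=0), an assumed grim-trigger opponent, and a discount factor delta (all numbers are illustrative, not from any particular study):

```python
# Why "playing the iterated game in your head" changes the answer.
T, R, P, S = 5.0, 3.0, 1.0, 0.0  # temptation, reward, punishment, sucker

def discounted_payoff(defect_now, delta, rounds=1000):
    """Payoff against a grim-trigger opponent over many discounted rounds."""
    total, cooperating = 0.0, True
    for t in range(rounds):
        if defect_now and t == 0:
            total += T * delta**t     # defect against a cooperator once...
            cooperating = False
        elif cooperating:
            total += R * delta**t     # mutual cooperation
        else:
            total += P * delta**t     # ...then mutual defection forever
    return total

# One-shot logic: defection dominates (5 > 3 and 1 > 0).
# Iterated logic: with enough weight on the future, cooperation wins.
for delta in (0.1, 0.9):
    print(delta, discounted_payoff(False, delta), discounted_payoff(True, delta))
```

With delta = 0.1 defecting still comes out ahead, but with delta = 0.9 cooperating pays roughly 30 versus 14, so "irrational" cooperation in the lab is exactly what you'd expect from someone solving the iterated problem.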
And another point: an agent can have a utility function and still behave VNM-irrationally if computing the VNM-rational thing to do given its utility function takes too much time, so the agent computes some approximation of it. It's a given in practical applications of Bayesian statistics that Bayesian inference is usually intractable, so it's necessary to compute some approximation to it, e.g. using Monte Carlo methods. The human brain may be doing something similar (a possibility explored in Lieder-Griffiths-Goodman, for example).
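As a minimal sketch of that last point (illustrative only, not the Lieder-Griffiths-Goodman model itself): an agent with a fixed, perfectly well-defined utility function that can only afford a few Monte Carlo samples per decision will answer the same choice inconsistently, which looks like a preference reversal from the outside.

```python
import random

def utility(x):
    return x ** 0.5   # some fixed, risk-averse utility function

def mc_expected_utility(sample_outcome, n_samples):
    # Crude Monte Carlo estimate of E[u(outcome)] under the agent's model.
    return sum(utility(sample_outcome()) for _ in range(n_samples)) / n_samples

# Option A: a sure 50. Option B: uniform on [0, 100] (same mean, more risk).
option_a = lambda: 50.0
option_b = lambda: random.uniform(0.0, 100.0)

# Exactly computed, A is strictly better: u(50) ~ 7.07 vs E[u(B)] ~ 6.67.
# With a tiny sample budget the choice flips from run to run anyway.
choices = [mc_expected_utility(option_a, 3) > mc_expected_utility(option_b, 3)
           for _ in range(1000)]
print("picked the sure thing in", sum(choices), "of 1000 trials")
```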
(Which reminds me: we don't talk anywhere near enough about computational complexity on LW for my tastes. What's up with that? An agent can't do anything right if it can't compute what "right" means before the Sun explodes.)
Right, this is an important point that could use more discussion.
On closer inspection, a lot of the "irrationalities" are either rational in a higher-level game, or to be expected given people's inability to "feel" abstract facts they are told.
That said, the inability to properly incorporate abstract information is quite a rationality problem.
I've made this point quite a few times, here and here.
It depends; sometimes this is actually a decent way to avoid believing every piece of abstract information one is presented with.