Eugine_Nier comments on We Don't Have a Utility Function - Less Wrong

43 [deleted] 02 April 2013 03:49AM


Comment author: Qiaochu_Yuan 02 April 2013 05:37:49AM *  22 points [-]

Stanovich's paper on why humans are apparently worse at following the VNM axioms than some animals has some interesting things to say, although I don't like the way it says them. I quit halfway through the paper out of frustration, but what I got out of the paper (which may not be what the paper itself was trying to say) is more or less the following: humans model the world at different levels of complexity at different times, and at each of those levels different considerations come into play for making decisions. An agent behaving in this way can appear to be behaving VNM-irrationally when really it is just trying to efficiently use cognitive resources by not modeling the world at the maximum level of complexity all the time. Non-human animals may model the world at more similar levels of complexity over time, so they behave more VNM-rationally even if they have less overall optimization power than humans.

A related consideration, which is more about the methodology of studies claiming to measure human irrationality, is that the problem you think a test subject is solving is not necessarily the problem they're actually solving. I guess a well-known example is when you ask people to play the prisoner's dilemma but in their heads they're really playing the iterated prisoner's dilemma.

And another point: an agent can have a utility function and still behave VNM-irrationally if computing the VNM-rational thing to do given its utility function takes too much time, so the agent computes some approximation of it. It's a given in practical applications of Bayesian statistics that Bayesian inference is usually intractable, so it's necessary to compute some approximation to it, e.g. using Monte Carlo methods. The human brain may be doing something similar (a possibility explored in Lieder-Griffiths-Goodman, for example).
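The Monte Carlo point can be made concrete with a toy example (my own illustration, not from the paper): instead of computing a posterior exactly, approximate its mean by sampling from the prior and weighting by likelihood (self-normalized importance sampling). Here the exact answer is known, so you can see how close the approximation gets.

```python
import random

random.seed(0)
heads, flips = 7, 10  # observed data: 7 heads in 10 coin flips

def likelihood(theta):
    # Probability of the observed sequence given bias theta (binomial kernel)
    return theta**heads * (1 - theta)**(flips - heads)

# Draw samples from the uniform prior, weight each by its likelihood;
# the weighted average approximates the posterior mean without ever
# computing the normalizing constant.
n = 200_000
samples = [random.random() for _ in range(n)]
weights = [likelihood(t) for t in samples]
post_mean = sum(w * t for w, t in zip(weights, samples)) / sum(weights)
print(post_mean)  # the exact Beta(8, 4) posterior mean is 8/12 ≈ 0.667
```

The exact posterior here is tractable, which is what makes it a useful sanity check; the same estimator applies unchanged when the normalizing constant is not available in closed form.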

(Which reminds me: we don't talk anywhere near enough about computational complexity on LW for my tastes. What's up with that? An agent can't do anything right if it can't compute what "right" means before the Sun explodes.)

Comment author: Eugine_Nier 03 April 2013 05:55:35AM 1 point [-]

humans model the world at different levels of complexity at different times, and at each of those levels different considerations come into play for making decisions. An agent behaving in this way can appear to be behaving VNM-irrationally when really it is just trying to efficiently use cognitive resources by not modeling the world at the maximum level of complexity all the time. Non-human animals may model the world at more similar levels of complexity over time, so they behave more VNM-rationally even if they have less overall optimization power than humans.

Notice the obvious implications for the ability of super-human AIs to behave VNM-rationally.

Comment author: private_messaging 11 April 2013 07:07:49AM 1 point [-]

Which are what? The AI that is managing some sort of upload society could trade its clock time for utility.

It's no different for humans: you can either waste your time pondering whether you're being rational about how jumpy you get when you see a moving shadow that looks sort of like a sabre-toothed tiger, or you can spend it figuring out how to tie a rock to a stick; in modern times, you can ponder which is the better deal at the store, or try to invent something and make a lot of money.

Comment author: Eugine_Nier 12 April 2013 04:39:52AM 1 point [-]

The AI that is managing some sort of upload society could trade its clock time for utility.

It still has to deal with the external world.

Comment author: private_messaging 12 April 2013 05:33:39AM *  1 point [-]

But the point is, its computing time costs utility, so it can't waste that time on things that won't gain it enough utility.

If you consider a 2x1x1 cuboid to have a 1/6 probability of landing on each face, you can still be VNM-rational about that: you won't be Dutch-booked, but you'll still lose money, because the cuboid is not a fair die and you'll accept losing bets. The real world is like that; it doesn't give cookies for non-Dutch-bookability, it gives cookies for correctly predicting what is actually going to happen.
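To put toy numbers on this (the landing probabilities below are assumed for illustration, not measured): suppose a small 1x1 face of the cuboid actually comes up only 5% of the time, but you accept a bet priced as if it came up 1/6 of the time. Your beliefs are internally coherent, yet the bet has negative expected value.

```python
# Hypothetical landing probabilities for a 2x1x1 cuboid (assumed, not measured):
# the four large 2x1 faces come up more often than the two small 1x1 faces.
p_small_true = 0.05      # true chance a chosen small face lands up (assumed)
p_assumed = 1 / 6        # what you believe, treating the cuboid as a fair die

# A bet that is exactly fair under the 1/6 assumption:
# win 5 units if the chosen small face comes up, lose 1 unit otherwise.
payout_win, payout_lose = 5.0, -1.0
assert abs(p_assumed * payout_win + (1 - p_assumed) * payout_lose) < 1e-12

# Expected value under the true probabilities: coherent, but losing.
ev = p_small_true * payout_win + (1 - p_small_true) * payout_lose
print(ev)  # negative: no Dutch book against you, yet you bleed money per bet
```

The agent is un-Dutch-bookable throughout; the losses come purely from the gap between its assumed probabilities and the world's actual frequencies.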