
# Perplexed comments on Focus Your Uncertainty - Less Wrong

33 points · 05 August 2007 08:49PM


## Comments (16)


Comment author: 22 July 2010 03:31:05PM 3 points

Eliezer wrote: "...you do work out that, if some particular outcome occurs, then your utility function is logarithmic in time spent preparing the excuse." That kind of dropped out of the sky, didn't it?

Since our puzzled pundit pontificates regarding market issues, it seems likely to me that he will draw upon his undergraduate training in economics to recognize this as an allocation problem, and he will immediately begin thinking in terms of equalizing marginal returns. Or, if his undergraduate training was at one of the better schools, then he will realize that he first has to show that marginal returns are decreasing before he begins equating them. I rather doubt that he would begin fretting about defending his allocations to a committee unless his training were in some other field entirely! :)
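The equal-marginal-returns approach is easy to check numerically. A minimal sketch, assuming (purely for illustration) the logarithmic utility from the quoted passage, so that marginal utility is MU(x) = 1/x; the function name `optimal_fraction` is my own:

```python
# Sketch: equalizing probability-weighted marginal returns, assuming
# (for illustration) the logarithmic utility from the quoted passage,
# so marginal utility MU(x) = 1/x.  The pundit allocates a fraction x
# of his time to the UP story and 1 - x to DOWN.
# Equalizing p * MU(x) = (1 - p) * MU(1 - x) gives p/x = (1 - p)/(1 - x),
# whose solution is x = p.

def optimal_fraction(p, tol=1e-12):
    """Bisect on x until p/x equals (1-p)/(1-x), i.e. marginal returns are equal."""
    lo, hi = tol, 1 - tol
    while hi - lo > tol:
        x = (lo + hi) / 2
        if p / x > (1 - p) / (1 - x):   # UP still has the higher marginal return
            lo = x                       # so spend more time on UP
        else:
            hi = x
    return (lo + hi) / 2

for p in (0.2, 0.5, 0.73):
    print(p, round(optimal_fraction(p), 6))   # allocation comes out at x = p
```

With log utility the equal-marginal-return allocation reproduces exactly the "spend a fraction p of your time on the outcome with probability p" rule; under a different decreasing marginal utility the bisection would converge somewhere else.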

But even starting from an assumption of decreasing marginal utility, it is very unclear how he would guess that the utility function must be logarithmic. There are many decreasing functions. What is so special about the function MU(x)=1/x? Hmmm. Perhaps he can get some leverage by reflecting that he has already spent some amount of time "a priori" thinking about the problem even before the clock starts on his allocated 100 minutes. How much time? He can't remember. But he does have the intuition that the decreasing function giving the marginal utility of additional prep time should have the same general shape regardless of how much "a priori" time was spent before the clock began ticking. That is, he intuits that shifting the function graph along the X axis should be equivalent to scaling it along the Y axis.

Or does this intuition seem just as "out of the sky" as the "logarithmic" intuition that I am trying to avoid?

Comment author: 19 June 2011 07:13:59PM 1 point

Eliezer wrote: "...you do work out that, if some particular outcome occurs, then your utility function is logarithmic in time spent preparing the excuse." That kind of dropped out of the sky, didn't it?

The only way that I can make sense of the line you quote is to assume that the pundit already identifies "the probability that bond prices go up" with "the fraction of the 100 minutes that I ought to spend on a story explaining why bond prices went up".

For simplicity, suppose that there are only two possible outcomes, UP and DOWN. Let p be the probability of UP, where 0 < p < 1. Let U(x) be the utility of having spent 100x minutes on an explanation for an outcome, given that that outcome occurs. (So, we are assuming that the utility of spending 100x minutes on a story depends only on whether you get to use that story, not on whether the story explains UP or DOWN. In other words, it is equally easy to concoct equally good stories of either kind.) Assume that the utility function U is differentiable.

The pundit is trying to maximize the expected utility

EU(x) = U(x) p + U(1−x) (1−p).

But it is given that the pundit ought to spend 100p minutes on UP. That is, the expected utility attains its maximum when x = p. Equivalently, the utility function U must satisfy

EU′(p) = U′(p) p − U′(1−p) (1−p) = 0.

That is,

U′(p) / U′(1−p) = (1/p) / (1/(1−p)).

This equation should hold regardless of the value of p. In other words, the conditions are equivalent to saying that U is a solution to the DE

U′(x) / U′(1−x) = (1/x) / (1/(1−x)).

It's natural enough to notice that this DE holds if U′(x) = 1/x. That is, setting U(x) = ln(x) yields the desired behavior.
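This is easy to confirm numerically. A minimal sketch (the helper names `expected_utility` and `argmax_on_grid` are my own) checking that with U(x) = ln(x), the expected utility peaks at x = p:

```python
import math

# Sketch: with U(x) = ln(x), check that the expected utility
#   EU(x) = U(x) * p + U(1 - x) * (1 - p)
# attains its maximum at x = p, for several values of p.

def expected_utility(x, p):
    return p * math.log(x) + (1 - p) * math.log(1 - x)

def argmax_on_grid(p, n=100_000):
    grid = [(i + 1) / (n + 2) for i in range(n)]   # interior points of (0, 1)
    return max(grid, key=lambda x: expected_utility(x, p))

for p in (0.1, 0.37, 0.8):
    print(p, round(argmax_on_grid(p), 3))   # maximizer sits at (approximately) x = p
```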

More generally, the DE says that U′(x) = (1/x) g(x) for some function g satisfying g(x) = g(1−x). But if you are only interested in finding some model of the pundit's behavior that predicts what the pundit does for all values of p, you can set g(x) = 1.
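To illustrate that the log is not the only such model, a sketch with one other member of the family: taking g(x) = x(1−x), which satisfies g(x) = g(1−x), gives U′(x) = 1−x and hence U(x) = x − x²/2 (my own choice of g, for illustration). This utility should also put the maximizer at x = p:

```python
# Sketch: the general solution is U'(x) = (1/x) * g(x) with g(x) = g(1 - x).
# Choosing, for illustration, g(x) = x * (1 - x) gives U'(x) = 1 - x,
# i.e. U(x) = x - x**2 / 2 -- a non-logarithmic utility that should
# nevertheless make the expected utility peak at x = p.

def U(x):
    return x - x**2 / 2

def expected_utility(x, p):
    return p * U(x) + (1 - p) * U(1 - x)

def argmax_on_grid(p, n=100_000):
    grid = [(i + 1) / (n + 2) for i in range(n)]   # interior points of (0, 1)
    return max(grid, key=lambda x: expected_utility(x, p))

for p in (0.25, 0.6):
    print(p, round(argmax_on_grid(p), 3))   # again maximized near x = p
```

Analytically: EU′(x) = p(1−x) − (1−p)x = p − x, which vanishes exactly at x = p, so the quadratic utility predicts the same allocation behavior as the logarithm.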