
Vaniver comments on Value of Information: Four Examples - Less Wrong

74 Post author: Vaniver 22 November 2011 11:02PM


Comment author: Vaniver 21 November 2011 09:11:20PM 38 points

Background: lukeprog wrote this post about articles he wouldn't have the time to write, and the first one on the list was something I was confident about, and so I decided to write a post on it. (As a grad student in operations research, practical decision theory is what I spend most of my time thinking about.)

Amusingly enough, I had the most trouble working in his 'classic example.' Decision analysis tends to hinge on Bayesian assumptions often referred to as "small world": that is, your model is complete and unbiased (if you knew there was a bias in your model, you'd incorporate it into the model, and then it would be unbiased!). Choosing a career is more of a search problem, though: specifying what options you have is probably harder than picking among them. You can still use the VoI concept, but mostly for deciding when to stop accumulating new information. Before you've done your first research, you can't predict the results of that research very well, so it's hard to put a number on how valuable looking into potential careers is.
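That stopping rule can be sketched numerically: keep researching only while the expected value of (perfect) information exceeds the cost of getting it. All the numbers below are made up purely for illustration.

```python
# Value-of-information sketch: is one more round of research worth its
# cost? Payoffs, priors, and cost are invented for illustration.

def expected_value_of_perfect_info(prior, payoffs):
    """prior: P(state) for each state; payoffs[action][state]."""
    # Value of deciding now: pick the action with the best expected payoff.
    value_now = max(
        sum(p * payoffs[a][s] for s, p in enumerate(prior))
        for a in payoffs
    )
    # Value with perfect information: in each state, pick the best action.
    value_informed = sum(
        p * max(payoffs[a][s] for a in payoffs)
        for s, p in enumerate(prior)
    )
    return value_informed - value_now

# Two careers, two states of the world ("I'd thrive" vs. "I'd burn out").
prior = [0.6, 0.4]
payoffs = {"career_A": [100, 20], "career_B": [60, 60]}

voi = expected_value_of_perfect_info(prior, payoffs)
research_cost = 5
print(voi > research_cost)  # keep researching while this holds
```

Perfect information is an upper bound: any actual study reveals less than the true state, so if even the VoI of perfect information falls below the cost of research, it's time to stop.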

There seems to be a lot of interest in abstract decision theory, but is there interest in more practical decision analysis? That's the sort of thing I suspect I could write a useful primer on, whereas I find it hard to care about, say, Sleeping Beauty.

Comment author: steven0461 21 November 2011 09:42:59PM 33 points

There seems to be a lot of interest in abstract decision theory, but is there interest in more practical decision analysis? That's the sort of thing I suspect I could write a useful primer on

Please do! This is exactly the sort of topic that should be LessWrong's specialty.

Comment author: lukeprog 22 November 2011 06:53:14AM 3 points

Agree.

Comment author: Kaj_Sotala 22 November 2011 08:36:52AM 18 points

There seems to be a lot of interest in abstract decision theory, but is there interest in more practical decision analysis? That's the sort of thing I suspect I could write a useful primer on

My reaction while reading this post was "whoa, this seems really valuable, and the sort of thing that should have been discussed on LW years ago". So yes, please write more.

Comment author: thejash 24 November 2011 05:21:39AM 7 points

Please write an article about "practical decision analysis". I tried to learn about this briefly before but didn't come away with anything useful. I must be missing the right keywords and phrases used in the field, so I would definitely appreciate an overview, or anything that helps improve everyday decision making.

Comment author: Oscar_Cunningham 21 November 2011 10:36:03PM 5 points

Bayesian assumptions often referred to as "small world"- that is, your model is complete and unbiased.

Side question: Why are these called "small world" assumptions? I've heard the term before but didn't understand it there either.

Comment author: Vaniver 22 November 2011 01:51:09PM 12 points

I was introduced to the term by Binmore's Rational Decisions. Amusingly, he asks what small worlds are on page 2 but doesn't get around to answering the question until page 117.

Essentially, a "small world" is one in which you can "look before you leap." When playing chess by the rules, you could in principle determine every position legally reachable from the current one. With a sufficiently good model of your opponent, and knowing your own decision strategy, you could even assign a probability to every terminal board position in that tree. (This world may not seem very small, since there are combinatorially many states!)
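For a world that really is small, the "look before you leap" enumeration fits in a few lines. Tic-tac-toe stands in for chess below: the same breadth-first idea applies, but this tree actually fits in memory.

```python
# Enumerate every position reachable from the empty tic-tac-toe board,
# stopping each line of play when someone has won. Chess works the same
# way in principle; its tree is just combinatorially larger.

def winner(board):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def successors(board):
    if winner(board):
        return []  # game over: no further legal moves
    player = "X" if board.count("X") == board.count("O") else "O"
    return [board[:i] + player + board[i + 1:]
            for i in range(9) if board[i] == " "]

# Breadth-first enumeration of the whole game tree.
seen = {" " * 9}
frontier = [" " * 9]
while frontier:
    frontier = [s for b in frontier for s in successors(b) if s not in seen]
    seen.update(frontier)

print(len(seen))  # 5478 distinct legal positions
```

Once every state is in hand, assigning probabilities and expected values over terminal positions is just bookkeeping; the small-world assumption is exactly that such an enumeration is possible in principle.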

A large world is one in which you cannot cross some bridges until you come to them. The example Binmore gives is that, at one time, people thought the world was flat; now they think it's round. That's a process that could be described by Bayesian updating, but it's not clear that's the best way to do things. When I think the world is flat, does it make much sense to enumerate every possible way for the world to be non-flat and parcel out a bit of belief to each? I would argue against such an approach. Wait until you discover that the Earth is roughly spherical, then work from there. That is, parcel out some probability to "the world is not flat" and then, when you get evidence for that, expand on it. In a "small world," everything is expanded from the beginning.
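As a toy sketch of that large-world move (all the likelihoods are invented for illustration): keep a catch-all hypothesis, update it like any other, and only expand it into concrete alternatives once the evidence favors it.

```python
# Large-world updating with a catch-all hypothesis. All numbers are
# made up; the point is the structure, not the values.

beliefs = {"flat": 0.9, "not flat (unspecified)": 0.1}

# Evidence arrives (say, ships vanishing hull-first over the horizon).
# Likelihoods P(evidence | hypothesis), invented for the sketch:
likelihood = {"flat": 0.05, "not flat (unspecified)": 0.8}
posterior = {h: beliefs[h] * likelihood[h] for h in beliefs}
total = sum(posterior.values())
beliefs = {h: p / total for h, p in posterior.items()}

# Only now, with the catch-all dominant, expand it into concrete shapes.
mass = beliefs.pop("not flat (unspecified)")
beliefs["sphere"] = mass * 0.7  # illustrative split of the catch-all
beliefs["other"] = mass * 0.3

print({h: round(p, 3) for h, p in beliefs.items()})
```

The small-world approach would instead require listing "sphere," "torus," and every other shape from the start, each with its own prior; the catch-all defers that work until the evidence makes it worthwhile.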

This happens in many numerical optimization problems. Someone in my department (who defended their PhD yesterday, actually) has been working on a decision model for Brazilian hydroelectric plants. They have to decide each month how much of the water stored behind their dams to use, in the face of stochastic inflows. The model looks ahead four years to help determine how much water to use this month, but it only tells you how much water to use this month. There's no point in computing a lookup table for next month: next month, you can take the actual measurements for the most recent month (which you had essentially zero probability of predicting exactly) and solve the model again, looking ahead four years from the most recent data.
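A toy version of that rolling-horizon loop might look like the following, with a trivial stand-in "solver" where the real model would run a stochastic program over inflow scenarios:

```python
import random

# Rolling-horizon sketch: each month, re-solve a lookahead problem from
# the latest measured state and implement only the first decision.
# The "solver" below is a deliberate stand-in, not the real model.

HORIZON = 48  # look ahead 48 months (four years)

def plan(storage, horizon):
    """Stand-in solver: release a fixed fraction of current storage.
    A real model would optimize releases over sampled inflow scenarios
    across the whole horizon, then discard all but the first month."""
    return 0.1 * storage

random.seed(0)
storage = 100.0
for month in range(12):
    release = plan(storage, HORIZON)      # solve with the latest data
    inflow = random.uniform(0.0, 15.0)    # actual, unpredictable inflow
    storage = storage - release + inflow  # measure, then re-solve next month
print(round(storage, 1))
```

The key design point is that the lookahead exists only to make this month's decision robust; the plan for later months is recomputed from scratch once real measurements replace forecasts.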

Comment author: Kaj_Sotala 22 November 2011 08:45:01AM 3 points

I presume it's because actually having a complete model requires a problem small enough that you can know all the relevant factors. This is in contrast to, e.g., problems in the social sciences, where the number of things that might possibly affect the result (the size of the world) is large enough that you can never have a complete model.

As another example, many classic AI systems like SHRDLU fared well in small, limited domains where you could hand-craft rules for everything, but proved pretty much useless in larger, more complex domains, where they ran into a combinatorial explosion of needed rules and variables.

Comment author: thomblake 21 November 2011 10:43:26PM 0 points

I had assumed the term related to small-world networks (math), though it doesn't seem to have quite the same application.

Comment author: Michael_Sullivan 24 November 2011 02:29:44AM 5 points

I, too, find it hard to care about Sleeping Beauty, which is perhaps why this post marks the first time in years of reading LW that I've actually dusted off my math spectacles and tried to rigorously understand what some of this decision-theory notation actually means.

So count me in for a rousing endorsement of interest in more practical decision theory.

Comment author: WrongBot 22 November 2011 08:33:17AM 1 point

Seconding Steven. You write well and this is an interesting and useful topic that has not been sufficiently explored.

Comment author: Vaniver 29 November 2011 11:14:58PM 1 point

The start of my decision analysis sequence is here.