David_Chapman comments on Probability, knowledge, and meta-probability - Less Wrong

Post author: David_Chapman 17 September 2013 12:02AM




Comment author: jeremysalwen 14 September 2013 09:46:14PM 3 points

The subtlety is about what numerical data can formally represent your full state of knowledge. The claim is that a mere probability of getting the $2 payout does not.

However, a single probability for each outcome given each strategy is all the information needed. The problem is not with using single probabilities to represent knowledge about the world; it's with the straw math that was used to represent the technique. To me, this reasoning is equivalent to the following:

"You work at a store where management is highly disorganized. Although they precisely track the number of days you have worked since the last payday, they never remember when they last paid you, and thus every day of the work week has a 1/5 chance of being a payday. For simplicity's sake, let's assume you earn $100 a day.

You wake up on Monday and do the following calculation: If you go in to work, you have a 1/5 chance of being paid. Thus the expected payoff of working today is $20, which is too low for it to be worth it. So you skip work. On Tuesday, you make the same calculation and again decide it's not worth it, and so you continue skipping work forever.

I visit you and immediately point out that you're being irrational. After all, a salary of $100 a day clearly is worth it to you, yet you are not working. I look at your calculations, and immediately find the problem: You're using a single probability to represent your expected payoff from working! I tell you that using a meta-probability distribution fixes this problem, and so you excitedly scrap your previous calculations and set about using a meta-probability distribution instead. We decide that a Gaussian sharply peaked at 0.2 best represents our meta-probability distribution, and I send you on your way."

Of course, in this case, the meta-probability distribution doesn't change anything. You still continue skipping work, because I have devised the hypothetical situation to illustrate my point (evil laugh). The point is that in this problem the meta-probability distribution solves nothing, because the problem is not with a lack of meta-probability, but rather a lack of considering future consequences.
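A minimal sketch of the payday hypothetical in Python, using the 1/5 payday probability and $100 daily wage from the example above. The sharply peaked Gaussian standing in for the meta-probability distribution is my own arbitrary choice of width; the point it illustrates is that the myopic calculation depends only on the distribution's mean, so the answer doesn't move, while actually accounting for accrued wages shows each day worked is worth nearly $100 in the long run.

```python
import random

random.seed(0)

DAILY_WAGE = 100   # you earn $100 per day worked
P_PAYDAY = 0.2     # each day is independently a payday with probability 1/5

# The "myopic" calculation: expected payout from today's paycheck alone.
myopic_ev = P_PAYDAY * DAILY_WAGE  # $20

# Swapping in a meta-probability distribution sharply peaked at 0.2
# changes nothing: the myopic expectation depends only on the mean of
# that distribution, which is still 0.2.
samples = [random.gauss(0.2, 0.01) for _ in range(100_000)]
meta_ev = sum(p * DAILY_WAGE for p in samples) / len(samples)

# The correct calculation considers future consequences: every day worked
# adds $100 to whatever payday eventually arrives, so in the long run each
# day worked pays out (almost exactly) $100.
n_days = 100_000
unpaid_days = 0
total_earned = 0
for _ in range(n_days):
    unpaid_days += 1                  # work today; wages accrue
    if random.random() < P_PAYDAY:    # payday: collect all accrued wages
        total_earned += DAILY_WAGE * unpaid_days
        unpaid_days = 0

long_run_per_day = total_earned / n_days
print(f"myopic EV per day:   ${myopic_ev:.2f}")
print(f"meta-probability EV: ${meta_ev:.2f}")
print(f"long-run per day:    ${long_run_per_day:.2f}")
```

The simulation makes the disagreement concrete: the meta-probability version reproduces the same $20 myopic answer, while the ~$100 long-run figure comes from fixing the calculation itself, not from changing how the probability is represented.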

In both the OP's example and mine, the problem is that the math was done incorrectly, not that you need meta-probabilities. As you said, meta-probabilities are a method of screening off additional labels on your probability distributions, for a particular class of problems where you are taking repeated samples that are entangled in a very particular way. As I said above, I appreciate the exposition of meta-probabilities as a tool, and your comment has helped me better understand their instrumental nature, but I take issue with the sort of tool they are presented as.

If you do the calculations directly with the probabilities, your calculation will succeed if you do the math right and fail if you do it wrong. Meta-probabilities are a particular way of representing a certain calculation, one that succeeds or fails in its own right. If you use them to represent the correct direct probabilities, you will get the right answer, but they are only an aid to the calculation; they never fix any problem with direct probability calculations. Fixing the calculation and the use of meta-probabilities are orthogonal issues.

To make a blunt analogy, this is like someone trying to plug an Ethernet cable into a phone jack and then saying "when Ethernet fails, wifi works," while conveniently plugging in the wifi adapter correctly.

The key of the dispute in my eyes is not whether wifi can work for certain situations, but whether there's anything actually wrong with Ethernet in the first place.

Comment author: David_Chapman 14 September 2013 10:02:19PM 1 point

Jeremy, I think the apparent disagreement here is due to unclarity about what the point of my argument was. The point was not that this situation can't be analyzed with decision theory; it certainly can, and I did so. The point is that different decisions have to be made in two situations where the probabilities are the same.

Your discussion seems to equate "probability" with "utility", and the whole point of the example is that, in this case, they are not the same.

Comment author: jeremysalwen 14 September 2013 10:15:21PM 4 points

I guess my position is thus:

While there are sets of probabilities which by themselves are not adequate to capture the information about a decision, there always is a set of probabilities which is adequate to capture the information about a decision.

In that sense I do not see your article as an argument against using probabilities to represent decision information, but rather a reminder to use the correct set of probabilities.

Comment author: Vaniver 14 September 2013 11:38:13PM 1 point

In that sense I do not see your article as an argument against using probabilities to represent decision information, but rather a reminder to use the correct set of probabilities.

My understanding of Chapman's broader point (which may differ wildly from his understanding) is that determining which set of probabilities is correct for a situation can be rather hard, and so it deserves careful and serious study from people who want to think about the world in terms of probabilities.