Eugine_Nier comments on David Deutsch on How To Think About The Future - Less Wrong

Post author: curi 11 April 2011 07:08AM

You are viewing a single comment's thread.

Comment author: Eugine_Nier 10 April 2011 04:31:30PM 2 points [-]

Suppose I give you some odds p:q and force you to bet on some proposition X (say, Democrats win in 2012) being true, but I let you pick which side of the bet you take; a payoff of p if X is true, or a payoff of q if X is false. For some (unique) value of p/q, you'll switch which side you want to take.
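A minimal sketch of this betting argument (the 0.7 subjective probability and the payoff values are assumptions for illustration, not anything from the thread):

```python
def expected_payoffs(prob_x, p, q):
    """Expected payoff of each side of a forced bet on X, given your
    subjective probability prob_x that X is true: you win p if you bet
    on X and X is true, or q if you bet against X and X is false."""
    return prob_x * p, (1 - prob_x) * q

# Suppose (purely for illustration) your subjective probability is 0.7.
# You switch sides exactly where the expected payoffs are equal:
# 0.7 * p = 0.3 * q, i.e. at p/q = 0.3 / 0.7 -- the switching point
# reveals your probability.
EPS = 1e-9
prob = 0.7
for ratio in (0.2, 0.3 / 0.7, 0.6):
    on_x, against_x = expected_payoffs(prob, ratio, 1.0)
    if abs(on_x - against_x) < EPS:
        side = "indifferent"
    elif on_x > against_x:
        side = "bet on X"
    else:
        side = "bet on not-X"
    print(f"p/q = {ratio:.3f}: {side}")
```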

It seems this can force you to assign probabilities to arbitrary hypotheses.

So, how precise should these probabilities be? And why can't I apply this argument to force the probabilities to have arbitrarily high precision?

Comment author: Larks 10 April 2011 06:57:27PM 1 point [-]

Not that I can think of, besides memory/speed constraints, and how much updating you can have done with the evidence you've received.

Comment author: Eugine_Nier 10 April 2011 07:31:53PM 2 points [-]

and how much updating you can have done with the evidence you've received.

Why can't it happen that you have so little and/or such weak evidence, that the amount of precision you should have is none at all?

Comment author: Manfred 10 April 2011 08:01:44PM *  0 points [-]

Imagine that you had to assign a probability density to each probability estimate you could make of Obama winning in 2012. You'd end up with something looking like a bell curve over probabilities, centered somewhere around "Obama has a 70% (or something) chance of winning." Then to make a decision based on that distribution using normal decision theory, you would average over the possible results of an action, weighted by their probabilities. But this is equivalent to taking the mean of your bell curve - no matter how wide or narrow the bell curve, all that matters to your (standard decision theory) decision is the location of the mean.

Less evidence is like a wider bell curve, more evidence like a sharper one. But as long as the mean stays the same, the average result of each decision stays the same, so your decision will also be the same.

So there are two kinds of precision here: the precision of the mean probability given your current (incomplete) information, which can be arbitrarily high, and the precision with which you estimate the true answer, which is the width of the bell curve. So when you say "precision," there is a possible confusion. Your first post asked how precise these probabilities can be, which is the first (and boring, since it's so high) kind of precision, while this post seems to be talking about the second kind, the kind that is more useful because it reflects how much evidence you have.

Comment author: Eugine_Nier 10 April 2011 08:48:20PM 2 points [-]

So there are two kinds of precision here: the precision of the mean probability given your current (incomplete) information, which can be arbitrarily high, and the precision with which you estimate the true answer, which is the width of the bell curve.

I'm not sure what you mean by the "true answer". After all, in some sense the true probability is either 0 or 1; it's just that we don't know which.

Comment author: Manfred 10 April 2011 09:09:11PM 1 point [-]

That's a good point. So I guess the second kind of precision doesn't make sense in this case (like it would if the bell curve were over, say, the number of beans in a jar), and "precision" should only refer to "precision with which we can extract an average probability from our information," which is very high.

Comment author: [deleted] 11 April 2011 02:46:14PM 0 points [-]

Imagine that you had to give a probability density to each probability estimate you could make of Obama winning in 2012 being the correct one. You'd end up with something looking like a bell curve over probabilities

Bell curves prefer to live on unbounded intervals! It would be less jarring (and less convenient for you?) if he ended up with something looking like a uniform distribution over probabilities.

Comment author: Manfred 11 April 2011 06:07:57PM 0 points [-]

It's equally convenient, since the mean doesn't care about the shape. I don't think it's particularly jarring - just imagine it going to 0 at the edges.

The reason you'll probably end up with something like a bell curve is a practical one - the central limit theorem. For complicated problems, you very often get what looks something like a bell curve. Hardly watertight, but I'd bet decent amounts of money that it is true in this case, so why not use it to add a little color to the description?
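The central-limit intuition can be illustrated by summing many small independent influences; the uniform influences and sample counts here are arbitrary choices, not anything from the thread:

```python
import random

random.seed(0)

def one_sum(n_influences=30):
    """Sum of many small independent influences; by the central limit
    theorem, such sums cluster into a roughly bell-shaped histogram."""
    return sum(random.uniform(-1, 1) for _ in range(n_influences))

samples = [one_sum() for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Uniform(-1, 1) has variance 1/3, so 30 of them give variance near 10,
# and the sample mean sits near 0.
print(round(mean, 2), round(var, 1))
```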

Comment author: Larks 10 April 2011 08:03:45PM 0 points [-]

Well, your prior gives you a unique value, and Bayes' theorem is a function, so it gives you a unique value for every input.
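As a sketch of that point, Bayes' theorem really is just a function of a prior and a likelihood model (the numbers below are arbitrary):

```python
def bayes_update(prior, likelihood_if_h, likelihood_if_not_h):
    """Bayes' theorem as a function: posterior P(H|E) from a prior P(H)
    and the likelihoods P(E|H) and P(E|not H)."""
    numerator = prior * likelihood_if_h
    return numerator / (numerator + (1 - prior) * likelihood_if_not_h)

# A unique prior plus a unique likelihood model yields a unique posterior:
# 0.5 * 0.9 / (0.5 * 0.9 + 0.5 * 0.2) = 9/11.
print(bayes_update(0.5, 0.9, 0.2))  # ≈ 0.818
```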

Comment author: Eugine_Nier 10 April 2011 08:50:33PM 2 points [-]

Well, your prior gives you a unique value,

So the claim is that you have arbitrary precision priors. What are they, and where are they stored?

Comment author: Larks 10 April 2011 09:38:21PM 0 points [-]

Sorry, I haven't been very clear. A perfect Bayesian agent would have a unique real number to represent its level of belief in every hypothesis.

The betting-offer system described above can force people (and any hypothetical agent) to assign unique values.

Of course, an actual person won't be capable of this level of precision or coherence.

Comment author: Eugine_Nier 10 April 2011 08:17:05PM 1 point [-]

Yes, but actually computing that function is computationally intractable in all but the simplest examples.