
Houshalter comments on Bayes' Theorem Illustrated (My Way) - Less Wrong

Post author: komponisto 03 June 2010 04:40AM


Comment author: Houshalter 03 June 2010 08:59:08PM *  9 points

I don't get it really. I mean, I get the method, but not the formula. Is this useful for anything though?

Also, a simpler way of explaining the Monty Hall problem is to think of it with more doors. Let's say there were a million (that's a lot of goats). You pick one, and the host eliminates every door except yours and one other. The probability that you picked the right door is one in a million, but the host had to make sure that the door he left unopened was the one with the car behind it, unless you picked the car door yourself, which is a one-in-a-million chance.
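
If the verbal argument is hard to trust, a quick simulation shows the same thing. This is just a rough sketch (the door and trial counts are arbitrary):

```python
import random

def many_door_monty(num_doors=1_000_000, trials=100_000):
    """Host opens every door except the player's pick and one other,
    never revealing the car. Staying wins only when the first pick
    was right; switching wins in every other case."""
    stay = switch = 0
    for _ in range(trials):
        car = random.randrange(num_doors)
        pick = random.randrange(num_doors)
        if pick == car:
            stay += 1       # the one-in-a-million initial hit
        else:
            switch += 1     # the lone unopened door must hide the car
    return stay / trials, switch / trials

print(many_door_monty())  # stay wins almost never; switch almost always
```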

Comment author: JoshuaZ 03 June 2010 10:17:19PM *  4 points

It might help to read the sequences, or just read Jaynes. In particular, one of the central ideas of the LW approach to rationality is that when one encounters new evidence, one should update one's belief structure based on that evidence and one's prior estimates, using Bayes' theorem. Roughly speaking, this is in contrast to what is sometimes described as "traditional rationalism", which emphasizes updating not on each piece of evidence but only after one has accumulated a lot of clearly relevant evidence.
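
As a toy illustration of what updating on each piece of evidence looks like mechanically (the prior and likelihoods below are invented for the example):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """One application of Bayes' theorem: returns P(H|E)."""
    joint = p_e_given_h * prior
    return joint / (joint + p_e_given_not_h * (1 - prior))

# Start undecided, then fold in three observations, each twice as
# likely if the hypothesis is true as if it is false.
belief = 0.5
for _ in range(3):
    belief = update(belief, 0.8, 0.4)
print(belief)  # about 0.889: each observation shifts the belief further
```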

Edit: Recommendation of Map-Territory sequence seems incorrect. Which sequence is the one to recommend here?

Comment author: Vive-ut-Vivas 03 June 2010 10:34:16PM *  2 points
Comment author: Houshalter 03 June 2010 11:38:18PM *  0 points

Updating your belief based on different pieces of evidence is useful, but (and it's a big but) just believing strange things based on incomplete evidence is bad. Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to just make quick assumptions. Then, instead of testing whether these assumptions are correct, you just go with them wherever they take you. If only you could "learn how to learn" and use the Bayesian method on different methods of learning; e.g., test out different heuristics and see which ones give the best results. In the end, you find humans already do this to some extent, and "traditional rationalism" and science are based on the end result of this method. Is this making any sense? Sure, it's useful in some abstract sense and on various math problems, but you can't program a computer this way, nor can you live your life trying to compute statistics like this in your head.

Other than that, I can see different places where this would be useful.

Comment author: thomblake 04 June 2010 04:08:42PM 7 points

nor can you live your life trying to compute statistics like this in your head

And so it is written, "Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims."

Comment author: JoshuaZ 04 June 2010 01:55:47AM *  2 points

I may not be the best person to reply to this given that I a) am much closer to being a traditional rationalist than a Bayesian and b) believe that the distinction between Bayesian rationalism and traditional rationalism is often exaggerated. I'll try to do my best.

Updating your belief based on different pieces of evidence is useful, but (and it's a big but) just believing strange things based on incomplete evidence is bad.

So how do you tell if a belief is strange? Presumably, if the evidence points in one direction, one shouldn't regard that belief as strange. Can you give an example of a belief that should be considered a bad one to hold due to strangeness, but that one could plausibly get a Bayesian to accept in this way?

Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to just make quick assumptions.

Well, yes and no. The Bayesian starts with some set of prior probability estimates: general heuristics about how the world seems to operate (reductionism and locality would probably be high up on the list). Everyone has to deal with limits on time and other resources. That's why, for example, if someone claims that hopping on one foot cures colon cancer, we don't generally bother testing it. That's true for both the Bayesian and the traditionalist.

Sure, it's useful in some abstract sense and on various math problems, but you can't program a computer this way, nor can you live your life trying to compute statistics like this in your head

I'm curious as to why you claim that you can't program a computer this way. For example, automatic Bayesian curve fitting has been around for almost 20 years and is a useful machine learning technique. Sure, it is much narrower than applying Bayesianism to understanding reality as a whole, but until we crack the general AI problem, it isn't clear to me how you can be sure that that's a fault of the Bayesian end and not the AI end. If we can understand how to make general intelligences, I see no immediate reason why one couldn't make them good Bayesians.
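
As one small, concrete illustration (a toy sketch, not the curve-fitting literature mentioned above), here is a program that updates on evidence: estimating a coin's bias with a Beta prior, which is conjugate to coin-flip data, so each observation updates the belief by simple counting:

```python
# A Beta(a, b) prior over a coin's bias is conjugate to coin-flip
# data, so each flip updates the machine's belief by incrementing
# a count -- Bayesian updating reduced to addition.
a, b = 1, 1                        # uniform prior over the bias
flips = [1, 1, 0, 1, 1, 0, 1, 1]   # invented data: 1 = heads
for f in flips:
    a, b = a + f, b + (1 - f)
print("posterior mean bias:", a / (a + b))  # (1 + 6) / (2 + 8) = 0.7
```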

I agree that, in general, trying to compute statistics in one's head is difficult. But I don't see why that rules out doing it for the important things. No one is claiming to be a perfect Bayesian. I don't think, for example, that any Bayesian walking into a building tries to estimate the probability that the building will immediately collapse. Maybe they do if the building looks very rickety, but otherwise they treat that probability as too tiny to be worth examining. But Bayesian updating is a useful way of thinking about many classes of scientific issues, as well as general life issues (estimates of how long it will take to get somewhere, or of how many people will attend a party given the number invited and the number who RSVPed, can both be thought of in somewhat Bayesian terms). Moreover, forcing oneself to do a Bayesian calculation can bring into the light many estimates and premises that were otherwise hiding behind vagueness or implicit structure.
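
For instance, the party estimate might look like this in miniature. The show-up rates are invented for illustration, and it is an expected value over conditional probabilities rather than a full Bayesian update, but it is the kind of quick calculation meant here:

```python
# Invented show-up rates conditioned on each invitee's response.
p_attend = {"yes": 0.85, "no": 0.05, "no_reply": 0.35}
invited = {"yes": 20, "no": 5, "no_reply": 15}
expected = sum(p_attend[r] * n for r, n in invited.items())
print(expected)  # 20*0.85 + 5*0.05 + 15*0.35 = 22.5 expected guests
```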

Comment author: Sniffnoy 04 June 2010 09:24:37AM 1 point

(reductionism and non-locality would probably be high up on the list).

Guessing here you mean locality instead of nonlocality?

Comment author: JoshuaZ 04 June 2010 12:45:21PM 0 points

Yes, fixed. Thank you.

Comment author: Vive-ut-Vivas 03 June 2010 09:45:07PM *  3 points

Is this useful for anything though?

Only for the stated purpose of this website - to be "less wrong"! :) Quoting from Science Isn't Strict Enough:

But the Way of Bayes is also much harder to use than Science. It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.

In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.

But if you try to use Bayes even qualitatively - if you try to do the thing that Science doesn't trust you to do, and reason rationally in the absence of overwhelming evidence - it is like math, in that a single error in a hundred steps can carry you anywhere. It demands lightness, evenness, precision, perfectionism.

There's a good reason why Science doesn't trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they've worked out the right answer based on hints and logic.

But if you would rather not waste ten years trying to prove the wrong theory, you'll need to essay the vastly more difficult problem: listening to evidence that doesn't shout in your ear.

As for the rest of your comment: I completely agree! That was actually the explanation that the OP, komponisto, gave me to get Bayesianism (edit: I actually mean "the idea that probability theory can be used to override your intuitions and get to correct answers") to "click" for me (insofar as it has "clicked"). But the way it's presented in the post is really helpful, I think, because it eliminates even the need to imagine that there are more doors; it addresses the specifics of the actual problem, and you can't argue with the numbers!

Comment author: cupholder 03 June 2010 09:41:15PM 3 points

I don't get it really. I mean, I get the method, but not the formula. Is this useful for anything though?

Quite a bit! (A quick Google Scholar search turns up about 1500 papers on methods and applications, and there are surely more.)

The formula tells you how to change your strength of belief in a hypothesis in response to evidence (this is 'Bayesian updating', sometimes shortened to just 'updating'). Because the formula is a trivial consequence of the definition of conditional probability, it holds in any situation where you can quantify the evidence and the strength of your beliefs as probabilities. This is why many of the people on this website treat it as the foundation of reasoning from evidence; the formula is very general.
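
Written out, the formula is P(H|E) = P(E|H) P(H) / P(E). A tiny worked example (with invented numbers for a screening-test scenario) shows what it does:

```python
# Invented numbers: a condition with a 1% base rate, and a test that
# is 90% sensitive with a 10% false-positive rate.
prior = 0.01
p_pos_given_h = 0.90
p_pos_given_not_h = 0.10

p_pos = p_pos_given_h * prior + p_pos_given_not_h * (1 - prior)
posterior = p_pos_given_h * prior / p_pos
print(posterior)  # about 0.083: the low base rate dominates the result
```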

Eliezer Yudkowsky's Intuitive Explanation of Bayes' Theorem page goes into this in more detail and at a slower pace. It also has a few nice Java applets that let you play with the ideas through specific examples.

Comment author: RobinZ 03 June 2010 09:08:55PM 3 points

I don't get it really. I mean, I get the method, but not the formula. Is this useful for anything though?

There's a significant population of people - disproportionately represented here - who consider Bayesian reasoning to be theoretically superior to the ad hoc methods habitually used. An introductory essay on the subject that many people here read and agreed with is A Technical Explanation of Technical Explanation.

Comment author: RomanDavis 03 June 2010 09:05:20PM *  3 points

Also, a simpler way of explaining the Monty Hall problem is to think of it with more doors. Let's say there were a million (that's a lot of goats). You pick one, and the host eliminates every door except yours and one other. The probability that you picked the right door is one in a million, but the host had to make sure that the door he left unopened was the one with the car behind it, unless you picked the car door yourself, which is a one-in-a-million chance.

That's awesome. I shall use it in the future. Wish I could multi upvote.

Comment author: mhomyack 04 June 2010 04:29:37PM *  6 points

The way I like to think of the Monty Hall problem is this: if you had the choice of picking either one of the three doors or two of the three doors (if the car is behind either, you win it), you would obviously pick two of the doors to give yourself a 2/3 chance of winning. Similarly, if you had picked your original door and then Monty asked if you'd trade your one door for the other two doors (all sight unseen), it would again be obvious that you should make the trade.

Now, when you make that trade, you know that at least one of the doors you're getting has a goat behind it (there's only one car and you're getting two doors, so at least one must hide a goat). So, given that knowledge and the certainty that trading one door for two is the right move (statistically), would seeing the goat behind one of the doors you're trading for before you make the trade change the wisdom of the trade? You KNOW that you're getting at least one goat in either case. Most people I've explained it to in this way seem to see that making the trade still makes sense (and is equivalent to switching in the original scenario).
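
The equivalence is also easy to check numerically; here is a rough simulation sketch (the trial count is arbitrary):

```python
import random

def one_door_vs_two(trials=100_000):
    """Keeping one door wins only when the first pick hit the car;
    taking the other two doors as a bundle wins in every other case.
    Seeing a goat behind one bundle door first changes nothing."""
    keep = bundle = 0
    for _ in range(trials):
        if random.randrange(3) == random.randrange(3):  # pick == car
            keep += 1
        else:
            bundle += 1
    return keep / trials, bundle / trials

print(one_door_vs_two())  # roughly (0.333, 0.667)
```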

I think the struggle is that people tend to dismiss the existence of the third door once they see what's behind it. It drops out of the picture as a resolved thing, and the mind erroneously reformulates the situation with just the two remaining doors. The scary thing is that people are generally quite easily manipulated by these sorts of puzzles, and there are plenty of circumstances (DNA evidence presented to juries comes to mind) where the probabilities being presented are wildly misleading as a result of erroneously eliminating segments of the problem space because they are "known".

Comment author: Vive-ut-Vivas 03 June 2010 10:11:54PM *  0 points

One more application of Bayes I should have mentioned: Aumann's Agreement Theorem.