Bayes' Theorem Illustrated (My Way) - Less Wrong
Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/
Thu, 03 Jun 2010 14:40:21 +1000
Submitted by <a href="http://lesswrong.com/user/komponisto">komponisto</a>
•
126 votes
•
<a href="http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/#comments">191 comments</a>
<div><p><em>(This post is elementary: it introduces a simple method of visualizing Bayesian calculations. In my defense, we've had <a href="/lw/1to/what_is_bayesianism/">other</a> elementary posts before, and they've been found useful; plus, I'd really like this to be online somewhere, and it might as well be here.)</em></p>
<p>I'll admit, those <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty-Hall</a>-<a href="/lw/1lh/drawing_two_aces/">type</a> problems invariably trip me up. Or at least, they do if I'm not thinking <em>very</em> carefully -- doing quite a bit more work than other people seem to have to do.</p>
<p>What's more, people's explanations of how to get the right answer have almost never been satisfactory to me. If I concentrate hard enough, I can usually follow the reasoning, sort of; but I never quite "see it", nor do I feel equipped to solve similar problems in the future: the solutions seem to work only in retrospect.</p>
<p><a href="/lw/dr/generalizing_from_one_example/">Minds work differently</a>, <a href="/lw/ke/illusion_of_transparency_why_no_one_understands/">illusion of transparency</a>, and all that.</p>
<p>Fortunately, I eventually managed to identify the source of the problem, and I came up with a way of thinking about -- <em>visualizing</em> -- such problems that suits my own intuition. Maybe there are others out there like me; this post is for them.</p>
<p><a id="more"></a></p>
<p>I've <a href="http://wiki.lesswrong.com/wiki/Chat_Logs/2010-02-18">mentioned before</a> that I like to think in very abstract terms. What this means in practice is that, if there's some simple, general, elegant point to be made, <em>tell it to me right away</em>. Don't start with some messy concrete example and attempt to "work upward", in the hope that difficult-to-grasp abstract concepts will be made more palatable by relating them to "real life". If you do that, I'm liable to get stuck in the trees and not see the forest. Chances are, I won't have much trouble understanding the abstract concepts; "real life", on the other hand...</p>
<p>...well, let's just say I prefer to start at the top and work downward, as a general rule. Tell me how the trees relate to the forest, rather than the other way around.</p>
<p>Many people have found Eliezer's <a href="http://yudkowsky.net/rational/bayes">Intuitive Explanation of Bayesian Reasoning</a> to be an excellent introduction to <a href="http://wiki.lesswrong.com/wiki/Bayes%27_theorem">Bayes' theorem</a>, and so I don't usually hesitate to recommend it to others. But for me personally, if I didn't know Bayes' theorem and you were trying to explain it to me, pretty much the worst thing you could do would be to start with some detailed scenario involving breast-cancer screenings. (And not just because it tarnishes beautiful mathematics with images of sickness and death, either!)</p>
<p>So what's the right way to explain Bayes' theorem to me?</p>
<p>Like this:</p>
<p>We've got a bunch of hypotheses (states the world could be in) and we're trying to figure out which of them is true (that is, which state the world is actually in). As a concession to concreteness (and for ease of drawing the pictures), let's say we've got three (mutually exclusive and exhaustive) hypotheses -- possible world-states -- which we'll call H<sub>1</sub>, H<sub>2</sub>, and H<sub>3</sub>. We'll represent these as blobs in space:</p>
<p><img src="http://imgur.com/NpNUV.png" alt="Figure 0" height="415" width="225"></p>
<p><strong>                   Figure 0</strong></p>
<p><br>Now, we have some prior notion of how probable each of these hypotheses is -- that is, each has some <em>prior probability</em>. If we don't know anything at all that would make one of them more probable than another, they would each have probability 1/3. To illustrate a more typical situation, however, let's assume we have more information than that. Specifically, let's suppose our prior probability distribution is as follows: P(H<sub>1</sub>) = 30%, P(H<sub>2</sub>)=50%, P(H<sub>3</sub>) = 20%. We'll represent this by resizing our blobs accordingly:</p>
<p><img src="http://i.imgur.com/8JAkA.png" alt="Figure 1" height="533" width="337"></p>
<p>                       <strong>Figure 1<br></strong></p>
<p>That's our <em>prior</em> knowledge. Next, we're going to collect some <em>evidence</em> and <em>update</em> our prior probability distribution to produce a <em>posterior</em> probability distribution. Specifically, we're going to run a test. The test we're going to run has three possible outcomes: Result A, Result B, and Result C. Now, since this test happens to have three possible results, it would be really nice if the test just flat-out told us which world we were living in -- that is, if (say) Result A meant that H<sub>1</sub> was true, Result B meant that H<sub>2</sub> was true, and Result C meant that H<sub>3</sub> was true. Unfortunately, the real world is messy and complex, and things aren't that simple. Instead, we'll suppose that each result can occur under each hypothesis, but that the different hypotheses have different effects on how likely each result is to occur. We'll assume, for instance, that if Hypothesis H<sub>1</sub> is true, we have a 1/2 chance of obtaining Result A, a 1/3 chance of obtaining Result B, and a 1/6 chance of obtaining Result C; which we'll write like this:</p>
<p>P(A|H<sub>1</sub>) = 50%, P(B|H<sub>1</sub>) = 33.33...%, P(C|H<sub>1</sub>) = 16.66...%</p>
<p>and illustrate like this:</p>
<p> </p>
<p><img src="http://imgur.com/9jpzJ.png" alt="" height="140" width="384"></p>
<p>        <strong>Figure 2</strong></p>
<p>(Result A being represented by a triangle, Result B by a square, and Result C by a pentagon.)</p>
<p>If Hypothesis H<sub>2</sub> is true, we'll assume there's a 10% chance of Result A, a 70% chance of Result B, and a 20% chance of Result C:</p>
<p><img src="http://imgur.com/puWW1.png" alt="Figure 3" height="180" width="433"></p>
<p>              <strong>Figure 3</strong></p>
<p><strong><br></strong>(P(A|H<sub>2</sub>) = 10% , P(B|H<sub>2</sub>) = 70%, P(C|H<sub>2</sub>) = 20%)<strong><br></strong></p>
<p>Finally, we'll say that if Hypothesis H<sub>3</sub> is true, there's a 5% chance of Result A, a 15% chance of Result B, and an 80% chance of Result C:<strong><br></strong></p>
<p><img src="http://imgur.com/DHitn.png" alt="Figure 4" height="140" width="384"></p>
<p>              <strong>Figure 4</strong></p>
<p>(P(A|H<sub>3</sub>) = 5%, P(B|H<sub>3</sub>) = 15%, P(C|H<sub>3</sub>) = 80%)</p>
<p>Figure 5 below thus shows our knowledge prior to running the test:</p>
<p> </p>
<p> </p>
<p><img src="http://imgur.com/qlyGw.png" alt=""></p>
<p>                <strong>Figure 5</strong></p>
<p> </p>
<p>Note that we have now carved up our hypothesis-space more finely; our possible world-states are now things like "Hypothesis H<sub>1</sub> is true and Result A occurred", "Hypothesis H<sub>1</sub> is true and Result B occurred", etc., as opposed to merely "Hypothesis H<sub>1</sub> is true", etc. The numbers above the slanted line segments -- the <em>likelihoods</em> of the test results, assuming the particular hypothesis -- represent <em>what proportion</em> of the total probability mass assigned to hypothesis H<sub>n</sub> is assigned to the conjunction of Hypothesis H<sub>n</sub> and Result X; thus, since P(H<sub>1</sub>) = 30% and P(A|H<sub>1</sub>) = 50%, P(H<sub>1</sub> & A) is 50% of 30% -- that is, 15%.</p>
<p>(That's really all Bayes' theorem is, right there, but -- shh! -- don't tell anyone yet!)</p>
<p><br>Now, then, suppose we run the test, and we get...Result A.</p>
<p>What do we do? We <em>cut off all the other branches</em>:</p>
<p><img src="http://imgur.com/XBXi5.png" alt=""></p>
<p>                <strong>Figure 6</strong></p>
<p> </p>
<p>So our updated probability distribution now looks like this:</p>
<p><img src="http://imgur.com/nXENh.png" alt=""></p>
<p><strong>          Figure 7</strong></p>
<p><strong><br></strong></p>
<p>...except for one thing: probabilities are supposed to add up to 100%, not 21%. Well, since we've <em>conditioned</em> on Result A, that means that the 21% probability mass assigned to Result A is now the entirety of our probability mass -- 21% is the new 100%, you might say. So we simply adjust the numbers in such a way that they add up to 100% <em>and the proportions are the same</em>:</p>
<p><img src="http://i.imgur.com/RIeff.png" alt="" height="541" width="253"></p>
<p><strong>                      Figure 8</strong></p>
<p>There! We've just performed a Bayesian update. And that's what it <em>looks like</em>.</p>
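For readers who prefer code to pictures, here is a minimal sketch (mine, not part of the original post) of the update just performed: multiply each prior by its likelihood of Result A, then renormalize so the surviving branches sum to 100%.

```python
# Prior probabilities of the three hypotheses (Figure 1).
priors = {"H1": 0.30, "H2": 0.50, "H3": 0.20}
# Likelihood of Result A under each hypothesis (Figures 2-4).
likelihood_A = {"H1": 0.50, "H2": 0.10, "H3": 0.05}

# Joint probabilities P(H_n & A) -- the branches that survive the cut.
joint = {h: priors[h] * likelihood_A[h] for h in priors}
p_A = sum(joint.values())  # total mass on Result A: 0.21

# Renormalize: 21% is the new 100%.
posterior = {h: joint[h] / p_A for h in joint}
for h, p in posterior.items():
    print(f"P({h}|A) = {p:.3f}")
# P(H1|A) ≈ 0.714, P(H2|A) ≈ 0.238, P(H3|A) ≈ 0.048
```

The same two lines -- multiply, then renormalize -- implement every update in this post; only the numbers change.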
<p> </p>
<p>If, instead of Result A, we had gotten Result B,</p>
<p><img src="http://2.bp.blogspot.com/_Ig9I_03TGBQ/TAXmNUu1BpI/AAAAAAAAAAM/s9iVIdtmPy0/s1600/figure09.png" alt="Figure 9" height="480" width="460"></p>
<p><strong>                      Figure 9</strong></p>
<p><strong><br></strong></p>
<p>then our updated probability distribution would have looked like this:</p>
<p><img src="http://imgur.com/s9Tw5.png" alt=""></p>
<p><strong>                     Figure 10</strong></p>
<p> </p>
<p>Similarly, for Result C:</p>
<p><img src="http://imgur.com/9Ikc0.png" alt=""></p>
<p><strong>               Figure 11</strong></p>
<p><em>Bayes' theorem</em> is the formula that calculates these updated probabilities. Using H to stand for a hypothesis (such as H<sub>1</sub>, H<sub>2</sub> or H<sub>3</sub>), and E a piece of evidence (such as Result A, Result B, or Result C), it says:</p>
<p>P(H|E) = P(H)*P(E|H)/P(E)</p>
<p>In words: to calculate the updated probability P(H|E), take the portion of the prior probability of H that is allocated to E (i.e. the quantity P(H)*P(E|H)), and calculate what fraction this is of the total prior probability of E (i.e. divide it by P(E)).</p>
<p>What I like about this way of visualizing Bayes' theorem is that it makes the importance of prior probabilities -- in particular, the difference between P(H|E) and P(E|H) -- <em>visually obvious</em>. Thus, in the above example, we easily see that even though P(C|H<sub>3</sub>) is high (80%), P(H<sub>3</sub>|C) is much less high (around 51%) -- and once you have assimilated this visualization method, it should be easy to see that even more extreme examples (e.g. with P(E|H) huge and P(H|E) tiny) could be constructed.</p>
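To see how wide the gap between P(E|H) and P(H|E) can get, here are some hypothetical numbers (my own, not from the post): a hypothesis can make the evidence almost certain and still be almost certainly false after the evidence arrives, if its prior is small enough.

```python
# Hypothetical numbers illustrating P(E|H) huge, P(H|E) tiny.
p_H = 1e-4              # prior: H is very rare
p_E_given_H = 0.99      # H makes E almost certain
p_E_given_not_H = 0.05  # but E also sometimes happens without H

# Total prior probability of E (the denominator in Bayes' theorem).
p_E = p_H * p_E_given_H + (1 - p_H) * p_E_given_not_H
p_H_given_E = p_H * p_E_given_H / p_E
print(f"P(H|E) = {p_H_given_E:.4f}")  # ≈ 0.0020, despite P(E|H) = 0.99
```

The tiny prior means H's branch of the tree carries almost no mass to begin with, so even allocating 99% of that branch to E leaves it dwarfed by the mass the other branch allocates to E.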
<p>Now let's use this to examine two tricky probability puzzles, the infamous <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem">Monty Hall Problem</a> and Eliezer's <a href="/lw/1lh/drawing_two_aces/">Drawing Two Aces</a>, and see how it illustrates the correct answers, as well as how one might go wrong.</p>
<p> </p>
<h3><strong>The Monty Hall Problem</strong></h3>
<p>The situation is this: you're a contestant on a game show seeking to win a car. Before you are three doors, one of which contains a car, and the other two of which contain goats. You will make an initial "guess" at which door contains the car -- that is, you will select one of the doors, without opening it. At that point, the host will open a goat-containing door from among the two that you did not select. You will then have to decide whether to stick with your original guess and open the door that you originally selected, or switch your guess to the remaining unopened door. The question is whether it is to your advantage to switch -- that is, whether the car is more likely to be behind the remaining unopened door than behind the door you originally guessed.</p>
<p>(If you haven't thought about this problem before, you may want to try to figure it out before continuing...)</p>
<p> </p>
<p> </p>
<p>The answer is that it <em>is</em> to your advantage to switch -- that, in fact, switching <em>doubles</em> the probability of winning the car.</p>
<p>People often find this counterintuitive when they first encounter it -- where "people" includes the author of this post. There are two possible doors that could contain the car; why should one of them be more likely to contain it than the other?</p>
<p>As it turns out, while constructing the diagrams for this post, I "rediscovered" the error that led me to incorrectly conclude that there is a 1/2 chance the car is behind the originally-guessed door and a 1/2 chance it is behind the remaining door the host didn't open. I'll present that error first, and then show how to correct it. Here, then, is the <em>wrong</em> solution:</p>
<p>We start out with a perfectly correct diagram showing the prior probabilities:</p>
<p><img src="http://imgur.com/aXwYS.png" alt=""></p>
<p><strong>               Figure 12</strong></p>
<p>The possible hypotheses are Car in Door 1, Car in Door 2, and Car in Door 3; before the game starts, there is no reason to believe any of the three doors is more likely than the others to contain the car, and so each of these hypotheses has prior probability 1/3.</p>
<p>The game begins with our selection of a door. That itself isn't <a href="http://wiki.lesswrong.com/wiki/Evidence">evidence</a> about where the car is, of course -- we're assuming we have no particular information about that, other than that it's behind one of the doors (that's the whole point of the game!). Once we've done that, however, we will then have the opportunity to "run a test" to gain some "experimental data": the host will perform his task of opening a door that is guaranteed to contain a goat. We'll represent the result Host Opens Door 1 by a triangle, the result Host Opens Door 2 by a square, and the result Host Opens Door 3 by a pentagon -- thus carving up our hypothesis space more finely into possibilities such as "Car in Door 1 and Host Opens Door 2", "Car in Door 1 and Host Opens Door 3", etc.:</p>
<p><img src="http://imgur.com/bIxZr.png" alt=""></p>
<p>            <strong>Figure 13</strong></p>
<p><br>Before we've made our initial selection of a door, the host is equally likely to open either of the goat-containing doors. Thus, at the beginning of the game, each hypothesis of the form "Car in Door X and Host Opens Door Y" has a probability of 1/6, as shown. So far, so good; everything is still perfectly correct.</p>
<p>Now we select a door; say we choose Door 2. The host then opens either Door 1 or Door 3, to reveal a goat. Let's suppose he opens Door 1; our diagram now looks like this:<br><br><br><img src="http://imgur.com/0xMQs.png" alt=""></p>
<p>            <strong>Figure 14</strong></p>
<p>But this shows equal probabilities of the car being behind Door 2 and Door 3!</p>
<p><img src="http://imgur.com/07q9g.png" alt=""></p>
<p>                   <strong>Figure 15</strong></p>
<p>Did you catch the mistake?</p>
<p>Here's the <em>correct</em> version:<br><br><em>As soon as we selected Door 2</em>, our diagram should have looked like this:</p>
<p><img src="http://imgur.com/tKGgR.png" alt=""></p>
<p>                                <strong>Figure 16</strong></p>
<p> </p>
<p>With Door 2 selected, the host no longer has the <em>option</em> of opening Door 2; if the car is in Door 1, he <em>must</em> open Door 3, and if the car is in Door 3, he <em>must</em> open Door 1. We thus see that if the car is behind Door 3, the host is twice as <a href="http://wiki.lesswrong.com/wiki/Likelihood_ratio">likely</a> to open Door 1 (namely, 100%) as he is if the car is behind Door 2 (50%); his opening of Door 1 thus constitutes <a href="http://wiki.lesswrong.com/wiki/Amount_of_evidence">some evidence</a> in favor of the hypothesis that the car is behind Door 3. So, when the host opens Door 1, our picture looks as follows:</p>
<p><img src="http://imgur.com/5U47D.png" alt=""></p>
<p>               <strong>Figure 17</strong></p>
<p> </p>
<p>which yields the correct updated probability distribution:</p>
<p><img src="http://imgur.com/JYay2.png" alt=""></p>
<p>                <strong>Figure 18</strong></p>
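The corrected diagram translates directly into numbers. Here is a sketch of mine (not the post's), with our selection of Door 2 baked into the host's likelihoods:

```python
# P(Car in Door n): no door is favored a priori (Figure 12).
priors = {1: 1/3, 2: 1/3, 3: 1/3}
# P(Host opens Door 1 | car location), given that WE selected Door 2
# (Figure 16): he can open neither our door nor the car's door.
lik_opens_1 = {1: 0.0, 2: 0.5, 3: 1.0}

# Multiply priors by likelihoods, then renormalize (Figures 17-18).
joint = {d: priors[d] * lik_opens_1[d] for d in priors}
total = sum(joint.values())  # 1/2
posterior = {d: joint[d] / total for d in joint}
print(posterior)  # Door 2: 1/3, Door 3: 2/3 -- switching doubles our chances
```

The asymmetry lives entirely in the likelihood line: 1.0 versus 0.5 is the 2:1 ratio that makes the host's action evidence.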
<p> </p>
<h3>Drawing Two Aces</h3>
<p>Here is the statement of the problem, from <a href="/lw/1lh/drawing_two_aces/">Eliezer's post</a>:</p>
<blockquote>
<p><br>Suppose I have a deck of four cards:  The ace of spades, the ace of hearts, and two others (say, 2C and 2D).<br><br>You draw two cards at random.<br><br>(...)<br><br>Now suppose I ask you "Do you have an ace?"<br><br>You say "Yes."<br><br>I then say to you:  "Choose one of the aces you're holding at random (so if you have only one, pick that one).  Is it the ace of spades?"<br><br>You reply "Yes."<br><br>What is the probability that you hold two aces?</p>
</blockquote>
<p><br>(Once again, you may want to think about it, if you haven't already, before continuing...)</p>
<p> </p>
<p> </p>
<p>Here's how our picture method answers the question:</p>
<p><br>Since the person holding the cards has at least one ace, the "hypotheses" (possible card combinations) are the five shown below:</p>
<p><img src="http://imgur.com/a3dxW.png" alt=""></p>
<p><strong>      Figure 19</strong></p>
<p>Each has a prior probability of 1/5, since there's no reason to suppose any of them is more likely than any other. <br><br>The "test" that will be run is selecting an ace at random from the person's hand, and seeing if it is the ace of spades. The possible results are:</p>
<p><img src="http://imgur.com/XoXVj.png" alt="" height="446" width="330"></p>
<p>     <strong>Figure 20</strong></p>
<p> </p>
<p>Now we run the test, and get the answer "YES"; this puts us in the following situation:</p>
<p> </p>
<p><img src="http://imgur.com/b2oLJ.png" alt=""></p>
<p>     <strong>Figure 21</strong></p>
<p> </p>
<p>The total prior probability of this situation (the YES answer) is (1/10)+(1/5)+(1/5) = 1/2; thus, since 1/10 is 1/5 of 1/2 (that is, (1/10)/(1/2) = 1/5), our updated probability is 1/5 -- which happens to be the same as the prior probability. (I won't bother displaying the final post-update picture here.)</p>
<p>What this means is that the test we ran did not provide any additional information about whether the person has both aces beyond simply knowing that they have at least one ace; we might in fact say that the result of the test is <a href="http://wiki.lesswrong.com/wiki/Screening_off">screened off</a> by the answer to the first question ("Do you have an ace?").</p>
<p><br>On the other hand, if we had simply asked "Do you have the ace of spades?", the diagram would have looked like this:</p>
<p><img src="http://imgur.com/CWtH4.png" alt=""></p>
<p>     <strong>Figure 22</strong></p>
<p> </p>
<p>which, upon receiving the answer YES, would have become:</p>
<p><img src="http://imgur.com/oc1YQ.png" alt=""></p>
<p>  <strong>Figure 23</strong></p>
<p>The total probability mass allocated to YES is 3/5, and, within that, the specific situation of interest has probability 1/5; hence the updated probability would be 1/3.</p>
<p>So a YES answer in this experiment, unlike the other, would provide <a href="http://wiki.lesswrong.com/wiki/Evidence">evidence</a> that the hand contains both aces; for if the hand contains both aces, the probability of a YES answer is 100% -- twice as large as it is in the contrary case (50%), giving a <a href="http://wiki.lesswrong.com/wiki/Likelihood_ratio">likelihood ratio</a> of 2:1. By contrast, in the other experiment, the probability of a YES answer is only 50% even in the case where the hand contains both aces.</p>
<p><br>This is what people who try to explain the difference by uttering the opaque phrase "a random selection was involved!" are actually talking about: the difference between</p>
<p><img src="http://imgur.com/IWwTV.png" alt=""></p>
<p><strong>  Figure 24</strong></p>
<p> </p>
<p>and</p>
<p><img src="http://imgur.com/ugqVb.png" alt="">.</p>
<p><strong>  Figure 25</strong></p>
<p> </p>
<p> </p>
<p>The method explained here is far from the only way of visualizing Bayesian updates, but I feel that it is among the most intuitive.</p>
<p> </p>
<p>(<em>I'd like to thank my sister, </em><a href="/user/Vive-ut-Vivas/">Vive-ut-Vivas</a><em>, for help with some of the diagrams in this post.)</em></p></div>
Kevin on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23na
2010-06-03T06:40:59.871226+00:00
<div class="md"><p>This is great. I hope other people aren't hesitating to make posts because they are too "elementary". Content on Less Wrong doesn't need to be advanced; it just needs to be Not Wrong.</p></div>
prase on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nl
2010-06-03T09:28:38.919969+00:00
<div class="md"><blockquote>
<p>In my defense, we've had other elementary posts before, and they've been found useful; plus, I'd really like this to be online somewhere, and it might as well be here.</p>
</blockquote>
<p>It's quite interesting that people feel a need to defend themselves in advance when they think their post is elementary, but almost never feel the same obligation when the post is supposedly too hard, or off-topic, or inappropriate for some other reason. This is all the more interesting given that we have all probably read about the <a href="http://lesswrong.com/lw/kh/explainers_shoot_high_aim_low/">illusion of transparency</a>. Still, it seems that this sort of signalling is irresistible, even though (as the author's own defense notes) experience tells us that such posts usually meet with a positive reception.</p>
<p>As for my own part of the signalling: this comment was not meant as a criticism. However, I find it more useful if people defend themselves only after they are criticised or otherwise attacked.</p></div>
NancyLebovitz on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23o9
2010-06-03T11:21:22.763729+00:00
<div class="md"><p>It's conceivable that people being nervous about posts on elementary subjects means that they're more careful with elementary posts, thus explaining some fraction of the higher quality.</p></div>
prase on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oi
2010-06-03T12:59:12.190244+00:00
<div class="md"><p>It is possible. However, I am not sure that the elementary posts have higher average quality than other posts, if the comparison is even possible. Rather, what strikes me is that you never read "this is specialised and complicated, but nevertheless I decided to post it here, because..."</p>
<p>There still apparently is a perception that it's a shame to write down some relatively simple truth, and that if one wants to, one must have a damned good reason. I can understand the same mechanism in peer-reviewed journals, where the main aim is to impress the referee and demonstrate the author's status, which increases the chances of getting the article published. (If the article is trivial, it at least doesn't do harm to point out that the author knows it.) Although this practice has been criticised here many times, it seems really difficult to overcome. But at least we don't shun posts because they are elementary.</p></div>
RichardKennaway on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ok
2010-06-03T13:09:19.097491+00:00
<div class="md"><p>A piece of advice I heard a long time ago, and which has sometimes greatly alleviated the boredom of being stuck in a conference session, is this: If you're not interested in what the lecturer is talking about, study the lecture as a demonstration of how to give a lecture.</p>
<p>By this method even an expert can learn from a skilful exposition of fundamentals.</p></div>
retiredurologist on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23o5
2010-06-03T10:47:22.120852+00:00
<div class="md"><p>You might like <a href="http://oscarbonilla.com/2009/05/visualizing-bayes-theorem/" rel="nofollow">Oscar Bonilla's much simpler explanation</a>.</p></div>
komponisto on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23qh
2010-06-03T18:16:10.130647+00:00
<div class="md"><p>That's more or less the default visualization; unfortunately it hasn't proved particularly helpful to me, or at least not as much as the visualization presented here -- hence the need for this post.</p>
<p>The method presented in the post has a discrete, sequential, "flip-the-switch" feel to it which I find very suited to my style of thinking. If I had known how, I would have presented it as an animation.</p></div>
avalot on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/244y
2010-06-05T23:53:57.480276+00:00
<div class="md"><p>I don't have a very advanced grounding in math, and I've been skipping over the technical aspects of the probability discussions on this blog. I've been reading lesswrong by mentally substituting "smart" for "Bayesian", "changing one's mind" for "updating", and having to vaguely trust and believe instead of rationally understanding.</p>
<p>Now I absolutely get it. I've got the key to the sequences. Thank you very very much!</p></div>
prase on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nn
2010-06-03T09:39:30.622071+00:00
<div class="md"><p>It would be nice to have the <em>areas</em> of the blobs representing the percentages.</p></div>
JenniferRM on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23q7
2010-06-03T17:29:23.262871+00:00
<div class="md"><p>I was thinking about that too. Its actually something that comes up with the literature on "lying with statistics" where (either accidentally or out of a more or less subconscious attempt to convince) <a href="http://www.physics.csbsju.edu/stats/display.html" rel="nofollow">figures representing numbers are rescaled by a linear factor by the author causing super linear adjustments of area which is what the reader's visual processing really responds to</a>.</p>
<p>Generally, textbooks recommend boring linear scales (basically bar charts) with an unbroken reference line off to the side to compare numbers. However, if you <em>really want</em> to use images with area (and for this article they're a brilliant addition) then the correct thing to do is to decide the number you're representing as the area and work back to the number you have access to for a given figure (like a circle radius or a pentagon edge length or whatever) and adjust that number to your calculated value.</p></div>
Houshalter on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23sf
2010-06-03T20:59:08.305178+00:00
<div class="md"><p>I don't get it really. I mean, I get the method, but not the formula. Is this useful for anything though?</p>
<p>Also, a simpler method of explaining the Monty Hall problem is to think of it as if there were more doors. Let's say there were a million (that's alot ["a lot", grammar nazis] of goats). You pick one and the host eliminates every other door but one. The probability that you picked the right door is one in a million, but he had to make sure that the door he left unopened was the one with the car in it, <em>unless</em> you picked the one with the car in it, which is a one-in-a-million chance.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23td
2010-06-03T22:17:19.239908+00:00
<div class="md"><p>It might help to read the sequences, or just read Jaynes. In particular, one of the central ideas of the LW approach to rationality is that when one encounters new evidence one should update one's belief structure based on this new evidence and your estimates using Bayes' theorem. Roughly speaking, this is in contrast to what is sometimes described as "traditional rationalism" which doesn't emphasize updating on each piece of evidence but rather on updating after one has a lot of clearly relevant evidence.</p>
<p>Edit: Recommendation of Map-Territory sequence seems incorrect. Which sequence is the one to recommend here?</p></div>
Vive-ut-Vivas on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23tf
2010-06-03T22:34:16.821761+00:00
<div class="md"><p><a href="http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind">How to Actually Change your Mind</a> and <a href="http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions">Mysterious Answers to Mysterious Questions</a></p></div>
Houshalter on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23tv
2010-06-03T23:38:18.201204+00:00
<div class="md"><p>Updating your belief based on different pieces of evidence is useful, but (and it's a big but) just believing strange things based on incomplete evidence is bad. Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to make quick assumptions. Then, instead of testing whether these assumptions are correct, you just go with them wherever they take you. If only you could "learn how to learn" and use the Bayesian method on different methods of learning; e.g., test out different heuristics and see which ones give the best results. In the end, you find humans already do this to some extent, and "traditional rationalism" and science are based on the end result of this method. Is this making any sense? Sure, it's useful in some abstract sense and on various math problems, but you can't program a computer this way, nor can you live your life trying to compute statistics like this in your head.</p>
<p>Other than that, I can see different places where this would be useful.</p></div>
thomblake on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wf
2010-06-04T16:08:42.849423+00:00
<div class="md"><blockquote>
<p>nor can you live your life trying to compute statistics like this in your head</p>
</blockquote>
<p>And so <a href="http://yudkowsky.net/rational/virtues" rel="nofollow">it is written</a>, "Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims."</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ua
2010-06-04T11:55:47.959707+10:00
<div class="md"><p>I may not be the best person to reply to this given that I a) am much closer to being a traditional rationalist than a Bayesian and b) believe that the distinction between Bayesian rationalism and traditional rationalism is often exaggerated. I'll try to do my best.</p>
<blockquote>
<p>Updating your belief based on different pieces of evidence is useful, but (and it's a big but) just believing strange things based on incomplete evidence is bad.</p>
</blockquote>
<p>So how do you tell if a belief is strange? Presumably if the evidence points in one direction, one shouldn't regard that belief as strange. Can you give an example of a belief that should be considered not a good belief to have due to strangeness, and that one could plausibly have a Bayesian accept like this?</p>
<blockquote>
<p>Also, this neglects the fact of time. If you had an infinite amount of time to analyze every possible scenario, you could get away with this, but otherwise you have to just make quick assumptions.</p>
</blockquote>
<p>Well yes, and no. The Bayesian starts with some set of prior probability estimates, general heuristics about how the world seems to operate (reductionism and locality would probably be high up on the list). Everyone has to deal with the limits on time and other resources. That's why, for example, if someone claims that hopping on one foot cures colon cancer, we don't generally bother testing it. That's true for both the Bayesian and the traditionalist.</p>
<blockquote>
<p>Sure, it's useful in some abstract sense and on various math problems, but you can't program a computer this way, nor can you live your life trying to compute statistics like this in your head</p>
</blockquote>
<p>I'm curious as to why you claim that you can't program a computer this way. For example, automatic Bayesian curve fitting has been around for almost 20 years and is a useful machine learning mechanism. Sure, it is much more narrow than applying Bayesianism to understanding reality as a whole, but until we crack the general AI problem, it isn't clear to me how you can be sure that that's a fault of the Bayesian end and not the AI end. If we can understand how to make general intelligences I see no immediate reason why one couldn't make them be good Bayesians.</p>
<p>I agree that in general, trying to generally compute statistics in one's head is difficult. But I don't see why that rules out doing it for the important things. No one is claiming to be a perfect Bayesian. I don't think for example that any Bayesian when walking into a building tries to estimate the probability that the building will immediately collapse. Maybe they do if the building is very rickety looking, but otherwise they just think of it as so tiny as to not bother examining. But Bayesian updating is a useful way of thinking about many classes of scientific issues, as well as general life issues (estimates for how long it will take to get somewhere, estimates of how many people will attend a party based on the number invited and the number who RSVPed for example both can be thought of in somewhat Bayesian manners). Moreover, forcing oneself to do a Bayesian calculation can help bring into the light many estimates and premises that were otherwise hiding behind vagueness or implicit structures.</p></div>
Sniffnoy on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vc
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vc2010-06-04T09:24:37.325206+00:00
<div class="md"><blockquote>
<p>(reductionism and non-locality would probably be high up on the list).</p>
</blockquote>
<p>Guessing here you mean locality instead of nonlocality?</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vr
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vr2010-06-04T12:45:21.740233+00:00
<div class="md"><p>Yes, fixed thank you.</p></div>
Houshalter on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23um
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23um2010-06-04T03:05:45.252681+00:00
<div class="md"><blockquote>
<p>So how do you tell if a belief is strange? Presumably if the evidence points in one direction, one shouldn't regard that belief as strange. Can you give an example of a belief that should be considered not a good belief to have due to strangeness, and that one could plausibly have a Bayesian accept like this?</p>
</blockquote>
<p>Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it's not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you're talking about a computer, and there might be an easy way to prove it is or isn't. Under either scenario (whether it's a hallucination or reality is just weird), it makes an assumption and then has no reason to prove whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.</p></div>
<blockquote>
<p>I'm curious as to why you claim that you can't program a computer this way. For example, automatic Bayesian curve fitting has been around for almost 20 years and is a useful machine learning mechanism. Sure, it is much more narrow than applying Bayesianism to understanding reality as a whole, but until we crack the general AI problem, it isn't clear to me how you can be sure that that's a fault of the Bayesian end and not the AI end. If we can understand how to make general intelligences I see no immediate reason why one couldn't make them be good Bayesians.</p>
</blockquote>
<p>Well, if you can't program a viable AI out of it, then it's not a universal truth to rationality. Sure, you might be able to use it if it's complemented and powered by other mechanisms, but then it's not a universal truth, is it? That was my point. If it is an important tool, then I have no doubt that once we make AI, it will discover it itself, or may even have it in its original program.</p></div>
Sniffnoy on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vd
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vd2010-06-04T09:32:13.354568+00:00
<div class="md"><blockquote>
<p>Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it's not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you're talking about a computer, and there might be an easy way to prove it is or isn't. Under either scenario (whether it's a hallucination or reality is just weird), it makes an assumption and then has no reason to prove whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.</p>
</blockquote>
<p>Firstly, priors are important; if something has a low prior probability, it's not generally going to get to a high probability quickly. Secondly, not all evidence has the same strength. Remember in particular that the strength of evidence is measured by the likelihood ratio. If you see something that could likely be caused by hallucinations, that isn't necessarily very strong evidence for it; but hallucinations are not totally arbitrary, IINM. Still, if you witness objects spontaneously floating off the ground, even if you know this is an unlikely hallucination, the prior for some sort of gravity failure will be so low that the posterior will probably still be very low. Not that those are the only two alternatives, of course.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23uo
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23uo2010-06-04T03:16:46.218542+00:00
<div class="md"><blockquote>
<p>Well, for example, if you have a situation where the evidence leads you to believe that something is true, and there is an easy, simple, reliable test to prove it's not true, why would the Bayesian method waste its time? Imagine you witness something which could be possible, but is extremely odd. Like gravity not working or something. It could be a hallucination, or a glitch if you're talking about a computer, and there might be an easy way to prove it is or isn't. Under either scenario (whether it's a hallucination or reality is just weird), it makes an assumption and then has no reason to prove whether this is correct. Actually, that might have been a bad example, but pretty much every scenario you can think of, where making an assumption can be a bad thing and you can test the assumptions, would work.</p>
</blockquote>
<p>If there is an "easy, simple, reliable test" to determine the claim's truth within a high confidence, why do you think a Bayesian wouldn't make that test?</p>
<blockquote>
<p>Well, if you can't program a viable AI out of it, then it's not a universal truth to rationality.</p>
</blockquote>
<p>Can you expand your logic for this? In particular, it seems like you are using a definition of "universal truth to rationality" which needs to be expanded out.</p></div>
Houshalter on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vt
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vt2010-06-04T13:06:11.063355+00:00
<div class="md"><blockquote>
<p>If there is an "easy, simple, reliable test" to determine the claim's truth within a high confidence, why do you think a Bayesian wouldn't make that test?</p>
</blockquote>
<p>Because it's not a decision-making theory, but one that judges probability. The Bayesian method will examine what it has, and decide the probability of different situations. Other than that, it doesn't actually do anything. It takes an entirely different system to actually act on the information given. If it is a simple system and just assumes that whichever one has the highest probability is correct, then it isn't going to bother testing it.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vw
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vw2010-06-04T13:36:50.963935+00:00
<div class="md"><blockquote>
<p>The Bayesian method will examine what it has, and decide the probability of different situations. Other than that, it doesn't actually do anything. It takes an entirely different system to actually act on the information given. If it is a simple system and just assumes that whichever one has the highest probability is correct, then it isn't going to bother testing it.</p>
</blockquote>
<p>But a Bayesian won't assume that whichever claim has the highest probability is correct. That's one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn't going to lie to itself and say that the most probable claim now has a probability of 1. That's not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it like it has a much, much higher probability (this is due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn't do that. I don't know where you are getting this notion of a "simple system" that does that. If it did, it wouldn't be a Bayesian.</p></div>
Houshalter on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w0
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w02010-06-04T14:31:19.958000+00:00
<div class="md"><blockquote>
<p>But a Bayesian won't assume that whichever claim has the highest probability is correct. That's one of the whole points of a Bayesian approach: every claim is probabilistic. If one claim is more likely than another, the Bayesian isn't going to lie to itself and say that the most probable claim now has a probability of 1. That's not Bayesianism. You seem to be engaging in what may be a form of the mind projection fallacy, in that humans often take what seems to be a high-probability claim and then treat it like it has a much, much higher probability (this is due to a variety of cognitive biases such as confirmation bias and belief overkill). A good Bayesian doesn't do that. I don't know where you are getting this notion of a "simple system" that does that. If it did, it wouldn't be a Bayesian.</p>
</blockquote>
<p>I'm not exactly sure what you mean by all of this. How does a bayesian system make decisions if not by just going on its most probable hypothesis?</p></div>
jimrandomh on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w1
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w12010-06-04T15:04:35.446741+00:00
<div class="md"><p>To make decisions, you combine probability estimates of outcomes with a utility function, and maximize expected utility. A possibility with very low probability may nevertheless change a decision, if that possibility has a large enough effect on utility.</p></div>
AlephNeil on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w2
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w22010-06-04T15:05:17.032632+00:00
<div class="md"><p>You try to maximize your expected utility. Perhaps having done your calculations, you think that action X has a 5/6 chance of earning you £1 and a 1/6 chance of killing you (perhaps someone's promised you £1 if you play Russian Roulette).</p>
<p>Presumably you don't base your decision entirely on the most likely outcome.</p></div>
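The decision rule these two replies describe can be sketched in a few lines. A minimal sketch: the £1 payoff is from the comment above, but the utility assigned to death is an arbitrary stand-in for "very large and negative", not anything the thread specifies.

```python
def expected_utility(outcomes):
    """outcomes: a list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# The Russian Roulette offer: 5/6 chance of winning 1 (pound),
# 1/6 chance of dying. U_DEATH is an assumed stand-in; any sufficiently
# large negative utility yields the same decision.
U_DEATH = -1_000_000
play = expected_utility([(5/6, 1), (1/6, U_DEATH)])
decline = expected_utility([(1, 0)])

# Declining has higher expected utility, even though "earn the pound" is
# the most likely outcome of playing: the rare outcome dominates.
```

The point of the example is exactly jimrandomh's: a low-probability outcome with a large enough effect on utility flips the decision.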
Vive-ut-Vivas on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23t1
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23t12010-06-03T21:45:07.762385+00:00
<div class="md"><blockquote>
<p>Is this useful for anything though?</p>
</blockquote>
<p>Only for the stated purpose of this website - to be "less wrong"! :) Quoting from <a href="http://lesswrong.com/lw/qd/science_isnt_strict_enough/">Science Isn't Strict Enough</a>:</p>
<blockquote>
<p>But the Way of Bayes is also much harder to use than Science. It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.</p>
<p>In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.</p>
<p>But if you try to use Bayes even qualitatively - if you try to do the thing that Science doesn't trust you to do, and reason rationally in the absence of overwhelming evidence - it is like math, in that a single error in a hundred steps can carry you anywhere. It demands lightness, evenness, precision, perfectionism.</p>
<p>There's a good reason why Science doesn't trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they've worked out the right answer based on hints and logic.</p>
<p>But if you would rather not waste ten years trying to prove the wrong theory, you'll need to essay the vastly more difficult problem: listening to evidence that doesn't shout in your ear.</p>
</blockquote>
<p>As for the rest of your comment: I completely agree! That was actually the explanation that the OP, komponisto, gave to me to get Bayesianism (edit: I actually mean "the idea that probability theory can be used to override your intuitions and get to correct answers") to "click" for me (insofar as it has "clicked"). But the way that it's represented in the post is really helpful, I think, because it eliminates even the need to imagine that there are more doors; it addresses the specifics of that actual problem, and you can't argue with the numbers!</p></div>
cupholder on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23sw
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23sw2010-06-03T21:41:15.812086+00:00
<div class="md"><blockquote>
<p>I don't get it really. I mean, I get the method, but not the formula. Is this useful for anything though?</p>
</blockquote>
<p>Quite a bit! (<a href="http://scholar.google.com/scholar?q=" rel="nofollow" title="Bayes+theorem&quot;+&quot;Bayesian+estimation+of">A quick Google Scholar search</a> turns up about 1500 papers on methods and applications, and there are surely more.)</p>
<p>The formula tells you how to change your strength of belief in a hypothesis in response to evidence (this is 'Bayesian updating', sometimes shortened to just 'updating'). Because the formula is a trivial consequence of the definition of a conditional probability, it holds <em>in any situation</em> where you can quantify the evidence and the strength of your beliefs as probabilities. This is why many of the people on this website treat it as the foundation of reasoning from evidence; the formula is very general.</p>
<p>Eliezer Yudkowsky's <a href="http://yudkowsky.net/rational/bayes" rel="nofollow">Intuitive Explanation of Bayes' Theorem</a> page goes into this in more detail and at a slower pace. It has a few nice Java applets that you can use to play with some of the ideas with specific examples, too.</p></div>
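The updating rule described above fits in one line of code. A sketch: the prior and the two likelihoods below are invented purely for illustration, not taken from any of the linked material.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from Bayes' theorem, for hypothesis H and evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A 1% prior, with evidence that has an 80% true-positive rate and a
# 10% false-positive rate: the posterior rises to only about 7.5%,
# because the prior was so low to begin with.
posterior = bayes_update(prior=0.01, p_e_given_h=0.80, p_e_given_not_h=0.10)
```

This is the sense in which the strength of evidence is carried by the likelihood ratio (0.80/0.10 here), while the prior keeps an initially implausible hypothesis from jumping straight to high probability.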
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23sj
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23sj2010-06-03T21:08:55.705964+00:00
<div class="md"><blockquote>
<p>I don't get it really. I mean, I get the method, but not the formula. Is this useful for anything though?</p>
</blockquote>
<p>There's a significant population of people - disproportionately represented here - who consider Bayesian reasoning to be theoretically superior to the ad hoc methods habitually used. An introductory essay on the subject that many people here read and agreed with is <a href="http://yudkowsky.net/rational/technical" rel="nofollow">A Technical Explanation of Technical Explanation</a>.</p></div>
RomanDavis on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23si
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23si2010-06-03T21:05:20.140401+00:00
<div class="md"><blockquote>
<p>Also, a simpler method of explaining the Monty Hall problem is to think of it if there were more doors. Let's say there were a million (that's alot ["a lot," grammar nazis] of goats). You pick one and the host eliminates every other door except one. The probability you picked the right door is one in a million, but he had to make sure that the door he left unopened was the one that had the car in it, unless you picked the one with the car in it, which is a one in a million chance.</p>
</blockquote>
<p>That's awesome. I shall use it in the future. Wish I could multi upvote.</p></div>
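The many-doors intuition quoted above is easy to check by simulation. A quick sketch; the door counts and trial counts are arbitrary choices for illustration:

```python
import random

def switch_wins(n_doors):
    """One Monty Hall round where the host opens all but one other door."""
    car = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # The host never opens the car door, so the one door he leaves closed
    # holds the car unless your original one-in-n pick was already right.
    # Switching therefore wins exactly when your first pick was wrong.
    return pick != car

def win_rate(n_doors, trials=100_000):
    return sum(switch_wins(n_doors) for _ in range(trials)) / trials

# Switching wins with probability 1 - 1/n: about 2/3 with 3 doors,
# and nearly always with a large number of doors.
```

With a million doors the switcher loses only on the one-in-a-million chance that the first pick was the car, which is the quoted argument in code form.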
mhomyack on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wo
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wo2010-06-04T16:29:37.792223+00:00
<div class="md"><p>The way I like to think of the Monty Hall problem is like this... if you had the choice of picking either one of the three doors or two of the three doors (if the car is behind either, you win it), you would obviously pick two of the doors to give yourself a 2/3 chance of winning. Similarly, if you had picked your original door and then Monty asked if you'd trade your one door for the other two doors (all sight unseen), it would again be obvious that you should make the trade. Now... when you make that trade, you know that at least one of the doors you're getting in trade has a goat behind it (there's only one car, you have two doors, so you have to have at least one goat). So, given that knowledge and the certainty that trading one door for two is the right move (statistically), would seeing the goat behind one of the doors you're trading for before you make the trade change the wisdom of the trade? You KNOW that you're getting at least one goat in either case. Most people who I've explained it to in this way seem to see that making the trade still makes sense (and is equivalent to making the trade in the original scenario).</p>
<p>I think the struggle is that people tend to dismiss the existence of the third door once they see what's behind it. It sort of drops out of the picture as a resolved thing, and then the mind erroneously reformulates the situation with just the two remaining doors. The scary thing is that people are generally quite easily manipulated with these sorts of puzzles, and there are plenty of circumstances (DNA evidence given during jury trials comes to mind) where the probabilities being presented are wildly misleading as the result of erroneously eliminating segments of the problem space because they are "known".</p></div>
Vive-ut-Vivas on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23tb
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23tb2010-06-03T22:11:54.824369+00:00
<div class="md"><p>One more application of Bayes I should have mentioned: <a href="http://wiki.lesswrong.com/wiki/Aumann%27s_agreement_theorem">Aumann's Agreement Theorem</a>.</p></div>
hegemonicon on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oe
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oe2010-06-03T12:35:00.320734+00:00
<div class="md"><p>This is a fantastic explanation (which I like better than the 'simple' explanation retired urologist links to below), and I'll tell you why.</p>
<p>You've transformed the theorem into a spatial representation, which is always great - since I rarely use Bayes Theorem I have to essentially 'reconstruct' how to apply it every time I want to think about it, and I can do that much easier (and with many fewer steps) with a picture like this than with an example like breast cancer (which is what I would do previously).</p>
<p>Critically, you've represented the WHOLE problem visually - all I have to do is picture it in my head and I can 'read' directly off of it, I don't have to think about any other concepts or remember what certain symbols mean. Another plus, you've included the actual numbers used for maximum transparency into what transformations are actually taking place. It's a very well done series of diagrams.</p>
<p>If I had one (minor) quibble, it would be that you should represent the probabilities of the various hypotheses occurring visually as well - perhaps using line weights, or split lines like in <a href="http://www.geni.org/globalenergy/library/energytrends/world-regions/usa-canada/graphics/energy-use_big.jpg" rel="nofollow">this</a> diagram.</p>
<p>But very well done, thank you.</p>
<p>(edit: I'd also agree with cousin_it that the first half of the post is the stronger part. The diagrams are what make this so great, so stick with them!)</p></div>
DanielVarga on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nb
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nb2010-06-03T06:53:11.136950+00:00
<div class="md"><p>Wonderful. Are you aware of the Tuesday Boy problem? I think it could have been a more impressive second example.</p>
<blockquote>
<p>"I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?"</p>
</blockquote>
<p>(The intended interpretation is that I have two children, and at least one of them is a boy-born-on-a-Tuesday.)</p>
<p>I found it here: <a href="http://www.newscientist.com/article/dn18950-magic-numbers-a-meeting-of-mathemagical-tricksters.html?full=true" rel="nofollow">Magic numbers: A meeting of mathemagical tricksters</a></p></div>
ciphergoth on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nx
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nx2010-06-03T10:14:07.097324+00:00
<div class="md"><p>I always much prefer these stated as questions - you stop someone and say "Do you have exactly two children? Is at least one of them a boy born on a Tuesday?" and they say "yes". Otherwise you get into wondering what the probability they'd say such a strange thing given various family setups might be, which isn't precisely defined enough...</p></div>
Emile on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ol
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ol2010-06-03T13:17:35.543716+00:00
<div class="md"><p>Very true. The article DanielVarga linked to says:</p>
<blockquote>
<p>If you have two children, and one is a boy, then the probability of having two boys is significantly different if you supply the extra information that the boy was born on a Tuesday. Don't believe me? We'll get to the answer later.</p>
</blockquote>
<p>... which is just wrong: whether it is different depends on how the information was obtained. If it was:</p>
<blockquote>
<p>-- On which day of the week was your youngest boy born ?</p>
<p>-- On a Tuesday.</p>
</blockquote>
<p>... then there's zero new information, so the probability stays the same, 1/3rd.</p>
<p>(ETA: actually, to be closer to the original problem, it should be "Select one of your sons at random and tell me the day he was born", but the result is the same.)</p></div>
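The disclosure-algorithm point above can be checked by simulation. A sketch with arbitrary labels: Tuesday is coded as day 1, and the parent is assumed to follow exactly the "pick a random son and report his day" procedure.

```python
import random

def family():
    """Two children, each an independent (sex, weekday) pair."""
    return [(random.choice("BG"), random.randrange(7)) for _ in range(2)]

TUESDAY = 1  # arbitrary numeric label for Tuesday
tuesday_reports = two_boys = 0
while tuesday_reports < 50_000:
    kids = family()
    sons = [k for k in kids if k[0] == "B"]
    if not sons:
        continue  # no son to report on
    # The parent selects one son at random and reports his birth day.
    if random.choice(sons)[1] == TUESDAY:
        tuesday_reports += 1
        two_boys += len(sons) == 2

# two_boys / tuesday_reports comes out near 1/3, not 13/27: under this
# disclosure algorithm the day carries no information about a second boy.
```

Contrast this with conditioning directly on "at least one son is a Tuesday boy", which does give 13/27; the two procedures select different populations.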
Christian_Szegedy on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ty
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ty2010-06-04T00:09:44.341848+00:00
<div class="md"><p>I think the only reasonable interpretation of the text is clear since otherwise other standard problems would be ambiguous as well:</p>
<p>"What is the probability that a person's random coin toss is tails?"</p>
<p>It does not matter whether you get the information from an experimenter by asking "Tell me the result of your flip!" or "Did you get tails?". You just have to stick to the original text (tails) when you evaluate the answer in either case.</p>
<p>[EDIT] I think I misinterpreted your comment. I agree that Daniel's <em>introduction</em> was ambiguous for the reasons you have given.</p>
<p>Still the wording "I have two children, and at least one of them is a boy-born-on-a-Tuesday." he has given clarifies it (and makes it well defined under the standard assumptions of indifference).</p></div>
DanielVarga on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23v3
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23v32010-06-04T08:33:00.238773+00:00
<div class="md"><p>Yesterday I told the problem to a smart non-math-geek friend, and he totally couldn't relate to this "only reasonable interpretation". He completely understood the argument leading to 13/27, but just couldn't understand why we assume that the presenter is a randomly chosen member of the population he claims himself to be a member of. That sounded like a completely baseless assumption to him, one that leads to factually incorrect results. He even understood that assuming it is our only choice if we want to get a well-defined math problem, and that it is the only way to utilize all the information presented to us in the puzzle. But all this was not enough to convince him that he should assume something so stupid.</p></div>
Christian_Szegedy on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x7
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x72010-06-04T18:03:52.563758+00:00
<div class="md"><p>For me, the eye opener was this outstanding paper by E.T. Jaynes:</p>
<p><a href="http://bayes.wustl.edu/etj/articles/well.pdf" rel="nofollow">http://bayes.wustl.edu/etj/articles/well.pdf</a></p>
<p>IMO this describes the essence of the difference between the Bayesian and frequentist philosophy way better than any amount of colorful polygons. ;)</p></div>
ciphergoth on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ox
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ox2010-06-03T14:27:58.903736+00:00
<div class="md"><p>I get that assuming that genders and days of the week are equiprobable, of all the people with exactly two children, at least one of whom is a boy born on a Tuesday, 13/27 have two boys.</p></div>
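That 13/27 figure can be confirmed by brute-force enumeration over the 14 equally likely (sex, weekday) pairs per child; the numeric label chosen for Tuesday below is arbitrary.

```python
from itertools import product

children = list(product("BG", range(7)))  # 14 equally likely possibilities
TUESDAY_BOY = ("B", 1)                    # Tuesday coded as day 1

# All ordered two-child families with at least one Tuesday boy:
families = [f for f in product(children, repeat=2) if TUESDAY_BOY in f]
both_boys = [f for f in families if all(sex == "B" for sex, _ in f)]

print(len(both_boys), len(families))  # 13 27
```

The 27 in the denominator is 14 + 14 - 1 (first child is the Tuesday boy, or the second is, minus the double-counted case where both are), and 13 of those families have two boys.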
Emile on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23pn
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23pn2010-06-03T15:53:05.659103+00:00
<div class="md"><p>True, but if you go around asking people-with-two-children-at-least-one-of-which-is-a-boy "Select one of your sons at random, and tell me the day of the week on which he was born", among those who answer "Tuesday", one-third will have two boys.</p>
<p>(for a sufficiently large set of people-with-two-children-at-least-one-of-which-is-a-boy who answer your question instead of giving you a weird look)</p>
<p>I'm just saying that the article used an imprecise formulation, that could be interpreted in different ways - especially the bit "if you supply the extra information that the boy was born on a Tuesday", which is why asking questions the way you did is better.</p></div>
neq1 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23p5
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23p52010-06-03T14:49:38.563521+00:00
<div class="md"><p>It seems to me that the standard solutions don't account for the fact that there are a non-trivial number of families who are more likely to have a 3rd child, if the first two children are of the same sex. Some people have a sex-dependent stopping rule.</p>
<p>P(first two children different sexes | you have exactly two children) > P(first two children different sexes | you have more than two children)</p>
<p>The other issue with this kind of problem is the ambiguity. What was the disclosure algorithm? How did you decide which child to give me information about? Without that knowledge, we are left to speculate.</p></div>
JenniferRM on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23qg
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23qg2010-06-03T18:14:09.324068+00:00
<div class="md"><p>This issue is also sometimes raised in cultures where male children are much more highly prized by parents.</p>
<p>Most people falsely assume that such a bias, as it stands, changes gender ratios for the society, but its only real effect is that correspondingly larger and rarer families have lots of girls. Such societies typically <em>do</em> have weird gender ratios, but this is mostly due to higher death rates before birth because of selective abortion, or after birth because some parents in such societies feed girls less, teach them less, work them more, and take them to the doctor less.</p>
<p>Suppose the rules for deciding to have a child without selective abortion (and so with basically 50/50 odds of either gender) and no unfairness post-birth were: If you have a boy, stop; if you have no boy but have fewer than N children, have another. In a scenario where N > 2, two-child families are either a girl and a boy, or two girls during a period when their parents still intend to have a third. Because that window is small relative to the length of time that families exist to be sampled, most two-child families (>90%?) would be gender-balanced.</p>
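The stopping rule above is easy to simulate, and the claim that it leaves the population sex ratio untouched holds up. A sketch with N = 3 and an arbitrary sample size:

```python
import random

def family(max_children=3):
    """Have children (independent 50/50 births) until a boy, up to a cap."""
    kids = []
    while len(kids) < max_children:
        kids.append(random.choice("BG"))
        if kids[-1] == "B":
            break
    return kids

families = [family() for _ in range(100_000)]
all_kids = [k for f in families for k in f]
boy_ratio = all_kids.count("B") / len(all_kids)
# boy_ratio stays near 0.5: each birth is still a fair coin, so the rule
# only reshapes family sizes (the large, rare families are the all-girl ones).
```

Each birth is an independent fair coin regardless of when parents stop, so no stopping rule alone can shift the overall ratio; only the family-size distribution changes.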
<p>Generally, my impression is that parental preferences for one sex or the other (or for gender balance) are out of bounds in these kinds of questions, because we're supposed to assume platonically perfect family-generating processes with <em>exact</em> 50/50 odds, no parental biases, and so on. My impression is that cultural literacy is supposed to supply the platonic model. If non-platonic assumptions are operating, then different answers are expected as different people bring in different evidence (like probabilities of lying and so forth). If real-world factors sneak in later while platonic assumptions are allowed to stand, then it's a case of a bad teacher who expects you to <a href="http://lesswrong.com/lw/iq/guessing_the_teachers_password/">guess the password</a> of precisely which evidence they want imported, and which excluded.</p>
<p>This issue of signaling which evidence to import is kind of subtle, and people get it wrong a lot when they try to tell a paradox. Having messed them up in the past, I think it's harder than telling a new joke the first time, and uses similar skills :-)</p></div>
Fyrius on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/249v
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/249v2010-06-07T12:56:09.636705+00:00
<div class="md"><p><strong>Note before you start calculating this:</strong> There's a distinction between the "first" and the "second" child made in the article. To avoid the risk of having to calculate all over again, take this into account if you want to compare your results to theirs.</p>
<p>I calculated the probability without knowing this, so I just counted BG and GB as one scenario, where there's one girl and one boy. That means that without the Tuesday fact the probability of another boy is 1/2, not 1/3.</p>
<p>(I ended up at a posterior probability of 2/3, by the way.)</p></div>
Morendil on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/2ivx
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/2ivx2010-08-28T22:12:05.041479+00:00
<div class="md"><p>The Tuesday Boy was loads of fun to think about the first time I came across it - thanks to the parent comment. I worked through the calculations with my 14yo son, on a long metro ride, as a way to test my understanding - he seemed to be following along fine.</p>
<p>The discussion in the comments on <a href="http://johncarlosbaez.wordpress.com/2010/08/24/probability-puzzles-from-egan/" rel="nofollow">this blog on Azimuth</a> has, however, taken the delight to an entirely new level. Just wanted to share.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oh
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oh2010-06-03T12:54:46.869625+00:00
<div class="md"><p>Actually, a Bayesian and a frequentist can have different answers to this problem. It resides on what distribution you are using to decide to tell me that a boy is born on Tuesday. The standard answer ignores this issue.</p></div>
DanielVarga on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ov
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ov2010-06-03T14:20:06.586485+00:00
<div class="md"><p>I don't know much about the philosophy of statistical inference. But I am dead sure that if the Bayesian and the frequentist really do ask the same question, then they will get the same answer. There is a nice <a href="http://groups.google.com/group/sci.math/msg/533b0c7a50bf291a" rel="nofollow">spoiler post</a> where the possible interpretations of the puzzle are clearly spelled out. Do you suggest that some of these interpretations are preferred by either a frequentist or a Bayesian?</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oy
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23oy2010-06-03T14:32:30.499142+00:00
<div class="md"><p>Well, essentially, focusing on that coin flip is a very Bayesian thing to do. A frequentist approach to this problem won't imagine the prior coin flip often. See Eliezer's post about this <a href="http://lesswrong.com/lw/ul/my_bayesian_enlightenment/">here</a>. I agree however that a careful frequentist should get the same results as a Bayesian if they are careful in this situation. What results one gets depends in part on what exactly one means by a frequentist here.</p></div>
bigjeff5 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a85n
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a85n2013-12-22T01:18:46.815211+00:00
<div class="md"><p>Just so it's clear, since it didn't seem super clear to me from the other comments, the solution to the Tuesday Boy problem given in that article is a really clever way to get the answer wrong.</p>
<p>The problem is the way they use the Tuesday information to confuse themselves. For some reason not stated anywhere in the problem, they assume that both boys cannot be born on Tuesday. I see no justification for this; there is no natural one, not even if they were born on the exact same day rather than just the same day of the week. Twins exist! Using their same bizarre reasoning but adding back the day they took out, I get the correct answer of 50% (14/28), instead of the close but incorrect answer of 48% (13/27).</p>
<p>Using proper Bayesian updating from the prior probabilities for two children (25% two boys, 50% one of each, 25% two girls), given the information that you have one boy, regardless of when he was born, gets you a 50% chance they're both boys. Since knowing only one of the sexes doesn't give any extra information regarding the probability of having one child of each sex, all of the probability for both being girls gets shifted to both being boys.</p>
Jiro on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a85z
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a85z2013-12-22T03:41:40.129875+00:00
<div class="md"><p>No, that's not right. They don't assume that both boys can't be born on Tuesday. Instead, what they are doing is pointing out that although there is a scenario where both boys are born on Tuesday, they can't count it twice--of the situations with a boy born on Tuesday, there are 6 non-Tuesday/Tuesday, 6 Tuesday/non-Tuesday, and only 1, not 2, Tuesday/Tuesday.</p>
<p>Actually, "one of my children is a boy born on Tuesday" is ambiguous. If it means "I picked the day Tuesday at random, and it so happens that one of my children is a boy born on the day I picked", then the stated solution is correct. If it means "I picked one of my children at random, and it so happens that child is a boy, and it also so happens that child was born on Tuesday", the stated solution is not correct and the day of the week has no effect on the probability</p></div>
bigjeff5 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a862
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a8622013-12-22T04:16:49.832928+00:00
<div class="md"><p>No, read it again. It's confusing as all getout, which is why they make the mistake, but EACH child can be born on ANY day of the week. The boy on Tuesday is a red herring, he doesn't factor into the probability for what day the second child can be born on at all. The two boys are not the same boys, they are individuals and their probabilities are individual. Re-label them Boy1 and Boy2 to make it clearer:</p>
<p>Here is the breakdown for the Boy1Tu/Boy2Any option:</p>
<p>Boy1Tu/Boy2Monday
Boy1Tu/Boy2Tuesday
Boy1Tu/Boy2Wednesday
Boy1Tu/Boy2Thursday
Boy1Tu/Boy2Friday
Boy1Tu/Boy2Saturday
Boy1Tu/Boy2Sunday</p>
<p>Then the BAny/Boy1Tu option:</p>
<p>Boy2Monday/Boy1Tu
Boy2Tuesday/Boy1Tu
Boy2Wednesday/Boy1Tu
Boy2Thursday/Boy1Tu
Boy2Friday/Boy1Tu
Boy2Saturday/Boy1Tu
Boy2Sunday/Boy1Tu</p>
<p>Seven options for both. For some reason they claim either BTu/Tuesday isn't an option, or Tuesday/BTu isn't an option, but I see no reason for this. Each boy is an individual, and each boy has a 1/7 probability of being born on a given day. In attempting to avoid counting evidence twice, you've skipped counting a piece of evidence entirely! In the original statement, they never said one and ONLY one boy was born on Tuesday, just that one was born on Tuesday. That's where they screwed up - they've denied the second boy the option of being born on Tuesday for no good reason.</p>
<p>A key insight that should have triggered their intuition that their method was wrong was that they state that if you can find a trait rarer than being born on Tuesday, like say being born on the 27th of October, then you'll approach 50% probability. That is true because the actual probability is 50%.</p></div>
Jiro on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a863
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a8632013-12-22T04:24:10.044809+00:00
<div class="md"><blockquote>
<p>Here is the breakdown for the Boy1Tu/Boy2Any option:</p>
<p>Boy1Tu/Boy2Tuesday</p>
<p>Then the BAny/Boy1Tu option:</p>
<p>Boy2Tuesday/Boy1Tu</p>
</blockquote>
<p>You're double-counting the case where both boys are born on Tuesday, just like they said.</p>
<blockquote>
<p>A key insight that should have triggered their intuition that their method was wrong was that they state that if you can find a trait rarer than being born on Tuesday, like say being born on the 27th of October, then you'll approach 50% probability.</p>
</blockquote>
<p>If you find a trait rarer than being born on Tuesday, the double-counting is a smaller percentage of the scenarios, so being closer to 50% is expected.</p></div>
bigjeff5 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a867
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a8672013-12-22T04:55:28.167034+00:00
<div class="md"><p>I see my mistake, here's an updated breakdown:</p>
<p>Boy1Tu/Boy2Any</p>
<p>Boy1Tu/Boy2Monday
Boy1Tu/Boy2Tuesday
Boy1Tu/Boy2Wednesday
Boy1Tu/Boy2Thursday
Boy1Tu/Boy2Friday
Boy1Tu/Boy2Saturday
Boy1Tu/Boy2Sunday</p>
<p>Then the Boy1Any/Boy2Tu option:</p>
<p>Boy1Monday/Boy2Tu
Boy1Tuesday/Boy2Tu
Boy1Wednesday/Boy2Tu
Boy1Thursday/Boy2Tu
Boy1Friday/Boy2Tu
Boy1Saturday/Boy2Tu
Boy1Sunday/Boy2Tu</p>
<p>See 7 days for each set? They aren't interchangeable even though the label "boy" makes it seem like they are.</p>
<p>Do the Bayesian probabilities instead to verify, it comes out to 50% even.</p></div>
shinoteki on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86a
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86a2013-12-22T05:25:35.031821+00:00
<div class="md"><p>What's the difference between</p>
<blockquote>
<p>Boy1Tu/Boy2Tuesday</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Boy1Tuesday/Boy2Tu</p>
</blockquote>
<p>?</p></div>
bigjeff5 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86b
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86b2013-12-22T05:35:18.222784+00:00
<div class="md"><p>In Boy1Tu/Boy2Tuesday, the boy referred to as BTu in the original statement is boy 1, in Boy2Tu/Boy1Tuesday the boy referred to in the original statement is boy2.</p>
<p>That's why the "born on tuesday" is a red herring, and doesn't add any information. How could it?</p></div>
Jiro on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86c
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86c2013-12-22T05:51:45.726084+00:00
<div class="md"><p>This sounds like you are trying to divide "two boys born on Tuesday" into "two boys born on Tuesday and the person is talking about the first boy" and "two boys born on Tuesday and the person is talking about the second boy".</p>
<p>That doesn't work because you are now no longer dealing with cases of equal probability. "Boy 1 Monday/Boy 2 Tuesday", "Boy 1 Tuesday/Boy 2 Tuesday", and "Boy 1 Tuesday/Boy 2 Monday" all have equal probability. If you're creating separate cases depending on which of the boys is being referred to, the first and third of those don't divide into separate cases but the second one does, each with half the probability of the first and third.</p>
<blockquote>
<p>doesn't add any information. How could it?</p>
</blockquote>
<p>As I pointed out above, whether it adds information (and whether the analysis is correct) depends on exactly what you mean by "one is a boy born on Tuesday". If you picked "boy" and "Tuesday" at random first, and then noticed that one child met that description, that rules out cases where no child happened to meet the description. If you picked a child first and then noticed he was a boy born on a Tuesday, but if it was a girl born on a Monday you would have said "one is a girl born on a Monday", you are correct that no information is provided.</p></div>
bigjeff5 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86d
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a86d2013-12-22T06:06:39.978811+00:00
<div class="md"><p>The only relevant information is that one of the children is a boy. There is still a 50% chance the second child is a boy and a 50% chance that the second child is a girl. Since you already know that one of the children is a boy, the posterior probability that they are both boys is 50%.</p>
<p>Rephrase it this way:</p>
<p>I have flipped two coins. One of the coins came up heads. What is the probability that both are heads?</p>
<p>Now, to see why Tuesday is irrelevant, I'll re-state it thusly:</p>
<p>I have flipped two coins. One I flipped on a Tuesday and it came up heads. What is the probability that both are heads?</p>
<p>The sex of one child has no influence on the sex of the other child, nor does the day on which either child was born influence the day any other child was born. There is a 1/7 chance that child 1 was born on each day of the week, and there is a 1/7 chance that child 2 was born on each day of the week. There is a 1/49 chance that both children will be born on any given day (1/7*1/7), for a 7/49 or 1/7 chance that both children will be born on the same day. That's your missing 1/7 chance that gets removed inappropriately from the Tuesday/Tuesday scenario.</p></div>
bigjeff5 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a866
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/a8662013-12-22T04:48:30.698997+00:00
<div class="md"><p>Which boy did I count twice?</p>
<p>Edit:</p>
<p>BAny/Boy1Tu in the above quote should be Boy2Any/Boy1Tu.</p>
<p>You could re-label boy1 and boy2 to be cat and dog and it won't change the probabilities - that would be CatTu/DogAny.</p></div>
ABranco on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23od
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23od2010-06-03T12:34:18.678168+00:00
<div class="md"><p>I'm so happy: I've just got this one right, before looking at the answer. It's damn beautiful.</p>
<p>Thanks for sharing.</p></div>
pjeby on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23p1
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23p12010-06-03T14:37:18.801818+00:00
<div class="md"><blockquote>
<p>I'm so happy: I've just got this one right, before looking at the answer. It's damn beautiful.</p>
</blockquote>
<p>Same here. It was a perfect test, as I've never seen the Tuesday Boy problem before. Took a little wrangling to get it all to come out in sane fractions, and I was staring at the final result going, "that <em>can't</em> be right", but sure enough, it was <em>exactly</em> right.</p>
<p>(Funny thing: my original intuition about the problem wasn't that far off. I was simply ignoring the part about Tuesday, and focusing on the prior probability that the other child was a boy. It gives a close, but not-quite-right answer.)</p></div>
benwade on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/b0rp
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/b0rp2014-06-20T00:51:28.246748+00:00
<div class="md"><p>Thank you Komponisto,</p>
<p>I have read many explanations of Bayesian theory, and like you, if I concentrated hard enough I could follow the reasoning, but I could never reason it out for myself. Now I can. Your explanation was perfect for me. It not only enabled me to "grok" the Monty Hall problem, but Bayesian calculations in general, while being able to retain the theory.</p>
<p>Thank you again, Ben</p></div>
jasonsaied on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/5yfl
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/5yfl2012-03-03T20:44:12.146096+00:00
<div class="md"><p>Thank you very much for this. Until you put it this way, I could not grasp the Monty Hall problem; I persisted in believing that there would be a 50/50 chance once the door was opened. Thank you for changing my mind.</p></div>
cousin_it on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23np
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23np2010-06-03T09:41:20.339768+00:00
<div class="md"><p>Sigh. Of course I upvoted this, but...</p>
<p>The first part, the abstract part, was a joy to read. But the Monty Hall part started getting weaker, and the Two Aces part I didn't bother reading at all. What I'd have done differently if your awesome idea for a post had come to me first: remove the jarring false tangent in Monty Hall, make all diagrams identical in style to the ones in the first part (colors, shapes, fonts, lack of borders), never mix percentages and fractions in the same diagram, use cancer screening as your first motivating example, Monty Hall as the second example, Two Aces as an exercise for the readers - it's essentially a variant of Monty Hall.</p>
<p>Also, indicate more clearly in the Monty Hall problem statement that whenever the host can open either of two doors, he chooses each of them with probability 50%, rather than (say) always opening the lower-numbered one. Without this assumption the answer could be different.</p>
<p>Sorry for the criticisms. It's just my envy and frustration talking. Your post had the potential to be so completely awesome, way better than Eliezer's explanation, but the tiny details broke it.</p></div>
Alexandros on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nv
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23nv2010-06-03T10:08:55.977115+00:00
<div class="md"><p>this does seem like the type of article that should be a community effort.. perhaps a wiki entry?</p></div>
moshez on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/3lrl
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/3lrl2011-02-25T21:02:02.432639+00:00
<div class="md"><p>When I tried to explain Bayes' to some fellow software engineers at work, I came up with <a href="http://moshez.wordpress.com/2011/02/06/bayes-theorem-for-programmers/" rel="nofollow">http://moshez.wordpress.com/2011/02/06/bayes-theorem-for-programmers/</a></p></div>
Lysseas on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/6c2e
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/6c2e2012-04-13T20:55:44.243249+00:00
<div class="md"><p>I tend to think out Monty Hall like this: The probability you have chosen the door hiding the car is 1/3. Once one of the other two doors is shown to hide a goat, the probablity of the third door hiding the car must be 2/3. Therefore you double your chances to win the car by switching.</p></div>
handoflixue on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/47oo
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/47oo2011-05-19T20:32:25.744876+00:00
<div class="md"><p>The Monty Hall problem seems like it can be simplified: Once you've picked the door, you can switch to instead selecting the two alternate doors. You know that one of the alternate doors contains a goat (since there's only one car), which is equivalent to having Monty open one of those two doors.</p>
<p>The trick is simply the assumption that Monty is actually introducing any new information.</p>
<p>Not sure if it's helpful to anyone else, but it just sort of clicked reading it this time :)</p></div>
Nanani on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/2489
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24892010-06-07T02:29:00.837644+00:00
<div class="md"><p>Wow, that was great!
I already had a fairly good understanding of the Theorem, but this helped cement it further and helped me compute a bit faster.</p>
<p>It also gave me a good dose of learning-tingles, for which I thank you.</p></div>
sguin2lesswrong on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23zh
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23zh2010-06-04T20:52:42.718306+00:00
<div class="md"><p>In figure 10 above, should the second blob have a value of 72.9% ? I noticed that the total of all the percents are only adding up to 97% with the current values.
I calculated as follows:
New 100% value: 10% + 35% + 3% = 48%
H1 : 10% / 48% = 20.833%
H2: 35% / 48% = 72.9166%
H3: 3% / 48% = 6.25%
Total: 99.99%</p>
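<p>The renormalization can also be checked exactly with rational arithmetic (a sketch, using the percentages from this comment; the hypothesis labels are the ones above):</p>

```python
from fractions import Fraction

# Unnormalized values from the comment: 10%, 35%, 3%.
priors = {"H1": Fraction(10, 100), "H2": Fraction(35, 100), "H3": Fraction(3, 100)}
total = sum(priors.values())                      # 48/100
posterior = {h: p / total for h, p in priors.items()}

for h, p in posterior.items():
    print(h, f"{float(p):.4%}")  # H1 20.8333%, H2 72.9167%, H3 6.2500%
print(sum(posterior.values()))   # exactly 1
```

<p>Exact fractions avoid the 99.99% rounding residue: 10/48 + 35/48 + 3/48 = 1 exactly.</p>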
<p>Also, I found this easy to understand visually.
Thanks.</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/240r
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/240r2010-06-05T01:37:09.621531+00:00
<div class="md"><p>My mathematics agrees - 72.9%. Good catch!</p></div>
neq1 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w3
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23w32010-06-04T15:08:05.114290+00:00
<div class="md"><p>Perhaps a better title would be "Bayes' Theorem Illustrated (My Ways)"</p>
<p>In the first example you use colored shapes of various sizes to illustrate the ideas visually. In the second example, you use plain rectangles of approximately the same size. If I were a visual learner, I don't know if your post would help me much.</p>
<p>I think you're on the right track in example one. You might want to use shapes whose relative areas are easier to estimate. It's hard to tell whether one triangle is twice as big as another (as measured by area), but it's easier with rectangles of the same height (where you just vary the width). More importantly, I think it would help to show the math with shapes. For example, I would suggest that figure 18 show P(door 2) = the orange triangle in figure 17 divided by the orange triangle plus the blue triangle from figure 17 (but where you show the division with shapes). When I teach, I sometimes do this with Venn diagrams (showing division of chunks of circles and rectangles to illustrate conditional probability).</p></div>
gerg on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23v4
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23v42010-06-04T08:35:11.748184+00:00
<div class="md"><p>A presentation critique: psychologically, we tend to compare the relative <em>areas</em> of shapes. Your ovals in Figure 1 are scaled so that their <em>linear</em> dimensions (width, for example) are in the ratio 2:5:3; however, what we see are ovals whose areas are in ratio 4:25:9, which isn't what you're trying to convey. I think this happens for later shapes as well, although I didn't check them all.</p></div>
Oscar_Cunningham on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23v8
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23v82010-06-04T08:59:22.706623+00:00
<div class="md"><p>Really? I'd have said the exact opposite. For example in <a href="http://lesswrong.com/lw/2ax/open_thread_june_2010/23o8">this post</a>, the phrase "half the original's size" means that the linear dimensions are halved. This issue also come up in the production of bubble charts, where the size of a circle represents some value. When I look at a bubble chart it is often unclear whether the data is intended to be represented by the area or the radius.</p>
<p>It is certainly easier for me to compare linear dimensions than areas.</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23yr
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23yr2010-06-04T19:16:31.158135+00:00
<div class="md"><p>Hence the popularity of bar charts, where the area and the linear dimension are coupled. But the visual impact <em>is</em> a function of area, more than length, even if it is hard to quantify in the eye - but quantification should be done by the quantitative numbers, not by graphical estimation.</p>
<p>(How many sentences can I start with conjunctions? Let me count the ways...)</p></div>
Kevin on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23n9
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23n92010-06-03T06:38:56.096497+00:00
<div class="md"><p>On Hacker News:</p>
<p><a href="http://news.ycombinator.com/item?id=1400640" rel="nofollow">http://news.ycombinator.com/item?id=1400640</a></p></div>
ABranco on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ob
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ob2010-06-03T12:22:03.336728+00:00
<div class="md"><p>Great visualizations.</p>
<p>In fact, this (only without triangles, squares,...) is how I've been intuitively calculating Bayesian probabilities in "everyday" life problems since I was young. But you managed to make it even clearer for me. Good to see it applied to Monty Hall.</p></div>
Blueberry on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23n8
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23n82010-06-03T06:34:34.568812+00:00
<div class="md"><p>This is really brilliant. Thanks for making it all seem so easy: for some reason I never saw the connection between an update and rescaling like that, but now it seems obvious.</p>
<p>I'd like to see this kind of diagram for the Sleeping Beauty problem.</p></div>
XiXiDu on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vg
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23vg2010-06-04T09:55:03.359298+00:00
<div class="md"><p>Why would you need more than plain English to intuitively grasp Monty-Hall-type problems?</p>
<p>Take the original Monty Hall 'Dilemma'. Just imagine there are two candidates, A and B. A and B both choose the same door. After the moderator opens one door, A always stays with his first choice and B always switches to the remaining third door. Now imagine you run this experiment 999 times. What will happen? Because A always stays with his initial choice, he will win about 333 cars. But where are the remaining 666 cars? Of course B won them!</p>
<p>Or conduct the experiment with 100 doors. Now let’s say the candidate picks door 8. By rule of the game the moderator now has to open 98 of the remaining 99 doors behind which there is no car. Afterwards there is only one door left besides door 8 that the candidate has chosen. Obviously you would change your decision now! The same should be the case with only 3 doors!</p>
<p>There really is no problem here. You don’t need to simulate this. Your chance of picking the car the first time is 1/3, but your chance of choosing a door with a goat behind it, at the beginning, is 2/3. Thus on average, in 2/3 of the games you play you’ll pick a goat on the first go. That also means that in 2/3 of the games, having by definition picked a goat, you force the moderator to pick the only remaining goat, since by the rules of the game the moderator knows where the car is and is only allowed to open a door with a goat behind it. What does that mean? That on average, on the first go, you pick a goat 2/3 of the time, and hence the moderator is forced to pick the remaining goat 2/3 of the time. So 2/3 of the time there is no goat left; only the car remains behind the remaining door. Therefore 2/3 of the time the remaining door has the car.</p>
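<p>The 999-games argument above is also easy to check directly (a sketch: A always stays, B always switches):</p>

```python
import random

rng = random.Random(42)
a_wins = b_wins = 0
for _ in range(999):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)  # A and B pick the same door
    # Moderator opens a goat door that isn't the pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    switched = next(d for d in doors if d not in (pick, opened))
    a_wins += (pick == car)       # A stays with the first choice
    b_wins += (switched == car)   # B switches to the remaining door
print(a_wins, b_wins)  # roughly 333 and 666
```

<p>Exactly one of the two strategies wins each game, so the counts always sum to 999; the split hovers around 333 vs 666.</p>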
<p>I don't need fancy visuals or even formulas for this. Do you really?</p></div>
Kaj_Sotala on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23we
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23we2010-06-04T16:04:51.892469+00:00
<div class="md"><p>I can testify that this isn't anywhere near as obvious to most people than it is to you. I, for one, had to have other people explain it to me the first time I ran into the problem, and even then it took a small while.</p></div>
XiXiDu on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/243a
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/243a2010-06-05T19:00:46.344524+00:00
<div class="md"><p>I think the very problem in understanding such issues is shown in your reply. People assume too much, they read too much into things. I never said it has been obvious to me. I asked why you would need more than plain English to understand it and gave some examples on how to describe the problem in an abstract way that might be ample to grasp the problem sufficiently. If you take things more literally and don't come up with options that were never mentioned it would be much easier to understand. Like calling the police in the case of the <a href="http://en.wikipedia.org/wiki/Trolley_problem" rel="nofollow">trolley problem</a> or whatever was never intended to be a rule of a particular game.</p></div>
Kaj_Sotala on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/245q
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/245q2010-06-06T06:50:54.039915+00:00
<div class="md"><p>Well, yeah. But if I recall, I <em>did</em> have a plain English explanation of it. There was an article on Wikipedia about it, though since this was at least five years ago, the explanation wasn't as good as it is in today's article. It still did a passing job, though, which wasn't enough for me to get it very quickly.</p></div>
XiXiDu on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/2462
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24622010-06-06T09:56:37.669371+00:00
<div class="md"><p>Yesterday, when falling asleep, I remembered that I indeed used the word 'obvious' in what I wrote. Forgot about it, I wrote the plain-English explanation from above earlier in a comment to the article '<a href="http://blogs.discovermagazine.com/notrocketscience/2010/04/02/pigeons-outperform-humans-at-the-monty-hall-dilemma/" rel="nofollow">Pigeons outperform humans at the Monty Hall Dilemma</a>' and just copied it from there.</p>
<p>Anyway, I doubt it is obvious to anyone the first time. At least anyone who isn't a trained Bayesian. But for me it was enough to read some plain-English (German actually) explanations about it to come to the conclusion that the right solution is obviously right and now also intuitively so.</p>
<p>Maybe the problem is also that most people are simply skeptical about accepting a given result. That is, is it really <em>obvious</em> to me now, or have I just accepted that it is the right solution, repeated many times until it became intuitively fixed? Is 1 + 1 = 2 really obvious? The last page of Russell and Whitehead's proof that <a href="http://scienceblogs.com/goodmath/2006/06/extreme_math_1_1_2.php" rel="nofollow">1+1=2</a> can be found on page 378 of the Principia Mathematica. So is it really obvious, or have we simply all, collectively, come to accept this '<em>axiom</em>' as <em>right</em> and <em>true</em>?</p>
<p>I haven't had much time lately to get much further with my studies; I'm still struggling with basic Algebra. I have almost no formal education and am trying to educate myself now. That said, I started watching a video series lately (<a href="http://www.youtube.com/view_play_list?p=6A1FD147A45EF50D" rel="nofollow">The Most IMPORTANT Video You'll Ever See</a>) and was struck when he said that to roughly figure out the <a href="http://en.wikipedia.org/wiki/Doubling_time" rel="nofollow">doubling time</a> you simply divide 70 by the percentage growth rate. I checked for myself whether it works and later looked it up. Well, it's NOT obvious why this is the case, at least not to me. Not even now that I have read up on the strict mathematical formula. But I'm sure, as I think about it more, read more proofs and work with it, I'll come to regard it as obviously right. But will it be any more obvious than before? I will simply have collected some evidence for its truth value and its consistency. Things just start to make sense, or we think so because they work and/or are consistent.</p></div>
Blueberry on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/246w
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/246w2010-06-06T17:44:54.451504+00:00
<div class="md"><blockquote>
<p>But I'm sure, as I will think about it more, read more proofs and work with it, I'll come to regard it as obviously right. But will it be any more obvious than before?</p>
</blockquote>
<p>If you're interested, <a href="http://betterexplained.com/articles/the-rule-of-72/" rel="nofollow">here</a> is a good explanation of the derivation of the formula. I don't think it's obvious, any more than the quadratic formula is obvious: it's just one of those mathematical tricks that you learn and becomes second nature.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/246z
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/246z2010-06-06T17:53:45.865148+00:00
<div class="md"><p>I'm not sure I'm completely happy with that explanation. It uses the result that ln(1+x) is very close to x when x is small. This follows from the Taylor series expansion of ln(1+x) (edit: or simply from looking at the ratio of the two and using L'Hospital's rule), but if one hasn't had calculus, that claim is going to look like magic.</p></div>
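For anyone who, like XiXiDu, wants to see how well the rule works without calculus, a quick numerical check is easy. This is my own sketch, not from the thread: it compares the exact doubling time, ln(2)/ln(1 + r/100), against the 70/r approximation for a few growth rates.

```python
import math

def doubling_time_exact(r):
    """Exact doubling time for a growth rate of r percent per period."""
    return math.log(2) / math.log(1 + r / 100)

def doubling_time_rule70(r):
    """Rule-of-70 approximation to the doubling time."""
    return 70 / r

for r in [1, 2, 5, 7, 10]:
    print(f"r = {r}%: exact = {doubling_time_exact(r):.2f}, "
          f"70/r = {doubling_time_rule70(r):.2f}")
```

The fit is tightest for small r, exactly where ln(1+x) ≈ x holds best; at r = 7 the rule gives 10 periods against an exact value of about 10.24.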
XiXiDu on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/2463
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24632010-06-06T10:12:32.578570+00:00
<div class="md"><p>Here are more examples:</p>
<ul>
<li><a href="http://opinionator.blogs.nytimes.com/2010/02/14/the-enemy-of-my-enemy/" rel="nofollow">Why a negative times a negative should be a positive.</a></li>
<li><a href="http://www.youtube.com/watch?v=Tqpcku0hrPU" rel="nofollow">Intuition on why a^-b = 1/(a^b) (and why a^0 =1)</a></li>
</ul>
<p>Those explanations are really great. I missed such explanations in school, where I wondered WHY things behave as they do but was only shown HOW to use them to get what I wanted. But what do these explanations really <em>explain</em>? I think they merely satisfy our idea that there is <em>more to it</em> than meets the eye. We think something is missing. What such explanations really do is show us that the heuristics really work and that they are consistent on more than one level; they are <em>reasonable</em>.</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24az
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24az2010-06-07T16:03:25.119299+00:00
<div class="md"><blockquote>
<p>That said, I started to watch a video series lately [...] and was struck when he said that to roughly figure out the <a href="http://en.wikipedia.org/wiki/Doubling_time" rel="nofollow">doubling time</a> you simply divide 70 by the percentage growth rate. I went to check it myself if it works and later looked it up. Well, it's NOT obvious why this is the case, at least not for me. Not even now that I have read up on the mathematical strict formula.</p>
</blockquote>
<p>Well, it's an approximation, that's all. Pi is approximately equal to 355/113 - yeah, there's good mathematical reasons for choosing that particular fraction as an approximation, but the accuracy justifies itself. <em>[edited sentence:]</em> You only need one real <em>revelation</em> to <em>not worry about</em> how true <em>Td</em> = 70/<em>r</em> is: that the doubling time is a smooth line - there's no jaggedy peaks randomly in the middle. After that, you can just look how good the fit is and say, "yeah, that works for 0.1 < <em>r</em> < 20 for the accuracy I need".</p></div>
Martin-2 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/8gfv
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/8gfv2013-02-14T04:32:33.204207+00:00
<div class="md"><p>Although it's late, I'd like to say that XiXiDu's approach deserves more credit and I think it would have helped me back when I didn't understand this problem. Eliezer's Bayes' Theorem post cites the percentage of doctors who get the breast cancer problem right when it's presented in different but mathematically equivalent forms. The doctors (and I) had an easier time when the problem was presented with quantities (100 out of 10,000 women) than with explicit probabilities (1% of women).</p>
<p>Likewise, thinking about a large number of trials can make the notion of probability easier to visualize in the Monty Hall problem. That's because running those trials and counting your winnings <em>looks</em> like something. The percent chance of winning once does not look like anything. Introducing the competitor was also a great touch since now the cars I don't win are easy to visualize too; that smug bastard has them!</p>
<p>Or you know what? Maybe none of that visualization stuff mattered. Maybe the key sentence is "[Candidate] A always stays with his first choice". If you commit to a certain door then you might as well wear a blindfold from that point forward. Then Monty can open all 3 doors if he likes and it won't bring your chances any closer to 1/2.</p></div>
logical on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wr
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wr2010-06-04T16:43:08.089361+00:00
<div class="md"><p>Are you serious? Are you buying this? Ok - let me make this easy: There NEVER WAS a 33% chance. Ever. The 1-in-3 choice is a ruse. No matter what door you choose, Monty has at least one door with a goat behind it, and he opens it. At that point, you are presented with a 1-in-2 choice. The prior choice is completely irrelevant at this point! You have a 50% chance of being right, just as you would expect. Your first choice did absolutely nothing to influence the outcome! This argument reminds me of the time I bet $100 on black at a roulette table because it had come up red for like 20 consecutive times, and of course it came up red again and I lost my $$. A guy at the table said to me "you really think the little ball remembers what it previously did and avoids the red slots??". Don't focus on the first choice, just look at the second - there's two doors and you have to choose one (the one you already picked, or the other one). You got a 50% chance.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wt
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wt2010-06-04T16:57:19.591245+00:00
<div class="md"><p>Think about it this way. Let's say you precommit before we play Monty's game that you won't switch. Then you win 1/3rd of the time, exactly when you picked the correct door first, yes?</p>
<p>Now, suppose you precommit to switching. Under what circumstances will you win? You'll win if you didn't pick the correct door to start with. That means you have a 2/3rd chance of winning since you win whenever your first door wasn't the correct choice.</p>
<p>Your comparison to the roulette wheel doesn't work: The roulette wheel has no memory, but in this case, the car isn't reallocated between the two remaining doors, it was chosen before the process started.</p></div>
Sideways on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ww
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23ww2010-06-04T17:02:04.168328+00:00
<div class="md"><p>Your analogy doesn't hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked.</p>
<p>If you've really thought about XiXiDu's analogies and they haven't helped, here's another; this is the one that made it obvious to me.</p>
<p><a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/AWizardDidIt" rel="nofollow">Omega</a> transmutes a single grain of sand in a sandbag into a diamond, then pours the sand equally into three buckets. You choose one bucket for yourself. Omega then pours the sand from one of his two buckets into the other one, throws away the empty bucket, and offers to let you trade buckets.</p>
<p>Each bucket analogizes to a door that you may choose; the sand analogizes to <a href="http://en.wikipedia.org/wiki/Probability_mass_function" rel="nofollow">probability mass</a>. Seen this way, it's clear that what you want is to get as much sand (probability mass) as possible, and Omega's bucket has more sand in it. Monty's unopened door doesn't inherit anything tangible from the opened door, but it does inherit the opened door's probability mass.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wy
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wy2010-06-04T17:08:36.679713+00:00
<div class="md"><p>That works better for you? That's deeply surprising. Using entities like Omega and transmutation seems to make things more abstract and much harder to understand what the heck is going on. I must need to massively update my notions about what sort of descriptors can make things clear to people.</p></div>
Sideways on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x2
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x22010-06-04T17:19:15.717304+00:00
<div class="md"><p>I use entities outside human experience in thought experiments for the sake of preventing Clever Humans from trying to game the analogy with their inferences.</p>
<p>"If Monty 'replaced' a grain of sand with a diamond then the diamond might be near the top, so I choose the first bucket."</p>
<p>"Monty wants to keep the diamond for himself, so if he's offering to trade with me, he probably thinks I have it and wants to get it back."</p>
<p>It might seem paradoxical, but using 'transmute at random' instead of 'replace', or 'Omega' instead of 'Monty Hall', actually simplifies the problem for me by establishing that all relevant facts to the problem have already been included. That never seems to happen in the real world, so the world of the analogy is usefully unreal.</p></div>
Blueberry on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23zq
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23zq2010-06-04T21:14:33.880031+00:00
<div class="md"><p>I really like this technique.</p></div>
AlephNeil on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/240o
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/240o2010-06-05T01:06:38.931517+00:00
<div class="md"><p>I'm not keen on this analogy because you're comparing the effect of the new information to an agent <em>freely choosing</em> to pour sand in a particular way. A confused person won't understand why Omega couldn't decide to distribute sand some other way - e.g. equally between the two remaining buckets.</p>
<p>Anyway, I think JoshuaZ's explanation is the clearest I've ever seen.</p></div>
logical on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x0
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x02010-06-04T17:13:11.708810+00:00
<div class="md"><p>"Your analogy doesn't hold, because each spin of the roulette wheel is a separate trial, while choosing a door and then having the option to choose another are causally linked."</p>
<p>No, they are not causally linked. It does not matter what door you choose, you don't influence the outcome in any way at all. Ultimately, you have to choose between two doors. In fact, you don't "choose" a door at first at all. Because there is always at least one goat behind a door you didn't choose, you cannot influence the next action, which is for Monty to open a door with a goat. At that point it's a choice between two doors.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x1
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x12010-06-04T17:16:05.208930+00:00
<div class="md"><p>At this point you've had this explained to you multiple times. May I suggest that if you don't get it at this point, maybe be a bit of an empiricist and write a computer program to repeat the game many times and see what fraction switching wins? Or if you don't have the skill to do that (in which case learning to program should be on your list of things to learn how to do. It is very helpful and forces certain forms of careful thinking) play the game out with a friend in real life.</p></div>
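Taking up JoshuaZ's suggestion, here is a minimal Monty Hall simulation, a sketch of my own rather than anything from the thread; it plays the game many times under each precommitment and counts wins.

```python
import random

def monty_hall_trial(switch):
    """One round of the standard Monty Hall game; True if the contestant wins."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a goat door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the single remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=200_000):
    """Fraction of trials won under a fixed switch/stay precommitment."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Over many trials, `win_rate(True)` settles near 2/3 and `win_rate(False)` near 1/3, matching the precommitment argument earlier in the thread.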
mattnewport on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xg
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xg2010-06-04T18:22:13.577772+00:00
<div class="md"><blockquote>
<p>play the game out with a friend in real life.</p>
</blockquote>
<p>If logical wants to play for real money I volunteer my services.</p></div>
Sideways on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x4
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x42010-06-04T17:51:05.541178+00:00
<div class="md"><p>If--and I do mean if, I wouldn't want to spoil the empirical test--logical doesn't understand the situation well enough to predict the correct outcome, there's a good chance he won't be able to program it into a computer correctly regardless of his programming skill. He'll program the computer to perform his misinterpretation of the problem, and it will return the result he expects.</p>
<p>On the other hand, if he's right about the Monty Hall problem and he programs it correctly... it will still return the result he expects.</p></div>
khafra on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x9
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23x92010-06-04T18:08:00.940412+00:00
<div class="md"><p>He could try <a href="http://jtauber.com/blog/2007/06/19/monty_hall_python/" rel="nofollow">one</a> of <a href="http://antoniocangiano.com/2009/01/01/monte-carlo-simulation-of-the-monty-hall-problem-in-ruby-and-python/" rel="nofollow">many</a> already-written <a href="http://rosettacode.org/wiki/Monty_Hall_problem" rel="nofollow">programs</a> if he lacks the skill to write one.</p></div>
Sideways on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xl
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xl2010-06-04T18:24:24.073861+00:00
<div class="md"><p>Sure, but then the question becomes whether the other programmer got the program right...</p>
<p>My point is that if you don't understand a situation, you can't reliably write a good computer simulation of it. So if logical believes that (to use your first link) James Tauber is wrong about the Monty Hall problem, he has no reason to believe Tauber can program a good simulation of it. And even if he can read Python code, and has no problem with Tauber's implementation, logical might well conclude that there was just some glitch in the code that he didn't notice--which happens to programmers regrettably often.</p>
<p>I think implementing the game with a friend is the better option here, for ease of implementation and strength of evidence. That's all :)</p></div>
TylerK on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xh
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xh2010-06-04T18:22:16.022848+00:00
<div class="md"><p>The thing you might be overlooking is that Monty does not open a door at random, he opens a door guaranteed to contain a goat. When I first heard this problem, I didn't get it until that was explicitly pointed out to me.</p>
<p>If Monty opens a door at random (and the door could contain a car), then there is no causal link and therefore the probability would be as you describe.</p></div>
mhomyack on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wu
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wu2010-06-04T16:58:08.468391+00:00
<div class="md"><p>Fail.</p></div>
MagnetoHydroDynamics on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/8gxn
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/8gxn2013-02-15T22:23:01.802170+00:00
<div class="md"><p>I find that Monty Hall is easier to understand with N doors, N > 2.</p>
<p>N doors, one hides a car. You pick a door at random, yielding a 1/N probability of getting the car. The host now opens N-2 doors, none of them your door, all containing goats. The probability that the other remaining door has the car is now (N-1)/N.</p>
<p>Set N to 1000 and people generally agree that switching is good. Set N to 3 and they disagree.</p></div>
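The N-door version is just as easy to simulate. This is my own sketch (the helper names are mine): the host leaves exactly one door besides yours unopened, and that door must hide the car whenever your first pick missed.

```python
import random

def n_door_trial(n, switch):
    """One round of Monty Hall with n doors; host opens n-2 goat doors."""
    car = random.randrange(n)
    pick = random.randrange(n)
    if pick == car:
        # Host leaves one arbitrary goat door unopened besides the pick.
        other = random.choice([d for d in range(n) if d != pick])
    else:
        # Host must leave the car unopened.
        other = car
    return (other if switch else pick) == car

def win_rate(n, switch, trials=100_000):
    return sum(n_door_trial(n, switch) for _ in range(trials)) / trials
```

Switching wins with probability (N-1)/N, so at N = 1000 it wins about 99.9% of the time, which is why the large-N version feels obvious while N = 3 does not.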
oracleaide on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/85e8
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/85e82012-12-26T19:15:51.927367+00:00
<div class="md"><p>The challenge with Bayes' illustrations is to simultaneously show 1) relations and 2) ratios.
The suggested approach works well. I suggest combining Venn diagrams and pie charts:</p>
<p><a href="http://oracleaide.wordpress.com/2012/12/26/a-venn-pie/" rel="nofollow">http://oracleaide.wordpress.com/2012/12/26/a-venn-pie/</a></p>
<p>Happy New Year!</p></div>
Ab3 on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/5en7
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/5en72011-12-07T18:42:59.173548+00:00
<div class="md"><p>Thank you Komponisto! Apparently, my brain works similarly to yours on this matter. Here is a video by Richard Carrier explaining Bayes' theorem that I also found helpful.</p>
<p><a href="http://www.youtube.com/watch?v=HHIz-gR4xHo" rel="nofollow">http://www.youtube.com/watch?v=HHIz-gR4xHo</a></p></div>
jlborges on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/d5n8
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/d5n82016-03-07T22:02:08.506834+11:00
<div class="md"><p>This problem is not so difficult to solve if we use a binomial tree to tackle it. Not only will we come to the same (correct) mathematical answer (which is brilliantly exposed in the first post in this thread), but it is logically more palatable.</p>
<p>I will present the logically derived answer straight away and then jump into the binomial tree for the proof.</p>
<p>The probability of the situation exposed here, which for the sake of brevity I'm going to put as "contestant chooses a door, a goat is revealed behind a different door, then the contestant switches from the original choice to win the car", is the same as the probability of being wrong in a scenario where the contestant only needs to choose one door and that's it. This is 66%. The probability of the path the contestant needs to go through to win the car IF he/she always switches is exactly the probability of being wrong in the first place.</p>
<p>Why?</p>
<p>There are two world-states right at the beginning of the situation: being right, which means having chosen the door with the car behind it, with a probability of 33%; and being wrong, which means having chosen a door with a goat behind it, with a probability of 66%. Being wrong is the only world-state that puts the contestant on the right path IF and only IF he/she then switches doors. Doing so guarantees (at 100%) that the contestant chooses the door with the car behind it. That's why it's so counter-intuitive: being wrong in the first place increases (doubles!) the probability of winning the car, given the path the host is offering. Therefore, since the path the contestant needs to take (being wrong) in order to switch (switching to the only door left - there is no probability here, as there is only one choice left) has a probability of 66%, the answer to this problem is 66%!</p>
<p>The binomial tree will illustrate this better:</p>
<p><img src="https://patanium.files.wordpress.com/2016/03/capture-1.jpg" alt="" title="" /></p>
<p>World-state #1 will never, ever give the contestant a car if he/she switches from the first choice. The fundamental thing here, and where it's so easy to get confused, is that the problem is essentially defining paths, not states.</p>
<p>Binomial trees are excellent tools for real-life options such as this example, though they are mostly used to price options and other financial instruments with some optionality embedded in them. The forking paths can get really complex. In this case it worked very well because there are only two cycles, and the second cycle of the second world-state has only one option for winning the car. Many cycles, and probabilities within the cycles and world-states, will compound the complexity of these structures. Regardless, I find them very powerful tools for visually representing these kinds of problems. The key here is that this is a branching-type probabilistic problem, and the world-states are mutually exclusive, each with its own probability distribution - something that the classic probabilistic analysis fails to represent and alert the analyst to, as it's so very easy to see the problem as one world, one situation. It is not.</p></div>
Elund on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/bhfo
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/bhfo2014-10-21T07:05:05.451796+00:00
<div class="md"><p>Thanks for posting this. Your explanations are fascinating and helpful. That said, I do have one quibble. I was misled by the Two Aces problem because I didn't know that the two unknown cards (2C and 2D) were precluded from also being aces or the ace of spades. It might be better to edit the post to make that clear.</p></div>
xamdam on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24cj
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24cj2010-06-07T19:54:22.125795+00:00
<div class="md"><p>While on topic, GREAT demo of conditional probability.</p>
<p><a href="http://www.cut-the-knot.org/Curriculum/Probability/ConditionalProbability.shtml" rel="nofollow">http://www.cut-the-knot.org/Curriculum/Probability/ConditionalProbability.shtml</a></p></div>
logical on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wv
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23wv2010-06-04T16:58:25.597067+00:00
<div class="md"><p>Are you serious? Are you buying this? Ok - let me make this easy: There NEVER WAS a 33% chance. Ever. The 1-in-3 choice is a ruse. No matter what door you choose, Monty has at least one door with a goat behind it, and he opens it. At that point, you are presented with a 1-in-2 choice. The prior choice is completely irrelevant at this point! You have a 50% chance of being right, just as you would expect. Your first choice did absolutely nothing to influence the outcome! This argument reminds me of the time I bet $100 on black at a roulette table because it had come up red for like 20 consecutive times, and of course it came up red again and I lost my $$. A guy at the table said to me "you really think the little ball remembers what it previously did and avoids the red slots??". Don't focus on the first choice, just look at the second - there's two doors and you have to choose one (the one you already picked, or the other one). You got a 50% chance.
(by the way - sorry if I posted this twice?? Or in the wrong place?)</p></div>
pjeby on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xi
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/23xi2010-06-04T18:23:25.617033+00:00
<div class="md"><blockquote>
<p>You got a 50% chance.</p>
</blockquote>
<p>No, you don't. Switching gives you the right door 2 out of 3 times. Long before reading this article, I was convinced by a program somebody wrote that actually simulates it by counting up how many times you would win or lose in that situation... and it comes out that you win by switching, 2 out of 3 times.</p>
<p>So, the interesting question at that point is, <em>why</em> does it work 2 out of 3 times?</p>
<p>And so now, you have an opportunity to learn <em>another</em> reason why your intuition about probabilities is wrong. It's not just the lack of "memory" that makes probabilities weird. ;-)</p></div>
binLager on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/d51u
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/d51u2016-02-28T16:37:22.001708+00:00
<div class="md"><p>My God. I never bothered reading your explanation because you spent so much time whining about how everyone else had failed you. I suggest you just enroll in university and learn the math like the rest of us.</p></div>
TaeKahn on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24g6
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24g62010-06-08T14:05:47.989055+00:00
<div class="md"><p>I have a small issue with the way you presented the Monty Hall problem. In my opinion, the setup could be a little clearer.
The Bayesian model you presented holds true iff you make an assumption about the door you picked: either goat (better) or car (less wrong). If you pick a door at random with no presuppositions (I believe this is the state most people are in), then you have no basis to decide whether to switch, and have a truly 50% chance either way. If instead you introduce the assumption of a goat, then when the host opens the other goat door, you know you had a 2/3 chance of picking a goat. With both goats known or presumed, the last door must be the car, with an error rate of 1/3.</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24gm
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24gm2010-06-08T15:23:00.875975+00:00
<div class="md"><p>As far as I can see, one-third of the time the first door you picked had the car. What happens afterward cannot change that one-third. The only way it could change your one-third <em>credence</em> is if <em>sometimes</em> Monty Hall did one thing and sometimes another, depending on whether you picked the car.</p></div>
TaeKahn on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24h6
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24h62010-06-08T17:39:55.166750+00:00
<div class="md"><p>While the overall probabilities for the game will never change, the contestant's perception of the current state of the game will affect their win rate. To elaborate on what I was saying, imagine the following internal monologue of a contestant:</p>
<p>I’ve eliminated one goat. Two doors left. One is a goat, one is a car. No way to tell which is which, so I’ll just randomly pick one.</p>
<p>IMO, this is probably what most contestants believe when faced with the final choice.
Obviously, there is a way to have a greater success rate, but this person is evaluating in a vacuum. If contestants were aware of the actual probabilities involved, I think we would see fewer “agonizing” moments as the contestants decide whether to switch. By randomly picking door A or B, irrespective of the entire game, you've lost your marginal advantage and lowered your win rate.
That being said, if they still “randomly” pick switch every time, their win rate will be the expected, actual probability.</p>
<p>Edit:
The same behaviour can be seen in Deal or No Deal. If for some insane reason they go all the way to the final two cases, the correct choice is to switch.
I don't know exactly how many cases you have to choose from, but the odds are greatly against you having picked the 1k case. If the case is still on the board, the chick is holding it. Yet people make the choice to switch based entirely on the fact that there are two cases, one with 1k and the other with 0.01. They think they have a 50/50 shot, so they make their odds essentially 50/50 by randomly choosing. In other words, they might as well have flipped a coin to decide between the two cases.</p></div>
Jack on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24lh
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24lh2010-06-09T15:06:14.705072+00:00
<div class="md"><p>Actually, I just realized... there is no reason to swap on Deal or No Deal. The reason you swap in Monty Hall is that Monty knows which doors have the goats, and there is no chance he will open a door to reveal a car. But in Deal or No Deal the cases that get opened are chosen by the contestant with no knowledge of what is inside them. It's as if the contestant got to pick which of the two remaining doors to open instead of Monty: there is a 1/3 chance the contestant would open the door with the car, leaving her with only goats to choose from. The fact that the contestant got lucky and didn't open the door with the car wouldn't tell her anything about which of the two remaining doors the car is really behind.</p>
<p>ETA: Basically Deal or No Deal is just a really boring game.</p></div>
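Jack's point can be checked directly: if the revealed door is chosen at random rather than by a knowing host, the rounds where the car survives the reveal split evenly between the two remaining doors. A sketch of my own illustrating this variant:

```python
import random

def random_reveal_trial(switch):
    """A Monty-Hall-like round where the revealed door is chosen at random.

    Returns 'win', 'lose', or 'void' (the random reveal exposed the car,
    as happens when a Deal or No Deal contestant opens the top case)."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    opened = random.choice([d for d in doors if d != pick])
    if opened == car:
        return "void"
    final = next(d for d in doors if d != pick and d != opened) if switch else pick
    return "win" if final == car else "lose"

def conditional_win_rate(switch, trials=200_000):
    """Win rate conditional on the random reveal having shown a goat."""
    results = [random_reveal_trial(switch) for _ in range(trials)]
    valid = [r for r in results if r != "void"]
    return sum(r == "win" for r in valid) / len(valid)
```

Conditional on the random reveal showing a goat, both switching and staying win about half the time; the 2/3 advantage exists only when the host knowingly avoids the car.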
thomblake on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24lm
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24lm2010-06-09T15:26:29.753337+00:00
<div class="md"><blockquote>
<p>Basically Deal or No Deal is just a really boring game.</p>
</blockquote>
<p>Well, it's exciting for those who like high-stakes randomness. And there are expected utility considerations at every opportunity for a deal (I don't remember if there's a consistent best choice based on the typical deal).</p></div>
Jack on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ln
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ln2010-06-09T15:29:15.610420+00:00
<div class="md"><blockquote>
<p>And there are expected utility considerations at every opportunity for a deal (I don't remember if there's a consistent best choice based on the typical deal).</p>
</blockquote>
<p>I was talking about this in <a href="http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24hx">my other comment</a>.</p></div>
gwern on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24my
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24my2010-06-09T17:44:52.402361+00:00
<div class="md"><p>Maybe it could be interesting if you treat it as a psychology game - trying to predict, based on the person's appearance, body language, and statements, whether they will conform to expected-utility or not?</p></div>
Jack on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24hx
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24hx2010-06-08T20:00:57.716016+00:00
<div class="md"><blockquote>
<p>If for some insane reason, they go all the way to the final two cases</p>
</blockquote>
<p>The way the deals work going down to the final two cases can end up the best strategy. Basically they weight the deals to encourage higher ratings. As long as there is a big money case in play they won't offer the contestant the full average of the cases- presumably viewers like to watch people play for the big money so the show wants these contestants to keep going. If all the big money cases get used up the banker immediately offers the contestant way more than they are worth to get them off the stage and make way for a new contestant.</p>
<p>(This was my conclusion after watching until I thought I had figured it out. I could be reading this into a more complex or more random pattern.)</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24hb
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24hb2010-06-08T18:04:13.664416+00:00
<div class="md"><p>Actually, people are much more likely not to switch if they think the two doors are equally likely. I've just finished reading Jason Rosenhouse's excellent book on the Monty Hall problem, and there's a strong tendency for people not to switch. Apparently people are worried that they'll feel bad if they switch and that action causes them to lose. (The book does a very good job discussing both the problem itself and psych studies of how people react to it and its variants. I strongly recommend it.)</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24h8
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24h82010-06-08T17:43:51.999695+00:00
<div class="md"><p>On the actual show, sometimes Monty Hall did one thing and sometimes another, so far as I am told. We're not talking about actual behavior of contestants in an actual contest, we're talking about optimal behavior of contestants in a fictional contest.</p>
<p>Edit: I'm sorry, I really don't know what you're arguing. Am I making sense?</p></div>
TaeKahn on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ho
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ho2010-06-08T19:21:55.712297+00:00
<div class="md"><p>You are making perfect sense; it’s me that is not. I had thought to clarify the issue for people that might still not “get it” after reading the article. Instead, I’ve only muddied the waters.</p></div>
thomblake on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24he
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24he2010-06-08T18:21:41.899193+00:00
<div class="md"><p>Well, the Monty Hall problem as stated never occurred on Let's Make a Deal. He's even on record, in response to this problem, saying he won't let a contestant switch doors after picking.</p></div>
Douglas_Knight on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24jd
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24jd2010-06-09T03:53:54.814463+00:00
<div class="md"><p>Robin's description is <a href="https://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-doors-puzzle-debate-and-answer.html?pagewanted=all" rel="nofollow">correct</a>. I'm not sure what you're saying.</p>
<p><strong>ETA</strong>: this thread has gotten ridiculous. I'm deleting the rest of my comments on it. The best source for info on Monty Hall is <a href="https://www.youtube.com/watch?v=WKR6dNDvHYQ" rel="nofollow">youtube</a>. He does <em>everything</em>. One thing that makes it rather different is that it is usually not clear how many good and bad prizes there are.</p></div>
Jack on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24kc
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24kc2010-06-09T12:09:57.701405+00:00
<div class="md"><p>I'm really shocked by the reactions of the mathematicians. I remember solving that problem in like the third week of my Intro to Computer Science Class. And before that I had heard of it and thought through why it was worth switching. I didn't realize it caused so much confusion as recently as 20 years ago.</p></div>
JoshuaZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24kj
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24kj2010-06-09T13:39:49.163748+00:00
<div class="md"><p>The problem causes a lot of confusion. There are studies which show that this is in fact cross-cultural. It seems to deeply conflict with a lot of heuristics humans use for working out probability. See Donald Granberg, "Cross-Cultural Comparison of Responses to the Monty Hall Dilemma" Social Behavior and Personality, (1999), 27:4 p 431-448. There are other relevant references in Jason Rosenhouse's book "The Monty Hall Problem." The problem clashes with many common heuristics. It isn't that surprising that some mathematicians have had trouble with it. (Although I do think it is surprising that some of the mathematicians who have had trouble were people like Erdos who was unambiguously first-class)</p></div>
arundelo on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24l8
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24l82010-06-09T14:36:02.771274+00:00
<div class="md"><blockquote>
<p>Erdos</p>
</blockquote>
<p>Wow! I looked this up and it turns out it's described in a book I read a long time ago, <a href="http://www.amazon.com/dp/0786863625/" rel="nofollow"><em>The Man Who Loved Only Numbers</em></a> (do a "Search Inside This Book" for "Monty Hall"). <strong>Edit:</strong> In this book, the phrase "Book proof" refers to a maximally elegant proof, seen as being in "God's Book of Proofs".</p>
<p>I encountered the problem for the first time in a collection of vos Savant's Parade pieces. It was unintuitive, of course, but most striking for me was the utter unconvincibility of some of the people who wrote to her.</p></div>
thomblake on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24la
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24la2010-06-09T14:41:09.989989+00:00
<div class="md"><blockquote>
<p>the utter unconvincibility</p>
</blockquote>
<p>Yes, my fallback if my intuition on a probability problem seems to fail me is always to code a quick simulation -- so far, it's always taken only about a minute to code and run. That anyone bothered to write her a letter, even way back in the 70's, is mind-boggling.</p></div>
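The quick sanity-check simulation thomblake describes really can fit in a minute. Here's one possible version (the function name is illustrative); it exploits the fact that, with an informed host, switching wins exactly when the initial pick was wrong.

```python
import random

def monty_switch_wins(n=100_000):
    """Fraction of games won by a switching player, with an informed host."""
    wins = 0
    for _ in range(n):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host always opens a goat door, so switching wins
        # if and only if the initial pick was not the car.
        wins += (pick != car)
    return wins / n
```

The result hovers around 2/3, settling the argument in a way that apparently no amount of letters to Parade could.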
AlephNeil on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24lc
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24lc2010-06-09T14:50:45.045009+00:00
<div class="md"><p>Yeah it's remarkable isn't it?</p>
<p>I suppose the thing about the Monty-Hall problem which makes it 'difficult' is that there is another agent with more information than you, who gives you a systematically 'biased' account of their information. (There's an element of 'deceitfulness' in other words.)</p>
<p>An analogy: Suppose you had a coin which you knew was either 2/3 biased towards heads or 2/3 biased towards tails, and the bias is actually towards heads. Say there have been 100 coin tosses, and you don't know any of the outcomes but someone else ("Monty") knows them all. Then they can feed you 'biased information' by choosing a sample of the coin tosses in which most outcomes were tails. The analogous confusion would be to ignore this possibility and assume that Monty is 'honestly' telling you everything he knows.</p></div>
MartinB on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24rz
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24rz2010-06-10T01:34:13.194513+00:00
<div class="md"><p>Expert confidence. I read vos Savant's book with all the letters she got, and I like how the problem really serves as a test of the mental clarity and politeness of everyone involved.</p>
<p>Does anyone know of other problems that get similarly violent reactions from experts?</p></div>
thomblake on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ko
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ko2010-06-09T13:43:43.357701+00:00
<div class="md"><p>From <a href="http://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow">Wikipedia</a>:</p>
<blockquote>
<p>Monty Hall did open a wrong door to build excitement, but offered a known lesser prize—such as $100 cash—rather than a choice to switch doors. As Monty Hall wrote to Selvin:
And if you ever get on my show, the rules hold fast for you—no trading boxes after the selection. (Hall 1975)</p>
</blockquote>
<p>The citation is from a letter from Monty himself, available online <a href="http://www.letsmakeadeal.com/problem.htm" rel="nofollow">here</a>.</p>
<p>I'm not sure how the article you linked to is relevant. It does describe an instance of Monty Hall actually performing the experiment, but it was in his home, not on the show.</p></div>
Douglas_Knight on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ll
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ll2010-06-09T15:25:21.547903+00:00
<div class="md"><blockquote>
<p>Was Mr. Hall cheating? Not according to the rules of the show, because he did have the option of not offering the switch, and he usually did not offer it.</p>
</blockquote>
<p>exactly as Robin said.</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24m2
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24m22010-06-09T16:16:59.493228+00:00
<div class="md"><p>thomblake's remark was relevant too, though - from what I said, you might imagine that Monty Hall let people switch on the show. All the clarifications are relevant and good.</p></div>
RobinZ on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24k9
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24k92010-06-09T11:49:20.642857+00:00
<div class="md"><p>Aaargh! And I had upvoted that, believing a random Internet comment over a reliable offline source! That's a little embarrassing.</p>
<p>The article is awesome, by the way. Thanks!</p></div>
thomblake on Bayes' Theorem Illustrated (My Way)
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24kp
http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24kp2010-06-09T13:44:18.286965+00:00
<div class="md"><p>See response <a href="http://lesswrong.com/lw/2b0/bayes_theorem_illustrated_my_way/24ko">here</a></p></div>