# Drawing Two Aces

Suppose I have a deck of four cards: The ace of spades, the ace of hearts, and two others (say, 2C and 2D).

You draw two cards at random.

*Scenario 1:* I ask you "Do you have the ace of spades?" You say "Yes." Then the probability that you are holding both aces is 1/3: There are three equiprobable arrangements of cards you could be holding that contain AS, and one of these is AS+AH.

*Scenario 2:* I ask you "Do you have an ace?" You respond "Yes." The probability you hold both aces is 1/5: There are five arrangements of cards you could be holding (all except 2C+2D) and only one of those arrangements is AS+AH.

Now suppose I ask you "Do you have an ace?"

You say "Yes."

I then say to you: "Choose one of the aces you're holding at random (so if you have only one, pick that one). Is it the ace of spades?"

You reply "Yes."

What is the probability that you hold two aces?

*Argument 1:* I now know that you are holding at least one ace and that one of the aces you hold is the ace of spades, which is just the same state of knowledge that I obtained in Scenario 1. Therefore the answer must be 1/3.

*Argument 2:* In Scenario 2, I know that I can *hypothetically* ask you to choose an ace you hold, and you must *hypothetically* answer that you chose either the ace of spades or the ace of hearts. My posterior probability that you hold two aces should be the same either way. The expectation of my future probability must equal my current probability: If I expect to change my mind later, I should just give in and change my mind now. Therefore the answer must be 1/5.

Naturally I know which argument is correct. Do you?

## Comments (84)

Argument 2 is correct. When the person replies "Yes," after choosing randomly, you learn not only that he has the ace of spades, but also that on one trial, he selected it after choosing randomly from his aces. This makes the combinations "AS, 2C" and "AS, 2D" more probable than "AS, AH", since the first two combinations give a 100% chance of a positive response, while the third gives only a 50% chance of a positive response. So each of the first two combinations is twice as likely as the third, so the probability of the third combination, namely two aces, goes to 1/5.
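This weighting can be checked with a short enumeration. The following is a Python sketch (not from the original thread), assuming cards are drawn uniformly at random:

```python
from fractions import Fraction
from itertools import combinations

cards = ["AS", "AH", "2C", "2D"]
posterior = {}  # hand -> unnormalized weight of answering "yes" to both questions

for hand in combinations(cards, 2):
    aces = [c for c in hand if c.startswith("A")]
    if not aces:
        continue  # would answer "no" to "Do you have an ace?"
    # Probability that an ace chosen at random from this hand is the ace of spades.
    p_yes = Fraction(aces.count("AS"), len(aces))
    if p_yes:
        posterior[hand] = p_yes

total = sum(posterior.values())
print(posterior[("AS", "AH")] / total)  # -> 1/5
```

The "AS, 2C" and "AS, 2D" hands each get weight 1 while "AS, AH" gets weight 1/2, exactly as the comment argues.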

I reached the same solution.

This looks like a riff on the Monty Hall problem, whose solution also hinges on the fact that the host opens a door randomly or non-randomly depending on your initial choice.

This is exactly what I thought when I read the problem.

Argument 2.

The second question of the last scenario is exactly equivalent to "show me the ace you've chosen", which gives zero new information about whether you have two aces or not.

This is the clearest answer I've seen.

But Argument 1 has an intuition pump based on equivalent states of knowledge, too, and it's wrong! ;)

This equivalence works but does not generalize well, as it would fail on a variant with three aces and two 2s.

Answering without looking at other comments, will check those after:

The assumption in the first argument is wrong: you in fact have *different* information than the info you have in Scenario 1. In Scenario 1, the information you have is "I answer yes when asked if I have the ace of spades." In this situation you have the information that I have an ace, and that when I pick one of my aces at random to reveal to you, I reveal the ace of spades.

The relevant likelihoods are *different*:

P("I have at least one ace, and if I choose one at random (or just the one, if I have only one) to reveal to you, I reveal the ace of spades" | I have both aces) = 1/2

P("You asked me if I have the ace of spades? Well, yes, I do" | I have both aces) = 1

(modulo standard disclaimers on assigning P = 1 to any state of affairs, of course)

Argument 2 is correct. What changes is the probability that I hold just an ace of spades or just an ace of hearts.

Yup.

In other words, the remaining possibilities are the same in both scenarios (AS/AH, AS/2C, AS/2D), but the probabilities attached to these possibilities are different. In scenario 2 the observation that one of the cards is the ace of spades does more than rule out possibilities: it *also* shifts the probability mass from the AS/AH possibility toward the AS/2C and AS/2D possibilities, in such a way that the probability attached to the AS/AH possibility goes 'back' to what it was when there were five possibilities.

It shows that drawing little boxes corresponding to each possibility and carefully crossing out those that were contradicted by the evidence is only a poor approximation of Bayes' theorem.

Well, drawing boxes and crossing out would work here if you explicitly have boxes for "how does the card holder answer the questions" - or, in this case, "how does the card holder answer 'if you have aces, pick one at random; is it the ace of spades?'"

It just takes a bit more detail.

I made a diagram already. Note that we were using 'prefer' to refer to the ace you pick when you pick between the two if you have both.

Ditto. Psy-Kosh has correctly observed that argument 2 is correct, and has correctly pointed out the flaw in argument 1.

Here's code to compute the probability empirically (I got an answer of 0.2051384 for 10000 draws. It's written in C# but can be readily converted to other functional languages such as Haskell).

Notice that isPairOfAces is not a function of isRandomAceASpade. In other words, the suit of the random ace doesn't affect the probability of there being a pair of aces. Computer programs don't suffer Information Bias. OTOH, I do, so let me know if I screwed up... ;)

For those interested in the complete working code:
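The original C# listing did not survive in this copy; the following is a Python sketch of the same empirical check (the function name `trial` and the seed are my own choices, not the original author's):

```python
# Monte Carlo estimate of P(two aces | "yes" to both questions).
import random

def trial(rng):
    hand = rng.sample(["AS", "AH", "2C", "2D"], 2)
    aces = [c for c in hand if c.startswith("A")]
    if not aces:
        return None  # answered "no" to "Do you have an ace?"
    if rng.choice(aces) != "AS":
        return None  # the randomly chosen ace wasn't the ace of spades
    return len(aces) == 2

rng = random.Random(0)  # seeded for reproducibility
results = [r for r in (trial(rng) for _ in range(100_000)) if r is not None]
print(sum(results) / len(results))  # close to 0.2, i.e. 1/5
```

Trials where either question would be answered "no" are discarded rather than counted, which is what makes this a conditional probability estimate.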

You have three sets of 100 cards - either all aces, no aces, or exactly one ace; all three are equally likely. You lay the cards out in front of you.

I either (a) ask you whether you've got an ace, and you say yes, or (b) turn over one card chosen at random, and find that it's an ace.

In both cases I now know that you've got at least one ace, but the posterior probability that you have all aces is 1/2 in case "a", and (I think) 100/101 in case "b".

The question I think is interesting is: When you throw away probability information and keep only possibility information, like when you go from "a random ace was a spade" in Scenario 2 to "there was an ace which was a spade" in Scenario 1, when does that cause bias? How do you have to think about the information you have left? When can you just condition on your information being true, and when do you have to think about the probability that you would have been left with this information and not some other information?

Argument 2 isn't expressed clearly enough for me to understand it; it seems to be missing some steps. But the answer is 1/5.

Argument 2. Bayes' Law doesn't lie.

I'm wondering about your reasons for posting a straightforward probability question (as a top-level post rather than an Open Thread comment, no less). Are you trying to take a reading of how competent the average LW contributor currently is on trivial questions? Are you setting up a real-world problem analogous to this, where most people get the wrong answer? Or is it something else entirely?

If the problem is too easy, consider the meta-problem: what makes argument 1 seductive, and how can we teach ourselves to easily see through such arguments in the future?

(In this case it was easy to see the flaw in argument 1 because argument 2 was laid out right beside it. What if all we had was argument 1?)

I think perhaps our intuitive understanding of "state of knowledge" is wrong, and we need to fix it, but I'm not sure how.

In this particular case, all we need to do is encode our "state of knowledge" formally into the relevant probabilities, mooting all appeals to intuition.

However, this is a "toy problem"; in real-world situations I expect that it will not be practical to enumerate all possible outcomes.

I am helping a colleague of mine investigate application of Bayesian inference methods to the question of software testing, and we're seeing much the same difficulty: on an extremely simplified problem we can draw definite conclusions, but we don't yet know how to extend those conclusions to situations the industry would consider relevant.

I occasionally get requests for more homework problems. You're also correct that I was curious about the average skill level on LW.

In retrospect, though, what I should have done was start a Bayesian Fun Problems Thread. Will try to remember to do that next time I have a puzzle.

The probability is 1/5 (as independently calculated by me). I've no idea if argument 2 is correct, because I don't understand it. My reasoning:

There are 6 combinations of 2 cards: AsAh, As2c, As2d, Ah2c, Ah2d, 2c2d.

Of these, only with the first 3 (AsAh, As2c, As2d) could I have answered yes to both questions (assuming I'm not lying, which is outside the context).

But if I have AsAh, only 1/2 the time would I have answered yes to the second question. So AsAh needs 1/2 the weight of the other 2 possibilities.

So the probability is (1/2)/(2+1/2) = 1/5.

I'm sorry to confess that I fell for the equivalent of Argument 1 in a similar puzzle in the past.

In terms of the current puzzle, I realized my mistake only when I realized that my state of knowledge *isn't* just that you're holding the Ace of Spades. I also know that you *told* me that you're holding the Ace of Spades, instead of some other card that you might have told me about. This knowledge introduces a new conditional probability into the Bayesian computation, which then yields the correct answer.

Argument 2, I figure. Finding out whether a randomly chosen ace is the AS or not tells you nothing about p(2A), since the chance of choosing AS is 50-50 both for 2A and across all 4 of the !2A cases.

That's assuming a genuine random choice. If there's a bias that causes the choice to become non-random once hearing the AS question, that might mess things up.

Argument 2 is correct. There are lots of ways to show this, since it has a numeric conclusion, and there are lots of correct ways to arrive at that number.

Where precisely does Argument 1 fail?

Argument 1 says "which is just the same state of knowledge..." but it is flat-out lying. One direction is fine: if you answer "yes, yes" to Scenario 3, then you would answer "yes" in Scenario 1. But if you would answer "yes" to Scenario 1, then you would not answer "yes, yes" to Scenario 3. This is not the same state of knowledge so we cannot conclude the answer is the same.

(Or different, for that matter, just from knowing the knowledge is different - maybe the knowledge differences are only in irrelevant details by some happy coincidence. Like the differences between Scenario 3 and Scenario 2.)

I have to agree that both arguments seem to be lacking, and the correct method for finding the answer is one of (enumerate the possibilities | do some math).

ETA: I should stop using regex-style disjunctions given the syntax for "given" used extensively here for probability.

I think Scenario 2 is wrong.

If you know that one card is an ace, the probability that the second card is also an ace is 1/3, because two non-aces and one ace remain.

The tricky thing is that it's completely irrelevant whether the ace is spades or hearts. Remember, the question is whether you hold both aces! Just distinguish between aces and non-aces.

It helps to enumerate the possible worlds:

In the beginning you can have:

(1) AS, AH

(2) AS, 2C

(3) AS, 2D

(4) AH, 2C

(5) AH, 2D

(6) 2C, 2D

After answering "Yes" to "Do you have an ace," the possible worlds are

(1) AS, AH

(2) AS, 2C

(3) AS, 2D

(4) AH, 2C

(5) AH, 2D

That is, in world (6) you would not answer "Yes," so it is eliminated.

After picking one of your aces randomly, the possible worlds are:

(1a) AS, AH -> AS in one possible sub-world

(1b) AS, AH -> AH in another possible sub-world

(2) AS, 2C -> AS

(3) AS, 2D -> AS

(4) AH, 2C -> AH

(5) AH, 2D -> AH

You're counting (1a) and (1b) as 1 each, then dividing by the six worlds to get 1/3, but the trick is that those two worlds are not as likely as the others: (*edited for clarity*) half the time world (1) evolves into (1a), and half the time it evolves into (1b).

So if you count (1a) and (1b) as 0.5 each and the rest as 1 each, then the probability of having both aces is (0.5 + 0.5) / (0.5 + 0.5 + 1 + 1 + 1 + 1) = 1/5.
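This weighted-worlds bookkeeping can be written out directly; a Python sketch (the world labels follow the enumeration above, the weights are my encoding of it):

```python
from fractions import Fraction

# Each world is (hand, shown ace); weight = P(showing that ace | hand).
worlds = {
    ("AS AH", "AS"): Fraction(1, 2),  # world (1a)
    ("AS AH", "AH"): Fraction(1, 2),  # world (1b)
    ("AS 2C", "AS"): Fraction(1),     # world (2)
    ("AS 2D", "AS"): Fraction(1),     # world (3)
    ("AH 2C", "AH"): Fraction(1),     # world (4)
    ("AH 2D", "AH"): Fraction(1),     # world (5)
}

# Condition on the shown ace being the ace of spades.
shown_spade = {w: p for w, p in worlds.items() if w[1] == "AS"}
p_both = shown_spade[("AS AH", "AS")] / sum(shown_spade.values())
print(p_both)  # -> 1/5
```

Worlds (1a) and (1b) carry weight 0.5 each because they share parent world (1), which is the point of the parent comment.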

I don't get what you're saying... see my probability tree diagram and tell me why you are weighting them differently.

I don't understand what your tree is trying to show.

However, I think my explanation above shows why worlds (1a) and (1b) are weighted less than (2) through (5): they both come from parent world (1). If you gave (1a) and (1b) weight=1, you would be giving parent world (1) weight=2.

Argument 2 is correct. Showing an ace provides no relevant information. If you want to do the math, I like Blueberry's solution. Or: if you have 2 aces, then your chances of saying "yes, it's AS" are 50-50 since you were equally likely to have chosen either ace (you picked one at random), and if you have 1 ace, then your chances of saying "yes, it's AS" are 50-50 since you are equally likely to have either ace, so your "yes" answer does not provide any information about how many aces you have.

Argument 1 is wrong because in Scenario 1 if you have the AS you are guaranteed to say "yes," but in this case if you have AS you still might say "no," so the information provided is not the same.

Before drawing the cards, I decided randomly whether to "prefer" hearts or spades, so that if I held both aces I would tell you about the preferred one. That gives me twelve scenarios, of which five result in the answers that I gave you, and in only one of those do I hold both aces. Therefore 1/5.

If your second question were instead "Are you holding the Ace of Spades?" as in Scenario 1, then I'm twice as likely to answer "Yes" in the instance that I really have both aces as I was before - i.e. there are now six scenarios allowed by my answers and I hold both aces in two of them, so the probability becomes 1/3 as stated.

This is too easy - where's the catch? Is a more counterintuitive version of this on the way?

Argument 2 is correct, but I'm having considerable difficulty putting the reason into succinct terms in a way that'd feel satisfying for me, even after reading all the comments here.

However, here's the thoroughly worked out answer for anyone who wants it:

We start with no questions asked. All of the six possible sets are equiprobable.

Next you ask the first question, "do you have an ace?" I respond "yes". This eliminates the (2C 2D) possibility. This leaves the following sets: (AS AH), (AS 2C), (AS 2D), (AH 2C), (AH 2D).

Now the probability of me holding both aces is 1/5; the probability that I have the Ace of Spades is 3/5 (only the Ace of Spades, 2/5), and likewise for the Ace of Hearts.

Suppose that after this, you're about to ask me "is a randomly picked ace the ace of spades?" The prior probability that I'll answer "yes" is, for the different scenarios: (AS AH) 1/5 × 1/2 = 1/10, (AS 2C) 1/5 × 1 = 2/10, (AS 2D) 1/5 × 1 = 2/10, (AH 2C) 0, (AH 2D) 0.

Totaling 5/10, or 1/2.

Assuming that I have answered "yes", that leaves three possible sets: (AS AH), (AS 2C), (AS 2D).

However, they are not equiprobable, as there was previously a random element involved. Now that we know I have the Ace of Spades, let's take another look at the probabilities for "is a randomly picked ace the ace of spades?", treating the three sets as if equiprobable: (AS AH) 1/3 × 1/2 = 1/6, (AS 2C) 1/3 × 1 = 1/3, (AS 2D) 1/3 × 1 = 1/3.

Totaling 5/6. So now, suppose you only know that I have an ace. What is the probability that I have both aces, given that a randomly chosen ace from my hand produced the Ace of Spades?

By Bayes' rule, P(A|B) = P(B|A)P(A) / P(B). In this case, A = I have both aces, B = A randomly chosen ace from my hand produces the Ace of Spades. Naturally, that makes B|A = A randomly chosen ace from my hand produces the Ace of Spades, given that I have both aces.

P(A) = P(I have both aces) = 1/5

P(B) = P(A randomly chosen ace from my hand produces the Ace of Spades) = 1/2

P(B|A) = P(A randomly chosen ace from my hand produces the Ace of Spades, given that I have both aces) = 1/2

Thus P(A|B) = P(B|A)P(A) / P(B) = (1/2)(1/5) / (1/2) = (1/10) / (1/2) = 1/5.
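The same arithmetic can be double-checked with exact fractions; a sketch (not part of the original comment):

```python
from fractions import Fraction

p_A = Fraction(1, 5)          # P(both aces | at least one ace)
p_B_given_A = Fraction(1, 2)  # P(random ace is spades | both aces)
# P(B): average over the five equiprobable ace-containing hands,
# whose "yes" likelihoods are 1/2, 1, 1, 0, 0 respectively.
p_B = Fraction(1, 5) * (Fraction(1, 2) + 1 + 1 + 0 + 0)
print(p_B)                      # -> 1/2
print(p_B_given_A * p_A / p_B)  # -> 1/5
```

Using `fractions.Fraction` keeps the computation exact, so the 1/5 here is the answer itself rather than a floating-point approximation.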

I disagree with this step (my rot13d explanation hasn't garnered much attention).

I don't think the sets are equiprobable. Consider the following tree

The first question represents asking what the first ace drawn was, the second question what the other card was. The first question is 50:50 either way, and for the second question each card has an equal probability. However, as AHAS comes up twice on the tree, it has twice the weighting, and 1/3 probability from the start.

Or to think of it another way: you know they have one ace; what are the options for the other card? They are equally probable: 2C, 2D, and the other Ace. So I say the chance of getting two aces is 1/3, once you know they have one ace.

I believe AdeleneDawner is right: yes, there are three options each in the last branch, but they aren't all equally likely. Though I'm uncertain how to show it off the top of my head: give me a while.

I'm almost done putting a diagram together, if you want it.

Please do show it.

The colors of the squares in the grids show how you'd answer the question 'Is your preferred ace the ace of spades?' and whether you have 1 or 2 aces. The 'P=' notation in the corner of each grid shows what you're preferring; in the first case you always prefer the first ace drawn; the latter two are meant to be read together and assume that you're picking which ace you prefer ahead of time with a coin toss. The red and green squares to the side show how many of each response you could see in each case.

Thanks, that cleared it up for me. I've been trying to analyse where I went wrong: I reformulated the question in a way that, I didn't notice, lost information.

Also, simply asking "Do you have the ace of spades?' returns a chart that looks like the P=AS one; red (and peach, if 'Do you have an ace?' isn't asked first) squares are instances where you answer 'No', and the remaining 4 light green and 2 dark green squares show the 1 in 3 chance that you have 2 aces given that you answered 'Yes'.

What about the instances where you get 2C or 2D first, and then one of the aces?

My first question is phrased as "first ace drawn", the second question is "other card drawn". This card could have been drawn before or after, it doesn't matter which (unless it is the other ace in which case it couldn't have been drawn before).

Picking the first ace is really just a way to fix what one definite unknown ace is in some way, so you can ask what the other cards are.

Missed that on my first read-through, but it still kind of points in the direction of the problem with your chart. Assume that the first ace is AS. There are two instances where the other card could be 2C (AS then 2C, or 2C then AS), two instances where it could be 2D (AS then 2D, or 2D then AS), and one instance where it could be AH (AS then AH, but not AH then AS). The three branches for 'other card' are not equally likely.

Okay, I'll dispense with draw order entirely. Imagine that instead of asking them if they have an ace, you ask them if they have an ace and to mentally select one of their aces to be the primary ace at random.

They don't tell you what it is or give any other information. So the first question on my tree is what is their primary ace, and the second question is what is their other card.

Their primary ace still has a 50:50 chance of being either (if they only have one ace, it could have been either drawn from the deck, and if they have two then it is selected randomly by the person with the cards). If you guess that their primary ace is one of the aces then the other cards are drawn from a pool of three possibilities.

Does this clear up what I am getting at for you?

I see what you're doing, but I still think you're making a mistake: Just because there are three possibilities, doesn't mean that those possibilities are equally likely. It's similar to flipping a fair coin twice; you could get two heads, two tails, or one of each. There are three possible outcomes, but the 'one of each' option is twice as likely as either of the other two.

That's how I did it too; thanks for saving me the typing.

Heh! After the first 4 answers, there is an even split!

I'm not so great at this kind of thing but a quick simulation script says: Argument 2 is correct (or at least the answer is 1/5).

I recently ran a quick simulation to estimate the answer to 7 x 5. In case anyone is wondering: it's 35.000.

I think the point here is “Why simulate when you can get an exact answer?” In which case, the consideration is whether it is easier to ‘see’ that the simulation program is correct or that the reasoning for the exact answer is correct.

A similar situation that comes to mind is “exact” symbolic integration vs. “approximate” numerical integration; symbolic integration is not always possible (in terms of “simple” operations) whereas numeric integration is straightforward to perform, no matter how complex the original formula, but inexact.

∫(0 to 7) 5 dx ≈ 35.000

Yes - while reasoning through the problem might give you a deeper understanding, if you just want to know the answer it can sometimes be easier to be sure that your program is correct than that your mathematical reasoning is correct.

I'm curious: how do you *estimate* the product of seven and five?

I ran a bunch of trials where I randomly chose floating point values A and B from the interval [1, 1000]. Then I either added A to itself B times or added B to itself A times. Then I took an average of all the sums, weighting each by the "relevance factor" (5/A)(7/B).

I know this was trying to be funny, but that algorithm didn't really use simulation to estimate 7 x 5. It just calculates 7 x 5 a bunch of times and takes the average, with the added step of multiplying and dividing by AB.

But then, I'm maybe not creative enough to come up with an algorithm that would actually output an approximation of 7 x 5 using some probabilistic method that doesn't include calculating 7 x 5.

Throw darts at a unit square, take the fraction that hit a point (x < .7, y < .5) and multiply by 100. (Also works to calculate pi.)
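A sketch of that dart-throwing estimate (the seed and trial count are my own choices; with a million darts the estimate lands close to 35):

```python
# Uniform points in the unit square; count those with x < 0.7 and y < 0.5.
import random

rng = random.Random(42)  # seeded so the run is reproducible
n = 1_000_000
hits = 0
for _ in range(n):
    x, y = rng.random(), rng.random()
    if x < 0.7 and y < 0.5:
        hits += 1

print(100 * hits / n)  # close to 35
```

The hit probability is 0.7 × 0.5 = 0.35, so scaling the hit fraction by 100 estimates 7 × 5, with Monte Carlo error on the order of 0.05 at this sample size.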

I get 36.0.

If I knew how to take fractions, I would have just done 7/(1/5).

Yeah, I guess I should have made the effort to understand the principles of the subject I was reading about rather than do a random trivial programming exercise with no general applicability whose dominance by simple mathematics I could have predicted a priori.

Follow-up puzzle: what if the second question had, instead of asking you to pick an ace at random, asked "is at least one of the cards you're holding the Ace of Spades?" After establishing that you have at least one ace, that is. One could make the same two arguments as for the original question.

Maybe I misread but this looks just like scenario 1 to me.

The first reply eliminates the no-ace case from six equally likely cases, leaving two aces as one of five equally likely cases. So the probability is one fifth. (The second question is irrelevant, by symmetry.)

If we have an ace in the hand, 2 of those "equally likely cases" are no longer possible. (2 of those cases involve the other ace and a non-ace card.)

I didn't read either argument before cranking up Bayes' Theorem. After the first question, the five remaining possibilities are equiprobable, so the posterior probabilities are proportional to the likelihoods of each possibility. The two non-pairs containing the ace of spades have likelihood 1 and the pair of aces has likelihood 0.5. Normalizing the likelihoods gives 1/5 chances for the pair of aces.

Argument 2 is right. Here is how I think about it. If I am holding one ace the probability it is a spade is 1/2 (there are only two aces). If I am holding two aces the probability that one selected randomly is the spade is also 1/2. The cases aren't distinguished by the expected response to the question "Do you hold the ace of spades?" so that information cannot possibly be used to update the prior answer of 1/5.

There's an easy way to figure out the probability: say that the person holding the cards flips a coin. If he has two Aces, when asked to pick one, heads means he picks the Ace of Spades, and tails means he picks the Ace of Hearts.

There are twelve possible outcomes: 6 possible two-card hands times 2 possibilities for the coin flip. The person's responses have ruled out all but five: 1) heads, AS, AH; 2) heads, AS, 2C; 3) tails, AS, 2C; 4) heads, AS, 2D; 5) tails, AS, 2D. Each is equally likely and he has two Aces in 1), so the probability must be 1/5.
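The twelve-outcome bookkeeping can be enumerated mechanically; a Python sketch using the coin-flip convention described above:

```python
from itertools import combinations, product

cards = ["AS", "AH", "2C", "2D"]
consistent = []  # (hand) entries, one per (hand, flip) outcome the answers allow
for hand, flip in product(combinations(cards, 2), ["heads", "tails"]):
    aces = [c for c in hand if c.startswith("A")]
    if not aces:
        continue  # would answer "no" to "Do you have an ace?"
    if len(aces) == 2:
        shown = "AS" if flip == "heads" else "AH"  # the coin decides
    else:
        shown = aces[0]  # only one ace to show; the flip is irrelevant
    if shown == "AS":
        consistent.append(hand)

print(len(consistent))  # -> 5
print(sum(1 for h in consistent if set(h) == {"AS", "AH"}) / len(consistent))  # -> 0.2
```

Exactly five of the twelve (hand, flip) outcomes survive the two "yes" answers, and only one of them is the two-ace hand, reproducing the 1/5.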

(We don't have the same information as in Scenario 1 because the coin flip made it less likely that he has two Aces, as Unknowns explained.)

The conclusion of the second is correct, but to arrive at that conclusion I had to write out all the possibilities and observe how the sequence of answers pruned them. I only understood the second argument when I realised that the symmetry of spades and hearts is what makes it work.

By that symmetry, the posterior probability of having two aces after answering yes to the final question about the ace of spades -- P(2A|AS=yes) for short -- must equal P(2A|AH=yes). But P(2A|AH=yes) = P(2A|AS=no). So the posterior after asking about the ace of spades is independent of the answer, therefore etc.

If we add the ace of wands to the pack, I make the probability of having two aces in each scenario to be S1: 1/2, S2: 2/9, S3: 1/3. At the end of S3, there are the same four possibilities remaining as in S1, but they are not equiprobable: AS+AH and AS+AW have had half their probability mass pruned away by the random selection of an ace.

Even further: the probability that I can guess your hand is 1/5. However, the probability that you have 2 aces is 1/3. No?

Further: write out your 12 possible trial pulls. Raw odds of ace-ace = 2/12. Once an ace is pulled, do two things: cross out the two null-null pulls. The odds of ace-ace appear to be 2/10, but... we didn't draw out Schrodinger's ace; it is either AH or AS. Pick one (it doesn't matter which; they are symmetrical in distribution) and cross out the combinations that do not have this ace. As the waveform collapses, the true odds of ace-ace appear: 2/6. Do you see that? We didn't draw an equally hearty or spadey ace; it had to be one or the other, which made combinations without it no longer a factor in our investigation.

Yes you can initially reduce the twelve trials to the six unique combinations, but ONLY because each of the six appears the same number of times as each of the other six (they are equally probable). In a set which favors some combinations, reducing the probability set to the possibility set will lose any meaningfulness.

Sorry to zombie this thread, but I could use some help.

Hmm... I'm going with 1/3, for 2 reasons. I might be wrong, but maybe I can explain how I am wrong well enough for someone to help me see it, because I'm stumped here.

Reason one, from the bottom end: when the answers are yes-yes, there are only 3 possible types of hands: ace-ace, and (2) ace-null, each as likely. Weighting probabilities based on how one may answer "yes-no" and still have ace-ace seems erroneous when the answers are yes-yes.

Reason two, from the top end: it seems that a false set is being used in the explanations favoring argument 2. Here's how I think this occurs: the set of possibilities does NOT equal the set(s) of probabilities. Random pairings yield six possibilities. Having an ace eliminates one of them, leaving ace-ace and (4) ace-null pairings. This trimmed possibility set is being inherited to form a probability tree, erroneously in my opinion. The way I see it, once an ace is detected, the POSSIBILITIES equal 6-1=5, but the PROBABILITIES are within one of two distinct, exclusive sets, each of ace-ace and (2) ace-nulls. The possible combinations of pairings do not represent the scope of the probability. In other words one may have [(ah-as or ah-2c or ah-2d) OR (as-ah or as-2c or as-2d)]. So if one knows that the hand contains at least 1 ace, the likelihood of having the other ace is 1 out of 3. Notice that ace-ace appears in each set. To call the probability 1/5 (by propagating the possibility set as a probability tree) is to mesh two exclusive sets and throw out one of the (2) ace-ace pairings before doing the math. Detecting an ace is the key; knowing which type shouldn't change anything.

I think maybe the Bayesian concept could have been demonstrated by asking how many yes-no's may have ace-ace, or something else.

Combination vs. permutation, right? I don't care which ace is where; I just care if, among the three cards that the other card COULD be, that card is the one ace left.

But how about this: I deal each of us two cards out of a deck of AS, AH, 2C, and 2D. I ask you if you have an ace and you say yes. Surely the odds of me having both 2C and 2D are the same as above.

I figured this before reading your arguments:

If you have two aces, the probability that the one you pick is the ace of spades is 0.5. If you have only one ace, the probability that it's the ace of spades is also 0.5 (and if so the probability that you pick it is 1).

The conditional probabilities are 50/50 in both cases. Therefore the evidence has no influence on the prior probability. The probability of two aces remains 1/5.

As for the arguments:

I think argument 1 is oversimplifying. We'd have the same information as in scenario 1 if we'd just ask "is one of them the ace of spades?", without asking them to pick one. But asking them to pick one introduces a more complex conditional probability of an ace of spades, meaning the posterior probability can't be the same.

I'm not sure I understand argument 2. I can interpret it to be the same reasoning I employed above (in which case it is of course a brilliant piece of pure genius :P), or I can interpret it as the nonsensical thought that evidence you can imagine should have the same weight as evidence you actually find.

The key to this is to understand that in cases where two aces were drawn, the second random step will pick the heart half the time.

Thus - the first step selects 5 out of 6 possible outcomes. In the second step, 4 of the 5 cases have fixed outcomes; the other (Aa) has two possible outcomes.

Thus there are six possible final states:

1. a D, showing a (1/5)
2. a d, showing a (1/5)
3. A D, showing A (1/5)
4. A d, showing A (1/5)
5. A a, showing a (1/10)
6. A a, showing A (1/10)

The answer to the second question is yes in cases 3, 4, and 6. The collective probability of these cases is 1/5 + 1/5 + 1/10 = 1/2.

1/10 over 1/2 is 1/5.

I didn't quite understand the verbal arguments well enough to confidently evaluate them directly (though argument 2 seems to hit the correct applause buttons), but I diagrammed out the options on a spreadsheet and got the same answer as argument 2.

Essentially the reason argument 1 is wrong is because in enumerating the different possible outcomes compatible with the response given, the hand with 2 aces should receive half the weight of the other hands containing the ace of spades, since it will only yield a "yes" half the time, and therefore has a 50% chance of being excluded from scenario 2.

So instead of 3 * 1 possibilities (of which one has a 2-ace hand), there are 2 * 1 possibilities with one ace plus 1 * 1/2 possibilities with 2 aces.

In some detail:

edit: I haven't looked at the post in detail, but I think Morendil did basically the same thing I do here.

Let S count the number of aces of spades you draw, and H the number of aces of hearts. Select one ace randomly from your hand, with equal probability of selecting each of the aces you hold. Let R = 1 if the ace of spades is selected, and 0 otherwise. By Bayes' rule we have:

P(S+H=2 | S+H >= 1 and R=1) = P(S+H >= 1 and R=1 | S+H=2) P(S+H=2) / P(S+H >= 1 and R=1)

P(S+H=2) = 1/6 since there are 4 choose 2 = 6 possible hands you can have, and only one has both aces in it.

P(S+H >= 1 and R=1) = P(S+H >= 1)*P(R=1|S+H >= 1) = (5/6)*(1/2) = 5/12, since of the five hands containing an ace, two always select the spade, two never do, and one does so half the time, and:

P(S+H >= 1 and R=1 | S+H=2) = P(R=1 | S+H = 2) = 1/2 by the definition of R.

Putting it all together we have:

P(S+H=2 | S+H >= 1 and R=1) = P(S+H >= 1 and R=1 | S+H=2)*P(S+H=2) / P(S+H >= 1 and R=1) = (1/2)*(1/6) / (5/12) = 1/5.

Which is to say, I'll go with argument 2.

Alternatively, argument 1 is wrong because the state of knowledge is not the same as in scenario 1.
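The calculation can be spelled out with exact fractions, following the S, H, R notation above (a sketch; the helper name is mine):

```python
from fractions import Fraction
from itertools import combinations

deck = ["AS", "AH", "2C", "2D"]
hands = list(combinations(deck, 2))  # the 6 equiprobable hands

def p_R1(hand):
    """P(R=1 | this hand): the randomly selected ace is the ace of spades."""
    aces = [c for c in hand if c.startswith("A")]
    if not aces:
        return Fraction(0)
    return Fraction("AS" in aces, len(aces))

prior = Fraction(1, len(hands))
# P(S+H >= 1 and R=1): sum over all hands (aceless hands contribute 0).
p_evidence = sum(prior * p_R1(h) for h in hands)
# P(S+H=2 and R=1): only the hand with both aces.
p_both = sum(prior * p_R1(h) for h in hands if set(h) >= {"AS", "AH"})

print(p_both / p_evidence)  # 1/5
```

The evidence term comes out to 5/12, matching the derivation above, and the posterior is exactly 1/5.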

There's something interesting about the answer "I wrote a script to figure it out". Does that amount to giving a frequentist answer to a Bayesian question, or am I all wet?

If the latter, what does your example teach about frequentist vs Bayesian reasoning, Eliezer?

*1 point [-]Running a script is just coming up with a model of the problem where our uncertainties about the problem are isomorphic, or at least approximately isomorphic, to our uncertainties about the model. In this case, our uncertainties about the model are our uncertainties about the underlying algorithms in the script.

It's just like dropping a flat disc on a Plinko board and saying "hey, at each junction the disc could go either way with roughly equal probability, so this is like a coin flip, so let's simulate 1000 games of Plinko with coin flips and see what happens."

I don't see it teaching anything about the difference, but if it does I'd be glad to hear it. I think cousin_it is right: this problem, like the Monty Hall problem, hinges on the difference between choosing something and choosing something randomly. Frequentists are well aware of the Monty Hall problem - it was one of my assigned problems last semester in my stat theory course, straight out of the text (Statistical Inference, 2nd edition, by Casella and Berger).

*0 points [-](EDITED) Computing this the long way, by straightforward application of the product rule, yields probability 1/5.

There is a shortcut corresponding to argument 2. Call H the hypothesis that you have two aces, and E the evidence constituted by my confirming the ace I pick at random is spades. It turns out that P(E|H)=P(E|!H), so the evidence can't change my probability assignment. P(E|H) = P(spades | 2 aces) = 1/2. P(E|!H) = P (spades | 1 ace) = 1/2.

Now do it the long way: I start from P(2A,SR|A), that is, the probability that I hold two aces AND that I chose one at random, getting the ace of spades, GIVEN that I hold at least one ace.

By the product rule P(2A,SR|A)=P(2A|SR,A)P(SR|A)=P(SR|2A,A)P(2A|A) and therefore the answer we seek is P(SR|2A,A)P(2A|A)/P(SR|A).

The numerator is easy. P(SR|2A,A)=P(SR|2A)=1/2 and P(2A|A) is as in scenario 2.

The denominator is trickier, requiring one more application of the product rule to get P(SR|A)=P(SR,A)/P(A) and factoring A into the six mutually exclusive possibilities to get P(SR,A). Of these the three which matter are (AS,2C),(AS,2D),(AS,AH) - for these P(SR)=1 for the first two and 1/2 for the last.

Numerically, P(SR|A) comes out to 1/2, which cancels out with the other 1/2, leaving the answer of 1/5. The reason they cancel out is precisely the "shortcut" above: P(SR|A)=P(SR|2A).
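The denominator can be verified by summing over the six hands exactly as described (a sketch; the table of per-hand probabilities is my own encoding of the three cases that matter):

```python
from fractions import Fraction

# P(SR | hand) for each of the six equiprobable hands: the chance that
# a randomly chosen ace from that hand is the ace of spades.
p_SR = {
    ("AS", "AH"): Fraction(1, 2),
    ("AS", "2C"): Fraction(1),
    ("AS", "2D"): Fraction(1),
    ("AH", "2C"): Fraction(0),
    ("AH", "2D"): Fraction(0),
    ("2C", "2D"): Fraction(0),
}

# P(SR, A): only hands with at least one ace contribute.
p_SR_and_A = sum(p_SR[h] for h in p_SR if h != ("2C", "2D")) * Fraction(1, 6)
p_A = Fraction(5, 6)

print(p_SR_and_A / p_A)  # P(SR|A) = 1/2, cancelling against P(SR|2A) = 1/2
```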

*0 points [-]If the player has the SPADE, his other card is equally likely to be 2C, 2D, or the HEART. In the first two cases he is certain to choose the SPADE; in the third he chooses it half the time.

So 5/6 of the time he chooses the SPADE, but only 1/6 of the time does he choose the SPADE while having the HEART.

Thus, the chance of him having the HEART when he has chosen the SPADE is 1/5.

*My posterior probability that you hold two aces should be the same either way*

Yes, but the posterior probability is 1/3, not 1/5. p(two aces|AH) = 1/3 (as the possible options are AH+AS, AH+2D, AH+2C) and p(two aces|AS) = 1/3 (AS+AH, AS+2D, AS+2C).

However, if you had interpreted argument 2 as asking p(two aces|ace of hearts OR ace of spades) you would end up with 1/5, which is the same result as the prior p(two aces|have an ace). I think the fallacious reasoning here is that conditioning on the disjunction of having either ace, p(both aces|AH OR AS) = 1/5, does not provide you with any new information, as it is the same query as before. But actually selecting one ace and conditioning on that information gives you the correct result, namely that p(both aces|AH) = p(both aces|AS) = 1/3. So argument 1 is correct.

*1 point [-]When you say that the posterior probability is 1/3, this depends on the three combinations being equally likely, but as I said in my other comment, they are not equally likely, given your way of obtaining the information.

I see, your solution seems correct now in retrospect. I mistook scenario 2 for being exactly the same as scenario 1, but the two situations where you are not holding the other ace are indeed twice as likely as having both aces (due to selecting the ace at random), so the answer should be 1/5. Looks like I should brush up on my basic probability...

It seems that either argument 1 is correct, or scenario 1 is not valid?

*3 points [-]Note that in scenario 1 there is no random choosing.

In the main question, there's the possibility of holding AS - but randomly not choosing it. That never happens in scenario 1. So: it seems a bit different...

Ah, I see where I went wrong now. Thanks!

*0 points [-]V guvax gur reebe vf urer.

Engure gur cebonovyvgl lbh unir gur n fcrpvsvp npr vf 1/2 naq gur cbffvovyvgl gerr sbe rnpu npr frcnengryl tvirf n 1/3 punapr bs univat obgu nprf. Tvivat n 2/6 be 1/3 punapr bs univat obgu nprf va gbgny.

Va fubeg, vg vf abg rdhvyvxryl gb unir rnpu bs gur 5 pbzovangvbaf.

Edit: This is wrong, feel free to ignore.