
    Yesterday I spoke of the Mind Projection Fallacy, giving the example of the alien monster who carries off a girl in a torn dress for intended ravishing—a mistake which I imputed to the artist's tendency to think that a woman's sexiness is a property of the woman herself, woman.sexiness, rather than something that exists in the mind of an observer, and probably wouldn't exist in an alien mind.

    The term "Mind Projection Fallacy" was coined by the late great Bayesian Master, E. T. Jaynes, as part of his long and hard-fought battle against the accursèd frequentists.  Jaynes was of the opinion that probabilities were in the mind, not in the environment—that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.

    I cannot do justice to this ancient war in a few words—but the classic example of the argument runs thus:

    You have a coin.
    The coin is biased.
    You don't know which way it's biased or how much it's biased.  Someone just told you, "The coin is biased" and that's all they said.
    This is all the information you have, and the only information you have.

    You draw the coin forth, flip it, and slap it down.

    Now—before you remove your hand and look at the result—are you willing to say that you assign a 0.5 probability to the coin having come up heads?

    The frequentist says, "No.  Saying 'probability 0.5' means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1.  But we know that the coin is biased, so it can have any probability of coming up heads except 0.5."

    The Bayesian says, "Uncertainty exists in the map, not in the territory.  In the real world, the coin has either come up heads, or come up tails.  Any talk of 'probability' must refer to the information that I have about the coin—my state of partial ignorance and partial knowledge—not just the coin itself.  Furthermore, I have all sorts of theorems showing that if I don't treat my partial knowledge a certain way, I'll make stupid bets.  If I've got to plan, I'll plan for a 50/50 state of uncertainty, where I don't weigh outcomes conditional on heads any more heavily in my mind than outcomes conditional on tails.  You can call that number whatever you like, but it has to obey the probability laws on pain of stupidity.  So I don't have the slightest hesitation about calling my outcome-weighting a probability."

    I side with the Bayesians.  You may have noticed that about me.

    Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong.  Maybe you're holding the coin in such a way that it's just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you.  But, if you don't know which way the coin is biased on this one occasion, so what?

    I believe there was a lawsuit where someone alleged that the draft lottery was unfair, because the slips with names on them were not being mixed thoroughly enough; and the judge replied, "To whom is it unfair?"

    To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails.  But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin's fall in advance—not with certainty, but with 90% accuracy.  Then what would the real probability be?

    There is no "real probability".  The robot has one state of partial information.  You have a different state of partial information.  The coin itself has no mind, and doesn't assign a probability to anything; it just flips into the air, rotates a few times, bounces off some air molecules, and lands either heads or tails.

    So that is the Bayesian view of things, and I would now like to point out a couple of classic brainteasers that derive their brain-teasing ability from the tendency to think of probabilities as inherent properties of objects.

    Let's take the old classic:  You meet a mathematician on the street, and she happens to mention that she has given birth to two children on two separate occasions.  You ask:  "Is at least one of your children a boy?"  The mathematician says, "Yes, he is."

    What is the probability that she has two boys?  If you assume that the prior probability of a child being a boy is 1/2, then the probability that she has two boys, on the information given, is 1/3.  The prior probabilities were:  1/4 two boys, 1/2 one boy one girl, 1/4 two girls.  The mathematician's "Yes" response has probability ~1 in the first two cases, and probability ~0 in the third.  Renormalizing leaves us with a 1/3 probability of two boys, and a 2/3 probability of one boy one girl.
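    If you distrust the renormalization, you can check it by brute enumeration; a minimal Python sketch:

```python
from fractions import Fraction
from itertools import product

# All equally likely (eldest, youngest) combinations.
families = list(product("BG", repeat=2))

# Condition on the answer "Yes" to "Is at least one of your children a boy?"
at_least_one_boy = [f for f in families if "B" in f]

p_two_boys = Fraction(sum(f == ("B", "B") for f in at_least_one_boy),
                      len(at_least_one_boy))
print(p_two_boys)  # 1/3
```

    Three of the four prior possibilities survive the "Yes", and only one of them is two boys.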

    But suppose that instead you had asked, "Is your eldest child a boy?" and the mathematician had answered "Yes."  Then the probability of the mathematician having two boys would be 1/2, since the eldest child is a boy and the younger child can be anything it pleases.

    Likewise if you'd asked "Is your youngest child a boy?"  The probability of both children being boys would, again, be 1/2.

    Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy.  So how can the answer in the first case be different from the answer in the latter two?

    Or here's a very similar problem:  Let's say I have four cards, the ace of hearts, the ace of spades, the two of hearts, and the two of spades.  I draw two cards at random.  You ask me, "Are you holding at least one ace?" and I reply "Yes."  What is the probability that I am holding a pair of aces?  It is 1/5.  There are six possible combinations of two cards, with equal prior probability, and you have just eliminated the possibility that I am holding a pair of twos.  Of the five remaining combinations, only one combination is a pair of aces.  So 1/5.

    Now suppose that instead you asked me, "Are you holding the ace of spades?"  If I reply "Yes", the probability that the other card is the ace of hearts is 1/3.  (You know I'm holding the ace of spades, and there are three possibilities for the other card, only one of which is the ace of hearts.)  Likewise, if you ask me "Are you holding the ace of hearts?" and I reply "Yes", the probability I'm holding a pair of aces is 1/3.
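    All three conditional answers can be checked by enumerating the six hands; a minimal Python sketch:

```python
from fractions import Fraction
from itertools import combinations

cards = ["AS", "AH", "2S", "2H"]          # aces and twos of spades/hearts
hands = list(combinations(cards, 2))      # six equally likely two-card hands

def p_pair_of_aces_given(answered_yes):
    """Renormalize over the hands consistent with my 'Yes' answer."""
    live = [h for h in hands if answered_yes(h)]
    return Fraction(sum(set(h) == {"AS", "AH"} for h in live), len(live))

print(p_pair_of_aces_given(lambda h: "AS" in h or "AH" in h))  # 1/5
print(p_pair_of_aces_given(lambda h: "AS" in h))               # 1/3
print(p_pair_of_aces_given(lambda h: "AH" in h))               # 1/3
```

    Five hands survive "at least one ace", but only three survive "the ace of spades"; the answers differ because different possibilities get eliminated.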

    But then how can it be that if you ask me, "Are you holding at least one ace?" and I say "Yes", the probability I have a pair is 1/5?  Either I must be holding the ace of spades or the ace of hearts, as you know; and either way, the probability that I'm holding a pair of aces is 1/3.

    How can this be?  Have I miscalculated one or more of these probabilities?

    If you want to figure it out for yourself, do so now, because I'm about to reveal...

    That all stated calculations are correct.

    As for the paradox, there isn't one.  The appearance of paradox comes from thinking that the probabilities must be properties of the cards themselves.  The ace I'm holding has to be either hearts or spades; but that doesn't mean that your knowledge about my cards must be the same as if you knew I was holding hearts, or knew I was holding spades.

    It may help to think of Bayes's Theorem:

    P(H|E) = P(E|H)P(H) / P(E)

    That last term, where you divide by P(E), is the part where you throw out all the possibilities that have been eliminated, and renormalize your probabilities over what remains.
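    In code, the whole update is one multiplication and one division per hypothesis; a minimal sketch using the numbers from the aces example above:

```python
from fractions import Fraction

def posterior(prior, likelihood):
    """P(H|E) = P(E|H)P(H) / P(E); dividing by P(E) renormalizes."""
    p_e = sum(prior[h] * likelihood[h] for h in prior)  # total probability of E
    return {h: prior[h] * likelihood[h] / p_e for h in prior}

# Hypotheses about the two-card hand from the aces example.
prior = {"pair of aces": Fraction(1, 6),
         "one ace":      Fraction(4, 6),
         "pair of twos": Fraction(1, 6)}
# Likelihood of hearing "Yes" to "Are you holding at least one ace?"
yes = {"pair of aces": 1, "one ace": 1, "pair of twos": 0}

print(posterior(prior, yes)["pair of aces"])  # 1/5
```

    The normalizer P(E) here is 5/6, which is exactly why the surviving 1/6 becomes 1/5.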

    Now let's say that you ask me, "Are you holding at least one ace?"  Before I answer, your probability that I say "Yes" should be 5/6.

    But if you ask me "Are you holding the ace of spades?", your prior probability that I say "Yes" is just 1/2.

    So right away you can see that you're learning something very different in the two cases.  You're going to be eliminating some different possibilities, and renormalizing using a different P(E).  If you learn two different items of evidence, you shouldn't be surprised at ending up in two different states of partial information.

    Similarly, if I ask the mathematician, "Is at least one of your two children a boy?" I expect to hear "Yes" with probability 3/4, but if I ask "Is your eldest child a boy?" I expect to hear "Yes" with probability 1/2.  So it shouldn't be surprising that I end up in a different state of partial knowledge, depending on which of the two questions I ask.

    The only reason for seeing a "paradox" is thinking as though the probability of holding a pair of aces is a property of cards that have at least one ace, or a property of cards that happen to contain the ace of spades.  In which case, it would be paradoxical for card-sets containing at least one ace to have an inherent pair-probability of 1/5, while card-sets containing the ace of spades had an inherent pair-probability of 1/3, and card-sets containing the ace of hearts had an inherent pair-probability of 1/3.

    Similarly, if you think a 1/3 probability of being both boys is an inherent property of child-sets that include at least one boy, then that is not consistent with child-sets of which the eldest is male having an inherent probability of 1/2 of being both boys, and child-sets of which the youngest is male having an inherent 1/2 probability of being both boys.  It would be like saying, "All green apples weigh a pound, and all red apples weigh a pound, and all apples that are green or red weigh half a pound."

    That's what happens when you start thinking as if probabilities are in things, rather than probabilities being states of partial information about things.

    Probabilities express uncertainty, and it is only agents who can be uncertain.  A blank map does not correspond to a blank territory.  Ignorance is in the mind.

    195 comments

    It seems to me you're using "perceived probability" and "probability" interchangeably. That is, you're "defining" probability as the probability that an observer assigns based on certain pieces of information. Is it not true that when one rolls a fair 1d6, there is an actual 1/6 probability of getting any one specific value? Or using your biased coin example: our information may tell us to assume a 50/50 chance, but the man may be correct in saying that the coin has a bias--that is, the coin may really come up heads 80% of the... (read more)

    "Is it not true that when one rolls a fair 1d6, there is an actual 1/6 probability of getting any one specific value?"

    No. The unpredictability of a die roll or coin flip is not due to any inherent physical property of the objects; it is simply due to lack of information. Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.

    Let's look at the simpler case of the coin flip. As Jaynes explains it, consider the phase space for the coin's motion at the moment it leaves your fingers. Some points in that phase space will result in the coin landing heads up; color these points black. Other points in the phase space will result in the coin landing tails up; color these points white. If you examined the phase space under a microscope (metaphorically speaking) you would see an intricate pattern of black and white, with even a small movement in the phase space crossing many boundaries between a black region and a white region.

    If you knew the initial conditions precisely enough, you would know whether the coin was in a white or black region of phase space, and you... (read more)

    Case in point:

    There are dice designed with very sharp corners in order to improve their randomness.

    If randomness were an inherent property of dice, simply refining the shape shouldn't change the randomness; they are still plain, balanced dice, after all.

    But when you think of a "random" throw of the dice as a combination of the position of the dice in the hand, the angle of the throw, the speed and angle of the dice as they hit the table, the relative friction between the dice and the table, and the sharpness of the corners as they tumble to a stop, you realize that if you have all the relevant information you can predict the roll of the dice with high certainty.

    It's only because we don't have the relevant information that we say the probabilities are 1/6.

    1Juno_Watt11y
    Not necessarily, because of quantum uncertainty and indeterminism -- and yes, they can affect macroscopic systems. The deeper point is, whilst there is a subjective ignorance-based kind of probability, that does not by itself mean there is not an objective, in-the-territory kind of 0<p<1 probability. The latter would be down to how the universe works, and you can't tell how the universe works by making conceptual, philosophical-style arguments. So the kind of probability that is in the mind is in the mind, and the other kind is a separate issue. (Of course, the existence of objective probability doesn't follow from the existence of subjective probability any more than its non existence does).
    3BeanSprugget3y
    I'm curious about how quantum uncertainty works exactly. You can make a prediction with models and measurements, but when you observe the final result, only one thing happens. Then, even if an agent is cut off from information (i.e. observation is physically impossible), it's still a matter of predicting/mapping out reality. I don't know much about the specifics of quantum uncertainty, though.

    GBM:

    Q: What is the probability for a pseudo-random number generator to generate a specific number as its next output?

    A: 1 or 0 because you can actually calculate the next number if you have the available information.

    Q: What probability do you assign to a specific number being its next output if you don't have the information to calculate it?

    Replace pseudo-random number generator with dice and repeat.
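    Concretely, with Python's own generator (a minimal sketch; `getstate`/`setstate` expose exactly the "available information" that turns the probability into a certainty):

```python
import random

rng = random.Random()
state = rng.getstate()   # the "available information": the generator's state

predicted = rng.random() # with the state in hand, the next output is fixed...
rng.setstate(state)
assert rng.random() == predicted  # ...so its "probability" is 1, not uniform

# Without the state, the best you can do is spread probability evenly over
# the possible outputs -- exactly the situation with the dice.
```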

    Even more important, I think, is the realization that, to decide how much you're willing to bet on a specific outcome, all of the following are essentially the same:

    • you do have the information to calculate it but haven't calculated it yet
    • you don't have the information to calculate it but know how to obtain such information.
    • you don't have the information to calculate it

    The bottom line is that you don't know what the next value will be, and that's the only thing that matters.

    So therefore a person with perfect knowledge would not need probability. Is this another interpretation of "God does not play dice?" :-)

    8dlthomas12y
    I think this is the only interpretation of "God does not play dice."
    4Nornagest12y
    At least in its famous context, I always interpreted that quote as a metaphorical statement of aesthetic preference for a deterministic over a stochastic world, rather than an actual statement about the behavior of a hypothetical omniscient being. A lot of bullshit's been spilled on Einstein's religious preferences, but whatever the truth I'd be very surprised if he conditioned his response to a scientific question on something that speculative.
    9dlthomas12y
    This is more or less what I was saying, but left (perhaps too) much of it implicit. If there were an entity with perfect knowledge of the present ("God"), they would have perfect knowledge of the future, and thus "not need probability", iff the universe is deterministic. (If there is an entity with perfect knowledge of the future of a nondeterministic reality, we have described our "reality" too narrowly - include that entity and it is necessarily deterministic or the perfect knowledge isn't).
    The Bayesian says, "Uncertainty exists in the map, not in the territory. In the real world, the coin has either come up heads, or come up tails."

    Alas, the coin was part of an erroneous stamping, and is blank on both sides.

    Here is another example that my dad, my brother, and I came up with when we were discussing probability.

    Suppose there are 4 cards: an ace and 3 kings. They are shuffled and placed face down. I didn't look at the cards, my dad looked at the first card, and my brother looked at the first and second cards. What is the probability of the ace being one of the last 2 cards? For me: 1/2. For my dad: if he saw the ace, it is 0; otherwise 2/3. For my brother: if he saw the ace, it is 0; otherwise 1.

    How can there be different probabilities of the same event? It is because probability is something in the mind calculated because of imperfect knowledge. It is not a property of reality. Reality will take only a single path. We just don't know what that path is. It is pointless to ask for "the real likelihood" of an event. The likelihood depends on how much information you have. If you had all the information, the likelihood of the event would be 100% or 0%.
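    The three observers' numbers can be checked by enumerating the orderings of the four cards; a minimal Python sketch:

```python
from fractions import Fraction
from itertools import permutations

# All equally likely orderings of one ace among three kings, face down.
decks = list(permutations("AKKK"))

def p_ace_in_last_two(cards_seen):
    """Condition on the first `cards_seen` cards having been kings."""
    live = [d for d in decks if "A" not in d[:cards_seen]]
    return Fraction(sum("A" in d[2:] for d in live), len(live))

print(p_ace_in_last_two(0))  # 1/2 -- me: saw nothing
print(p_ace_in_last_two(1))  # 2/3 -- dad: first card was a king
print(p_ace_in_last_two(2))  # 1   -- brother: first two were kings
```

    Same deck, same event, three different states of partial information, three different probabilities.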

    The competent frequentist would presumably not be befuddled by these supposed paradoxes. Since he would not be befuddled (or so I am fairly certain), the "paradoxes" fail to prove the superiority of the Bayesian approach. Frankly, the treatment of these "paradoxes" in terms of repeated experiments seems so straightforward that I don't know how you can possibly think there's a problem.

    4[anonymous]11y
    Say you have a circle. On this circle you draw the inscribed equilateral triangle. Simple, right? Okay. For a random chord in this circle, what is the probability that the chord is longer than the side of the triangle? To choose a random chord, there are three obvious methods:
    1. Pick a point on the circle's perimeter, and draw the triangle with that point as a vertex. Now when you pick a second point on the perimeter as the other endpoint of your chord, you can plainly see that in 1/3 of the cases the resulting chord will be longer than the triangle's side.
    2. Pick a random radius (line from center to perimeter). Rotate the triangle so one of its sides bisects this radius. Now pick a point on the radius to be the midpoint of your chord. Apparently now, the probability of the chord being longer than the side is 1/2.
    3. Pick a random point inside the circle to be the midpoint of your chord (chords are unique by midpoint). If the midpoint falls inside the circle inscribed in the triangle, the chord is longer than the side of the triangle. The inscribed circle has 1/4 the area of the circumscribing circle, and that is our probability.
    WHAT NOW?! The solution is to choose the distribution of chords that lets us be maximally indifferent/ignorant, i.e. the one that is scale, translation, and rotation invariant. The second solution has those properties. (See the Wikipedia article on Bertrand's paradox.)
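    The three methods are easy to simulate; a minimal Monte Carlo sketch (unit circle assumed, so the inscribed triangle's side is √3):

```python
import math
import random

random.seed(0)
N = 100_000
side = math.sqrt(3)  # side of the equilateral triangle inscribed in a unit circle

def chord_from_endpoints():
    # Method 1: two uniform points on the perimeter.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((a - b) / 2))

def chord_from_radius():
    # Method 2: uniform midpoint along a radius.
    d = random.uniform(0, 1)  # midpoint's distance from the center
    return 2 * math.sqrt(1 - d * d)

def chord_from_area():
    # Method 3: uniform midpoint inside the circle (rejection sampling).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

for method, analytic in [(chord_from_endpoints, 1 / 3),
                         (chord_from_radius, 1 / 2),
                         (chord_from_area, 1 / 4)]:
    p = sum(method() > side for _ in range(N)) / N
    print(f"{method.__name__}: {p:.3f} (analytic {analytic:.3f})")
```

    Three perfectly good sampling procedures, three different answers: "a random chord" underdetermines the distribution until you say what information you are indifferent over.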

    "Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind."

    Eliezer, in quantum mechanics, one does not say that one does not have knowledge of both position and momentum of a particle simultaneously. Rather, one says that one CANNOT have such knowledge. This contradicts your statement that ignorance is in the mind. If quantum mechanics is true, then ignorance/uncertainty is a part of nature and not just something that agents have.

    2[anonymous]11y
    Wither knowledge. It is not knowledge that causes this effect; it is the fact that momentum amplitude and position amplitude relate to one another by a Fourier transform. A narrow spike in momentum is a wide blob in position, and vice versa, by mathematical necessity. Quantum mechanics' apparent weirdness comes from wanting to measure quantum phenomena in classical terms.


    Constant: The competent frequentist would presumably not be befuddled by these supposed paradoxes.

    Not the last two paradoxes, no. But the first case given, the biased coin whose bias is not known, is indeed a classic example of the difference between Bayesians and frequentists. The frequentist says:

    "The coin's bias is not a random variable! It's a fixed fact! If you repeat the experiment, it won't come out to a 0.5 long-run frequency of heads!" (Likewise when the fact to be determined is the speed of light, or whatever.) "If you flip the coin 10 times, I can make a statement about the probability that the observed ratio will be within some given distance of the inherent propensity, but to say that the coin has a 50% probability of turning up heads on the first occasion is nonsense - that's just not the real probability, which is unknown."

    According to the frequentist, apparently there is no rational way to manage your uncertainty about a single flip of a coin of unknown bias, since whatever you do, someone else will be able to criticize your belief as "subjective" - such a devastating criticism that you may as well, um, flip a coin. Or consul... (read more)

    2radfordd13y
    Eliezer: You're repeating the wrong experiment. The correct experiment for a frequentist to repeat is one where a coin is chosen from a pool of biased coins, and tossed once. By repeating that experiment, you learn something about the average bias in the pool of coins. For a symmetrically biased pool, the frequency of heads would approach 0.5. So your original premise is wrong. A frequentist approach requires a series of trials of the correct experiment. Neither the frequentist nor the Bayesian can rationally evaluate unknown probabilities. A better way to say that might be: "In my view, it's okay for both frequentists and Bayesians to say 'I don't know.'"
    9buybuydandavis12y
    I think EY's example here should actually be targeted at the probability-as-propensity theory of Von Mises (Richard, not Ludwig), not the frequentist theory, although even frequentists often conflate the two. The probability for you is not some inherent propensity of the physical situation, because the coin will flip depending on how it is weighted and how hard it is flipped. The randomness isn't in the physical situation, but in our limited knowledge of the physical situation. The argument against frequentist thinking is that we're not interested in a long-term frequency of an experiment. We want to know how to bet now. If you're only going to talk about long-term frequencies of repeatable experiments, you're not that useful when I'm facing one con man with a biased coin. That singular event is what it is. If you're going to argue that you have to find the right class of events in your head to sample from, you're already halfway down the road to bayesianism. Now you just have to notice that the class of events is different for the con man than it is for you, because of your differing states of knowledge, and you'll make it all the way there. Notice how you thought up a symmetrically biased pool. Where did that pool come from? Aren't you really just injecting a prior on the physical characteristics into your frequentist analysis? If you push frequentism past the usual frequentist limitations (physical propensity, repeated experiments), you eventually recreate bayesianism. "Inside every non-Bayesian, there is a Bayesian struggling to get out".
    0TheAncientGeek8y
    yep.
    -2Peterdjones13y
    In your opinion. Many Worlds does not make sense in the opinions of its critics. You are entitled to back an interpretation as you are entitled to back a football team. You are not entitled to portray your favourite interpretation of quantum mechanics as a matter of fact. If interpretations were provable, they wouldn't be called interpretations.
    4Perplexed13y
    As I understand it, EY's commitment to MWI is a bit more principled than a choice between soccer teams. MWI is the only interpretation that makes sense given Eliezer's prior metaphysical commitments. Yes rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.
    -3Peterdjones13y
    He still shouldn't be stating it as a fact when it based on "commitments".
    0[anonymous]13y
    Aumann's agreement theorem.
    2Eugine_Nier13y
    assumes common priors, i.e., a common metaphysical commitment.
    3[anonymous]13y
    The metaphysical commitment necessary is weaker than it looks.
    0MarkusRamikin13y
    This theorem (valuable though it may be) strikes me as one of the easiest abused things ever. I think Ayn Rand would have liked it: if you don't agree with me, you're not as committed to Reason as I am.
    0[anonymous]13y
    Except that isn't what I said. If MWI is wrong, I want to believe that MWI is wrong. If MWI is right, I want to believe MWI is right.
    1jsalvatier13y
    I believe he's saying that rational people should agree on metaphysics (or probability distributions over different systems). In other words, to disagree about MWI, you need to dispute EY's chain of reasoning metaphysics->evidence->MWI, which Perplexed says is difficult or dispute EY's metaphysical commitments, which Perplexed implies is relatively easier.
    0Islander13y
    That's interesting. The only problem now is to find a rational person to try it out on.
    2[anonymous]11y
    MWI distinguishes itself from Copenhagen by making testable predictions. We simply don't have the technology yet to test them to a sufficient level of precision as to distinguish which meta-theory models reality. See: http://www.hedweb.com/manworld.htm#unique In the meantime, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.
    5OccamsTaser11y
    Indeed there are, but this is not the same as strong metaphysical reasons to trust MWI over all alternative explanations. In particular, EY argued quite forcefully (and rightly so) that collapse postulates are absurd as they would be the only "nonlinear, non CPT-symmetric, acausal, FTL, discontinuous..." part of all physics. He then argued that since all single-world QM interpretations are absurd (a non-sequitur on his part, as not all single-world QM interpretations involve a collapse), many-worlds wins as the only multi-world interpretation (which is also slightly inaccurate, not that many-minds is taken that seriously around here). Ultimately, I feel that LW assigns too high a prior to MW (and too low a prior to bohmian mechanics).
    2[anonymous]11y
    It's not just about collapse - every single-world QM interpretation either involves extra postulates, non-locality or other surprising alterations of physical law, or yields falsified predictions. The FAQ I linked to addresses these points in great detail. MWI is simple in the Occam's razor sense - it is what falls out of the equations of QM if you take them to represent reality at face value. Single-world meta-theories require adding additional restrictions which are at this time completely unjustified from the data.
    2TobyBartels13y
    I always found it really strange that EY believes in Bayesianism when it comes to probability theory but many worlds when it comes to quantum physics. Mathematically, probability theory and quantum physics are close analogues (of which quantum statistical physics is the common generalisation), and this extends to their interpretations. (This doesn't apply to those interpretations of quantum physics that rely on a distinction between classical and quantum worlds, such as the Copenhagen interpretation, but I agree with EY that these don't ultimately make any sense.) There is a many-worlds interpretation of probability theory, and there is a Bayesian interpretation of quantum physics (to which I subscribe). I need to write a post about this some time.
    5endoself13y
    Both of these are false. Consider the trillionth binary digit of pi. I do not know what it is, so I will accept bets where the payoff is greater than the loss, but not vice versa. However, there is obviously no other world where the trillionth binary digit of pi has a different value. The latter is, if I understand you correctly, also wrong. I think that you are saying that there are 'real' values of position, momentum, spin, etc., but that quantum mechanics only describes our knowledge about them. This would be a hidden variable theory. There are very many constraints imposed by experiment on what hidden variable theories are possible, and all of the proposed ones are far more complex than MWI, making it very unlikely that any such theory will turn out to be true.
    4TobyBartels13y
    I am saying that the wave function (to be specific) describes one's knowledge about position, momentum, spin, etc., but I make no claim that these have any 'real' values. In the absence of a real post, here are some links: * John Baez (ed, 2003), Bayesian Probability Theory and Quantum Mechanics (a collection of Usenet posts, with an introduction); * Carlton Caves et al (2001), Quantum probabilities as Bayesian probabilities (a paper published in Physical Review A). By the way, you seem to have got this, but I'll say it anyway for the benefit of any other readers, since it's short and sums up the idea: The wave function exists in the map, not in the territory.
    6endoself13y
    I have not read the latter link yet, though I intend to. What do you have knowledge of then? Or is there some concept that could be described as having knowledge of something without that thing having an actual value? From Baez: This is horribly misleading. Bayesian probability can be applied perfectly well in a universe that obeys MWI while being kept completely separate mathematically from the quantum mechanical uncertainty.
    3TobyBartels13y
    As a mathematical statement, what Baez says is certainly correct (at least for some reasonable mathematical formalisations of ‘probability theory’ and ‘quantum mechanics’). Note that Baez is specifically discussing quantum statistical mechanics (which I don't think he makes clear); non-statistical quantum mechanics is a different special case which (barring trivialities) is completely disjoint from probability theory. Of course, the statement can still be misleading; as you note, it's perfectly possible to interpret quantum statistical physics by tacking Bayesian probability on top of a many-worlds interpretation of non-statistical quantum mechanics. That is, it's possible but (I argue) unwise; because if you do this, then your beliefs do not pay rent! The classic example is a spin-1/2 particle that you believe to be spin-up with 50% probability and spin-down with 50% probability. (I mean probability here, not a superposition.) An alternative map is that you believe that the particle is spin-right with 50% probability and spin-left with 50% probability. (Now superposition does play a part, as spin-right and spin-left are both equally weighted superpositions of spin-up and spin-down, but with opposite relative phases.) From the Bayesian-probability-tacked-onto-MWI point of view, these are two very different maps that describe incompatible territories. Yet no possible observation can ever distinguish these! Specifically, if you measure the spin of the particle along any axis, both maps predict that you will measure the spin to be in one direction with 50% probability and in the other direction with 50% probability. (The wavefunctions give Born probabilities for the observations, which are then weighted according to your Bayesian probabilities for the wavefunctions, giving the result of 50% every time.) In statistical mechanics as it is practised, no distinction is made between these two maps. (And since the distinction pays no rent in terms of predictions, I argue
    1endoself13y
    I definitely don't disagree with that. They can give different predictions. Maybe I can ask my friend who prepared the quantum state and ey can tell me which it really is. I might even be able to use that knowledge to predict the current state of the apparatus ey used to prepare the particle. Of course, it's also possible that my friend would refuse to tell me or that I got the particle already in this state without knowing how it got there. That would just be belief in the implied invisible.

    "On August 1st 2008 at midnight Greenwich time, a one-foot sphere of chocolate cake spontaneously formed in the center of the Sun; and then, in the natural course of events, this Boltzmann Cake almost instantly dissolved." I would say that this hypothesis is meaningful and almost certainly false. Not that it is "meaningless". Even though I cannot think of any possible experimental test that would discriminate between its being true, and its being false.

    A final possibility is that there never was a pure state; the universe started off in a mixed state. In this example, whether this should be regarded as an ontologically fundamental mixed state or just a lack of knowledge on my part depends on which hypothesis is simpler. This would be too hard to judge definitively given our current understanding.

    In MWI, the Born probabilities aren't probabilities, at least not in the Bayesian sense. There is no subjective uncertainty; I know with very high probability that the cat is both alive and dead. Of course, that doesn't tell us what they are, just what they are not. I think a large majority of physicists would agree that the collapse of the wavefunction isn't an actual process.

    How would you analyze the Wigner's friend thought experiment? In order for Wigner's observations to follow the laws of QM, both versions of his friend must be calculated, since they have a chance to interfere with each other. Wouldn't both streams of conscious experience occur?
    1TobyBartels13y
    I don't understand what you're saying in these paragraphs. You're not describing how the two situations lead to different predictions; you're describing the opposite: how different set-ups might lead to the two states.

    Possibly you mean something like this: In situation A, my friend intended to prepare one spin-down particle, but I predict with 50% chance that they hooked up the apparatus backward and produced a spin-up particle instead. In situation B, they intended to prepare a spin-right particle, with the same chance of accidental reversal. These are different situations, but the difference lies in the apparatus, my friend's mind, the lab book, etc, not in the particle. It would be much the same if I knew that the machine always produced a spin-up particle and the up/down/right/left dial did nothing: the situations are different, but not because of the particle produced. (However, in this case, the particle is not even entangled with the dial reading.)

    I especially don't know what you mean by this. The states that most people talk about when discussing quantum physics (including Eliezer in the Sequence) are pure states, and mixed states are probabilistic mixtures of these. If you're a Bayesian when it comes to classical probability (even if you believe in the wave function when it comes to purely quantum indeterminacy), then you should never believe that the real wave function is mixed; you just don't know which pure state it is.

    Unless you distinguish the map where the particle is spin-up or -down with equal odds from the map where the particle is definitely in the fully mixed state in the territory? Then you have an even greater plethora of distinctions between maps that pay no rent! 
For Schrödinger's Cat or Wigner's Friend, in any realistic situation, the cat or friend would quickly decohere and become entangled with my observations, leaving it in a mixed state: the common-sense situation where it's alive/happy/etc with 50% chance and dead/sad/etc wit
    0endoself13y
    I did not explain this very well. My point was that when we don't know the particle's spin, it is still a part of the simplest description that we have of reality. It should not be any more surprising that a belief about a quantum mechanical state does not have any observable consequences than that a belief about other parts of the universe that cannot be seen due to inflation does not have any observable consequences. I included this just in case a theory that implies such a thing ever turns out to be simpler than alternatives. I thought this was relevant because I mistakenly thought that you had mentioned this distinction. What if your friend and the cat are implemented on a reversible quantum computer? The amplitudes for your friend's two possible states may both affect your observations, so both would need to be computed.
    3TobyBartels13y
    Sure, the spin of the particle is a feature of the simplest description that we have. Nevertheless, no specific value of the particle's spin is a feature of the simplest description that we have; this is true in both the Bayesian interpretation and in MWI.

    To be fair, if reality consists only of a single particle with spin 1/2 and no other properties (or more generally if there is a spin-1/2 particle in reality whose spin is not entangled with anything else), then according to MWI, reality consists (at least in part) of a specific direction in 3-space giving the axis and orientation of the particle's spin. (If the spin is greater than 1/2, then we need something a little more complicated than a single direction, but that's probably not important.) However, if the particle is entangled with something else, or even if its spin is entangled with some other property of the particle (such as its position or momentum), then the best that you can say is that you can divide reality mathematically into various worlds, in each of which the particle has a spin in a specific direction around a specific axis. (In the Bohmian interpretation, it is true that the particle has a specific value of spin, or rather it has a specific value about any axis. But presumably this is not what you mean.)

    As for which is the simplest description of reality, the Bayesian interpretation really is simpler. To fully describe reality as best I can with the knowledge that I have, in other words to write out my map completely, I need to specify less information in the fully Bayesian interpretation (FBI) than in MWI with Bayesian classical probability on top (MWI+BCP). This is because (as in the toy example of the spin-1/2 particle) different MWI+BCP maps correspond to the same FBI map; some additional information must be necessary to distinguish which MWI+BCP map to use. If you're an objective Bayesian in the sense that you believe that the correct prior to use is determined entirely by what inf
    1TobyBartels13y
    I wrote: I've begun to think that this is probably not a good example. It's mathematically simple, so it is good for working out an example explicitly to see how the formalism works. (You may also want to consider a system with two spin-1/2 particles; but that's about as complicated as you need to get.) However, it's not good philosophically, essentially since the universe consists of more than just one particle!

    Mathematically, it is a fact that, if a spin-1/2 particle is entangled with anything else in the universe, then the state of the particle is mixed, even if the state of the entire universe is pure. So a mixed state for a single particle suggests nothing philosophically, since we can still believe that the universe is in a pure state, which causes no problems for MWI. Indeed, endoself immediately looks at situations where the particle is so entangled! I should have taken this as a sign that my example was not doing its job.

    I still stand by my responses to endoself, as far as they go. One of the minor attractions of the Bayesian interpretation for me is that it treats the entire universe and single particles in the same way; you don't have to constantly remind yourself that the system of interest is entangled with other systems that you'd prefer to ignore, in order to correctly interpret statements about the system. But it doesn't get at the real point.

    The real point is that the entire universe is in a mixed state; I need to establish this. In the Bayesian interpretation, this is certainly true (since I don't have maximal information about the universe). According to MWI, the universe is in a pure state, but we don't know which. (I assume that you, the reader, don't know which; if you do, then please tell me!) So let's suppose that |psi> and |phi> are two states that the universe might conceivably be in (and assume that they're orthogonal to keep the math simple). Then if you believe that the real state of the universe is |psi> with 50% chance and |phi>
    0nshepperd11y
    On the other hand, if the particle is spin up, the probability of observing "up" in an up-down measurement is 1, while the probability is 0 if the particle is down. So in the case of an up-down prior, observing "up" changes your probabilities, while in the case of a left-right prior, it does not.
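nshepperd's point can be written out as a two-line Bayes update. In this sketch (my own, not from the thread) the Born-rule likelihoods are filled in by hand: P(observe "up" | spin-up) = 1, P(observe "up" | spin-down) = 0, and 0.5 for either of spin-left and spin-right:

```python
def posterior(prior, likelihood):
    """Bayes update: prior and likelihood are dicts keyed by hypothesis."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Up/down prior: observing "up" is decisive.
print(posterior({"up": 0.5, "down": 0.5}, {"up": 1.0, "down": 0.0}))
# -> {'up': 1.0, 'down': 0.0}

# Left/right prior: observing "up" has likelihood 0.5 under both
# hypotheses, so the observation changes nothing.
print(posterior({"left": 0.5, "right": 0.5}, {"left": 0.5, "right": 0.5}))
# -> {'left': 0.5, 'right': 0.5}
```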
    0TobyBartels11y
    That's a good point. It seems to me another problem with the MWI (or specifically, with Bayesian classical probability on top of quantum MWI) that making an observation could leave your map entirely unchanged. However, in practice, followers of MWI have another piece of information: which world we are in. If your prior is 50% left and 50% right, then either way you believe that the universe is a superposition of an up world and a down world. Measuring up tells you that we are in the up world. For purposes of future predictions, you remember this fact, and so effectively you believe in 100% up now, the same as the person with the 50% up and 50% down prior. Those two half-Bayesians disagree about how many worlds there are, but not about what the up world —the world that we're in— is like.
    0nshepperd11y
    To be precise, if your prior is 50% left and 50% right, then you generally believe that the world you are in is either a left world or a right world, and you don't know which. A left or right world itself factorises into a tensor product of (rest of the world) × (superposition of up particle and down particle). Measuring the particle along the up/down axis causes the rest of the world to become entangled with the particle along that axis, splitting it into two worlds, of which you observe yourself to be in the 'up' one. Of course, observing the particle along the up/down axis tells you nothing about whether its original spin was left or right, and leaves you incapable of finding out, since the two new worlds are very far apart, and it's the phase difference between those two worlds that stores that information.
    6Wei Dai13y
    Please explain how you know this? ETA: Also, whatever does exist in the territory, it has to generate subjective experiences, right? It seems possible that a wave function could do that, so saying that "the wave function exists in the territory" is potentially a step towards explaining our subjective experiences, which seems like should be the ultimate goal of any "interpretation". If, under the all-Bayesian interpretation, it's hard to say what exists in the territory besides that the wave function doesn't exist in the territory, then I'm having trouble seeing how it constitutes progress towards that ultimate goal.
    -1TobyBartels13y
    I wouldn't want to pretend that I know this, just that this is the Bayesian interpretation of quantum mechanics. One might as well ask how we Bayesians know that probability is in the map and not the territory. (We are all Bayesians when it comes to classical probability, right?) Ultimately, I don't think that it makes sense to know such things, since we make the same physical predictions regardless of our interpretation, and only these can be tested.

    Nevertheless, we take a Bayesian attitude toward probability because it is fruitful; it allows us to make sense of natural questions that other philosophies can't and to keep things mathematically precise without extra complications. And we can extend this into the quantum realm as well (which is good since the universe is really quantum). In both realms, I'm a Bayesian for the same reasons. A half-Bayesian approach adds extra complications, like the two very different maps that lead to the same predictions. (See this comment's cousin in reply to endoself.)

    ETA: As for knowing what exists in the territory as an aid to explaining subjective experience, we can still say that the territory appears to consist ultimately of quark fields, lepton fields, etc, interacting according to certain laws, and that (built out of these) we appear to have rocks, people, computers, etc, acting in certain ways. We can even say that each particular rock appears to have a specific value of position and momentum, up to a certain level of precision (which fails to be infinitely precise first because the definition of any particular rock isn't infinitely precise, long before the level of quantum indeterminacy). We just can't say that each particular quark has a specific value of position and momentum beyond a certain level of precision, despite being (as far as we know) fundamental, and this is true regardless of whether we're all-Bayesian or many-worlder. (Bohmians believe that such values do exist in the territory, but these are unobservable
    0Juno_Watt11y
    Where "extending" seems to mean "assuming". I find it more fruitful to come up with tests of (in)determinism, such as Bell's Inequalities.
    0TobyBartels11y
    I'm not sure what you mean by ‘assuming’. Perhaps you mean that we see what happens if we assume that the Bayesian interpretation continues to be meaningful? Then we find that it works, in the sense that we have mutually consistent degrees of belief about physically observable quantities. So the interpretation has been extended.
    2Juno_Watt11y
    If the universe contains no objective probabilities, it will still contain subjective, ignorance-based probabilities. If the universe contains objective probabilities, it will also still contain subjective, ignorance-based probabilities. So the fact that subjective probabilities "work" doesn't tell you anything about the universe. It isn't a test. Aspect's experiment to test Bell's theorem is a test. It tells you there isn't (local, single-universe) objective probability.
    1TobyBartels11y
    OK, I think that I understand you now. Yes, Bell's inequalities, along with Aspect's experiment to test them, really tell us something. Even before the experiment, the inequalities told us something theoretical: that there can be no local, single-world objective interpretation of the standard predictions of quantum mechanics (for a certain sense of ‘objective’); then the experiment told us something empirical: that (to a high degree of tolerance) those predictions were correct where they mattered.

    Like Bell's inequalities, the Bayesian interpretation of quantum mechanics tells us something theoretical: that there can be a local, single-world interpretation of the standard predictions of quantum mechanics (although it can't be objective in the sense ruled out by Bell's inequalities). So now we want the analogue of Aspect's experiment, to confirm these predictions where it matters and tell us something empirical.

    Bell's inequalities are basically a no-go theorem: an interpretation with desired features (local, single-world, objective true value of all potentially observable quantities) does not exist. There's a specific reason why it cannot exist, and Aspect's experiment tests that this reason applies in the real world. But Fuchs et al's development of the Bayesian interpretation is a go theorem: an interpretation with some desired features (local, single-world) does exist. So there's no point of failure to probe with an experiment. We still learn something about the universe, specifically about the possible forms of maps of it. But it's a purely theoretical result.

    I agree that Bell's inequalities and Aspect's experiment are a more interesting result, since we get something empirical. But it wasn't a surprising result (which might be hindsight bias on my part). There seem to be a lot of people here (although that might be my bad impression) who think that there is no local, single-world interpretation of the standard predictions of quantum mechanics (or even no s
    -4Peterdjones13y
    That is not an uncontroversial fact. For instance, Roger Penrose, from The Emperor's New Mind:

    OBJECTIVITY AND MEASURABILITY OF QUANTUM STATES

    Despite the fact that we are normally only provided with probabilities for the outcome of an experiment, there seems to be something objective about a quantum-mechanical state. It is often asserted that the state-vector is merely a convenient description of 'our knowledge' concerning a physical system or, perhaps, that the state-vector does not really describe a single system but merely provides probability information about an 'ensemble' of a large number of similarly prepared systems. Such sentiments strike me as unreasonably timid concerning what quantum mechanics has to tell us about the actuality of the physical world.

    Some of this caution, or doubt, concerning the 'physical reality' of state-vectors appears to spring from the fact that what is physically measurable is strictly limited, according to theory. Let us consider an electron's state of spin, as described above. Suppose that the spin-state happens to be |a>, but we do not know this; that is, we do not know the direction a in which the electron is supposed to be spinning. Can we determine this direction by measurement? No, we cannot. The best that we can do is extract 'one bit' of information, that is, the answer to a single yes/no question. We may select some direction p in space and measure the electron's spin in that direction. We get either the answer YES or NO, but thereafter, we have lost the information about the original direction of spin. With a YES answer we know that the state is now proportional to |p>, and with a NO answer we know that the state is now in the direction opposite to p. In neither case does this tell us the direction a of the state before measurement, but merely gives some probability information about a.

    On the other hand, there would seem to be something completely objective about the direction a itself, in which the electron 'happened

    Maybe I'm stupid here... what difference does it make?

    Sure, if we had a coin-flip-predicting robot with quick eyes it might be able to guess right/predict the outcome 90% of the time. And if we were precognitive we could clean up at Vegas.

    In terms of non-hypothetical real decisions that confront people, what is the outcome of this line of reasoning? What do you suggest people do differently and in what context? Mark cards?

    B/c currently, as far as I can see, you're saying, "The coin won't end up 'heads or tails' -- it'll end up heads, or it'll end u... (read more)

    Sudeep: the inverse certainty of the position and momentum is a mathematical artifact and does not depend upon the validity of quantum mechanics. (Er, at least to the extent that math is independent of the external world!)

    PK: I like your posts, and don't take this the wrong way, but, to me, your example doesn't have as much shocking unintuitiveness as the ones Eliezer Yudkowsky (no underscore) listed.

    I'd like to understand: Are frequentist "probability" and subjective "probability" simply two different concepts, to be distinguished carefully? Or is there some true debate here?

    I think that Jaynes shows a derivation, following Bayesian principles, of frequentist probability from subjective probability. I'd love to see one of Eliezer's lucid explanations on that.

    You can derive frequentist probabilities from subjective probabilities but not the other way around.

    2Ronny Fernandez13y
    Please elaborate, EY. I think it would be a wonderfully clarifying post if you were to write a technical derivation of frequentist probability from the "probability in the mind" concept of Bayesian probability. If you decide to do this, or anyone knows where I could find such a text, please let me know. Related question: Is there an algebra that describes the frequentist interpretation of probability? If so, where is it isomorphic to Bayesian algebra and where does it diverge? I want to know if the dispute has to do just with the semantic interpretation of 'P(a)', or if the 'P(a)' of the frequentist actually behaves differently than the Bayesian 'P(a)' syntactically.
    4JGWeissman13y
    If a well-calibrated rationalist, for a given probability p, independently believes N different things each with probability p, then you can expect about p*N of those beliefs to be correct. See the discussion of calibration in the Technical Explanation.
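That calibration claim can be checked with a toy simulation (my own illustration: each belief is simply made true with chance p, which is exactly what calibration asserts about a well-calibrated agent):

```python
import random

random.seed(0)  # reproducible run

# N independent beliefs, each held with probability p and each true with
# chance p; the fraction that turn out correct should be close to p.
p, n = 0.7, 100_000
correct = sum(random.random() < p for _ in range(n))
print(correct / n)  # close to 0.7
```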
    1buybuydandavis13y
    Jaynes's book shows how frequencies are estimated in his system, and somewhere, maybe his book, he compares and contrasts his ideas with frequentists and Kolmogorov. In fact, he expends great effort in contrasting his views to those of frequentists.

    Silas: My post wasn't meant to be "shockingly unintuitive", it was meant to illustrate Eliezer's point that probability is in the mind and not out there in reality in a ridiculously obvious way.

    Am I somehow talking about something entirely different than what Eliezer was talking about? Or should I complexificationafize my vocabulary to seem more academic? English isn't my first language after all.

    If I'm being asked to accept or reject a number meant to correspond to the calculated or measured likelihood of heads coming up, and I trust the information about it being biased, then the only correct move is to reject the 0.5 probability.

    Alas, no. Here's the deal: implicit in all the coin toss toy problems is the idea that the observations may be modeled as exchangeable. It really really helps to have a grasp on what the math looks like when we assume exchangeability.

    In models where (infinite) exchangeability is assumed, the concept of long-run frequen... (read more)
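For readers who want the concrete math behind exchangeability: the standard toy model is the beta-binomial, in which a latent long-run frequency appears as a mathematical device even though only finitely many tosses are ever observed. A minimal sketch (my own, assuming a uniform Beta(1,1) prior; the posterior predictive is Laplace's rule of succession):

```python
# With a Beta(a, b) prior over the latent long-run frequency f, observing
# h heads in n exchangeable tosses gives posterior Beta(a + h, b + n - h),
# whose mean is the predictive probability of heads on the next toss.
def predictive_heads(h, n, a=1.0, b=1.0):
    return (a + h) / (a + b + n)

print(predictive_heads(0, 0))   # 0.5 -- total ignorance
print(predictive_heads(7, 10))  # 8/12, about 0.667
```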

    Eliezer, I have no argument with the Bayesian use of the probability calculus and so I do not side with those who say "there is no rational way to manage your uncertainty", but I think I probably do have an argument with the insistence that it is the one true way. None of the problems you have so far outlined, including the coin one, really seem to doom either frequentism specifically, or more generally, an objective account of probability. I agree with this:

    Even before a fair coin is tossed, the notion that it has an inherent 50% probability of
    ... (read more)
    3ksvanhorn13y
    "But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses." These infinite sequences never exist, and very often they don't even exist approximately. We only observe finite numbers of events. I think this is one of the things Jaynes had in mind when he talked about the proper handling of infinities -- you should start by analyzing the finite case, and look for a well-defined limit as n increases without bound. Unfortunately, frequentist statistics starts with the limit at infinity. As an example of how these limiting frequencies taken over infinite sequences often make no sense in real-world situations, consider statistical models of human language, such as are used in automatic speech recognition. Such models assign a prior probability to each possible utterance a person could make. What does it mean, from a frequentist standpoint, to say that there is a probability of 1e-100 that a person will say "The tomatoe flew dollars down the pipe"? There haven't been 1e100 separate utterances by all human beings in all of human history, so how could a probability of 1e-100 possibly correspond to some sort of long-run frequency?

    (Replace the link to "removable singularity" with one to removable discontinuity.)

    No way to do it the other way around? Nothing along the lines of, say, considering a set of various "things to be explained" and for each a hypothesis explaining it, and then talking about subsets of those? ie, a subset in which 1/10 of the hypotheses in that subset are objectively true would be a set of hypotheses assigned .1 probability, or something?

    Yeah, the notion of how to do this exactly is, admittedly, fuzzy in my head, but I have to say that it sure does seem like there ought to be some way to use the notion of frequentist probability to construct subjective probability along these lines.

    I may be completely wrong though.

    "Suppose our information about bias in favour of heads is equivalent to our information about bias in favour of tail. Our pdf for the long-run frequency will be symmetrical about 0.5 and its expectation (which is the probability in any single toss) must also be 0.5. It is quite possible for an expectation to take a value which has zero probability density."

    What I said: if all you know is that it's a trick coin, you can lay even odds on heads.

    "We can refuse to believe that the long-run frequency will converge to exactly 0.5 while simultaneou... (read more)
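The quoted claim that an expectation can fall exactly where the probability density is zero is easy to verify numerically. For instance, the pdf f(x) = 4·|x − 1/2| on [0, 1] (my example, not from the comment) is symmetric about 1/2, has expectation 1/2, and yet assigns zero density to 1/2 itself:

```python
import numpy as np

# f(x) = 4*|x - 0.5| on [0, 1]: a valid pdf, symmetric about 0.5.
xs = np.linspace(0, 1, 2_000_001)
pdf = 4 * np.abs(xs - 0.5)
dx = xs[1] - xs[0]

print((pdf * dx).sum())       # ~1.0 : normalised
print((xs * pdf * dx).sum())  # ~0.5 : the expectation
print(pdf[len(xs) // 2])      # 0.0 : density at the expectation itself
```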

    But frequentists emphatically are not talking about individual tosses. They are talking about infinitely repeated tosses.

    In other words, they are talking about tail events. That a frequentist probability (i.e., a long-run frequency) even exists can be a zero-probability event -- but you have to give axioms for probability before you can even make this claim. (Furthermore, I'm never going to observe a tail event, so I don't much care about them.)

    Conrad,

    Okay, so unpack "ungrounded" for me. You've used the phrases "probability" and "calculated or measured likelihood of heads coming up", but I'm not sure how you're defining them.

    I'm going to do two things. First, I'm going to Taboo "probability" and "likelihood" (for myself -- you too, if you want). Second, I'm going to ask you exactly which specific observable event it is we're talking about. (First toss? Twenty-third toss? Infinite collection of tosses?) I have a definite feeling that our disagreement is about word usage.

    If you honestly subscribe to this view of probability, please never give the odds for winning the lottery again. Or any odds for anything else.

    What does telling me your probability that you assign something actually tell me about the world? If I don't know the information you are basing it on, very little.

    I'm also curious about a formulation of probability theory that completely ignores random variables and other results that are based upon them (e.g. the law of large numbers, the central limit theorem).

    Heck a re-write of http://en.wikipedia.org/wiki/Probability_theory with all mention of probabilities in the external world removed might be useful.

    I'm not sure the many-worlds interpretation fully eliminates the issue of quantum probability as part of objective reality. You can call it "anthropic pseudo-uncertainty" when you get split and find that your instances face different outcomes. But what determines the probability you will see those various outcomes? Just your state of knowledge? No, theory says it is an objective element of reality, the amplitude of the various elements of the quantum wave function. This means that probability, or at least its close cousin amplitude, is indeed an ... (read more)

    Will Pearson, I'm having trouble determining to whom your comment is addressed.

    Roland and Ian C. both help me understand where Eliezer is coming from. And PK's comment that "Reality will only take a single path" makes sense. That said, when I say a die has a 1/6 probability of landing on a 3, that means: Over a series of rolls in which no effort is made to systematically control the outcome (e.g. by always starting with 3 facing up before tossing the die), the die will land on a 3 about 1 in 6 times. Obviously, with perfect information, everything can be calculated. That doesn't mean that we can't predict the probability of... (read more)

    2bigjeff513y
    Place a Gomboc on a non-flat surface and that "inherent" property goes away. If it were inherent, it would not go away. Therefore, its probability is not inherent, it is an evaluation we can make if we have enough information about the prior conditions. In this case "on a flat surface" is plenty of information, and we can assign it a 100% probability. But what is its probability of righting itself on a surface angled 15 degrees? Is it still 100%? I doubt it, but I don't know. Very cool shape, by the way.
    1Jake_NB2y
    Then "Gomboc righting itself when on a flat surface" will have an inherent 100% probability. This doesn't refute the example.

    ::Okay, so unpack "ungrounded" for me. You've used the phrases "probability" and "calculated or measured likelihood of heads coming up", but I'm not sure how you're defining them.::

    Ungrounded: That was a good movie. Grounded: That movie made money for the investors. Alternatively: I enjoyed it and recommend it. -- is for most purposes grounded enough.

    ::I'm going to do two things. First, I'm going to Taboo "probability" and "likelihood" (for myself -- you too, if you want). Second, I'm going to ask you... (read more)

    GBM:: ..That said, when I say a die has a 1/6 probability of landing on a 3, that means: Over a series of rolls in which no effort is made to systematically control the outcome (e.g. by always starting with 3 facing up before tossing the die), the die will land on a 3 about 1 in 6 times.::

    --Well, no: it does mean that, but don't let's get tripped up that a measure of probability requires a series of trials. It has that same probability even for one roll. It's a consequence of the physics of the system, that there are 6 stable distinguishable end-states and explosively many intermediate states, transitioning amongst each other chaotically.

    Conrad.

    I have to say that it sure does seem like there ought to be some way to use the notion of frequentist probability to construct subjective probability along these lines.

    Assign a measure to each possible world (the prior probabilities). For some state of knowledge K, some set of worlds Ck is consistent with K (say, the set in which there is a brain containing K). For some proposition X, X is true in some set of worlds Cx. The subjective probability P(X|K) = measure(intersection(Ck,Cx)) / measure(Ck). Bayesian updating is equivalent to removing worlds from K. To make it purely frequentist, give each world measure 1 and use multisets.

    Does that work?
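One way to sanity-check the construction above is to run it on a toy set of worlds. In this sketch (the facts and measures are hypothetical, chosen purely for illustration) conditioning on knowledge is exactly "removing worlds" and renormalising:

```python
# Each world is a dict of facts paired with its measure (prior weight).
worlds = [
    ({"coin": "heads", "sky": "blue"}, 1.0),
    ({"coin": "tails", "sky": "blue"}, 1.0),
    ({"coin": "heads", "sky": "green"}, 0.5),
]

def prob(proposition, knowledge):
    """P(X|K) = measure(worlds where X and K) / measure(worlds where K)."""
    consistent = [(w, m) for w, m in worlds if knowledge(w)]
    total = sum(m for _, m in consistent)
    return sum(m for w, m in consistent if proposition(w)) / total

# With no knowledge beyond the prior measures: 1.5 / 2.5 = 0.6.
print(prob(lambda w: w["coin"] == "heads", lambda w: True))

# Updating = dropping worlds inconsistent with what we learn: 1 / 2 = 0.5.
print(prob(lambda w: w["coin"] == "heads", lambda w: w["sky"] == "blue"))
```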

    Who else thinks we should Taboo "probability", and replace it with two terms for objective and subjective quantities, say "frequency" and "uncertainty"?

    The frequency of an event depends on how narrowly the initial conditions are defined. If an atomically identical coin flip is repeated, obviously the frequency of heads will be either 1 or 0 (modulo a tiny quantum uncertainty).

    -1Peterdjones13y
    Yes, it looks like an argument about apples versus oranges to me.
    3Perplexed13y
    I think that we should follow Jaynes and insist upon 'probability' as the name of the subjective entity. But so-called objective probability should be called 'propensity'. Frequency is the term for describing actual data. Propensity is objectively expected frequency. Probability is subjectively expected frequency. That is the way I would vote.

    Oops, removing worlds from Ck, not K.

    GBM, I think you get the idea. The reason we don't want to say that the gomboc has an inherent probability of one for righting itself (besides that we, um, don't use probability one), is that as it is with the gomboc, so it is with the die or anything else in the universe. The premise is that determinism, in the form of some MWI, is (probably!) true, and so no matter what you or anyone else knows, whatever will happen is sure to happen. Therefore, when we speak of probability, we can only be referring to a state of knowledge. It is still of course the case... (read more)

    Cyan, sorry. My comment was to Eliezer and statements such as

    "that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon."

    I think there's still room for a concept of objective probability -- you'd define it as anything that obeys David Lewis's "Principal Principle" which this page tries to explain (with respect to some natural distinction between "admissible" and "inadmissible" information).

    Before accepting this view of probability and the underlying assumptions about the nature of reality, one should look at the experimental evidence. Try Groeblacher, Paterek, et al., arXiv:0704.2529 (Aug 6 2007). These experiments test various assumptions regarding non-local realism and conclude: "...giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned"

    Standard reply from MWIers is that MWI keeps realism and locality by throwing away a different hidden assumption called "counterfactual definiteness".

    Nick Tarleton:

    Who else thinks we should Taboo "probability", and replace it with two terms for objective and subjective quantities, say "frequency" and "uncertainty"?

    I second that; this would probably clear up a lot of the confusion and help us focus on the real issues.

    The "probability" of an event is how much anticipation you have for that event occurring. For example if you assign a "probability" of 50% to a tossed coin landing heads then you are half anticipating the coin to land heads.

    What about when you're dealing with a medication that might kill someone, or not: in the absence of any information, do you say that's 50-50?

    You've already given me information by using the word medication -- implicitly, you're asking me to recall what I know about medications before I render an answer. So no, those outcomes aren't necessarily equally plausible to me. Here's a situation which is a much better approximation(!) of total absence of information: either event Q or event J has happened just now, and I will tell you which in my next comment. The... (read more)

    Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy. So how can the answer in the first case be different from the answer in the latter two?

    Because they obviously aren't exclusive cases. I simply don't see mathematically why it's a paradox, so I don't see what this has to do with thinking that "probabilities are a property of things."

    The "paradox" is that people want to compare it to a different problem, the problem where the cards are ordered. In that case, if you ... (read more)

    Or, I suppose, I would compare it to the other noted statistical paradox, whereby a famous hospital has a better survival rate for both mild and severe cases of a disease than a less-noted hospital, but a worse overall survival rate because it sees more of the worst cases. The mere fact that people don't understand how to do averages has little to do with averages requiring an agent.

    The estimated Bayesian probability has nothing to do with the coin. If it did, assigning a probability of 0.5 to one of the two possible outcomes would be necessarily incorrect, because one of the few things we know about the coin is that it's not fair.

    The estimate is of our confidence in using that outcome as an answer. "How confident can I be that choosing this option will turn out to be correct?" We know that the coin is biased, but we don't know which outcome is more likely. As far as we know, then, guessing one way is as good as guessing... (read more)

    Another way to look at it: if you repeatedly select a coin with a random bias (selected from any distribution symmetric about .5) and flip it, H/T will come out 50/50.
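A quick simulation of that claim (a sketch; the uniform distribution on [0,1] is used here as one example of a bias distribution symmetric about .5):

```python
import random

random.seed(0)

trials = 100_000
heads = 0
for _ in range(trials):
    bias = random.random()       # bias drawn uniformly on [0,1], symmetric about 0.5
    if random.random() < bias:   # one flip of a coin with that bias
        heads += 1

print(heads / trials)  # close to 0.5
```

Any other symmetric choice (e.g. only 0.2 or 0.8, each half the time) gives the same marginal result.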

    Silas: The uncertainty principle comes from the fact that position and momentum are related by Fourier transform. Or, in layman's terms, from the fact that particles act like waves. This is one of the fundamental principles of QM, so yeah, it sort of does depend on the validity thereof. Not the Schrodinger equation itself perhaps, but other concepts.

    As for whether QM proves that all probabilities are inherent in a system, it doesn't. It just prevents mutual information in certain situations. In coin flips or dice rolls, theoretically you could predict the o... (read more)

    Z. M. Davis: Thank you. I get it now.

    Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong".

    If you disagree, consider this: suppose he wins the lottery after all by chance, can you still claim the next day that his belief was objectively wrong?

    Nick Tarleton: Not sure I entirely correctly understood your suggestion, need to think about it more.

    However, my initial thought is that it may require/assume logical omniscience.

    i.e., what of updating based on "subjective guesses" of which worlds are consistent or inconsistent with the data? That is, as consistent as you can tell, given bounded computational resources. I'm not sure, but your model, at least at first glance, may not be able to say useful stuff about agents that are not logically omniscient.

    Also, I'm unclear, could you clarify what it ... (read more)

    Hal, I'd say probability could be both part of objective physics and a mental state in this sense: Given our best understanding of objective physics, for any given mental state (including the info it has access to) there is a best rational set of beliefs. In quantum mechanics we know roughly the best beliefs, and we are trying to use that to infer more about the underlying set of states and info.

    Rolf Nelson: "Follow-up question: If Bob believes he has a >50% chance of winning the lottery tomorrow, is his belief objectively wrong? I would tentatively propose that his belief is unfounded, "unattached to reality", unwise, and unreasonable, but that it's not useful to consider his belief "objectively wrong"."

    It all depends on what information Bob has. He might have carefully doctored the machines and general setup of the lottery draw to an extent that he might have enough information to have that probability. Now if Bo... (read more)

    However, my initial thought is that it may require/assume logical omniscience.

    Probably. Bayes is also easier to work with if you assume logical omniscience (i.e. knowledge of P(E|X) and P(E|~X)).

    Also, I'm unclear, could you clarify what it is you'd be using a multiset for? Do you mean "increase measure only by increasing number of copies of this in the multiset, and no other means allowed" or did you intend something else?

    Yes, using multisets of worlds with identical measure is equivalent (for rational measures only) but 'more frequentis... (read more)

    You have to lay £1 on heads or tails on a biased coin toss. Your probability is in your mind, and your mind has no information either way. Hence, you lay the pound on either. Hence you assign a 0.5 probability to heads, and also to tails.

    If your argument is 'I don't mean my personal probability, I mean the actual probability', abandon all hope. All probability is 'perceived'. Unless you think you have all the evidence.

    All probability is 'perceived'. Unless you think you have all the evidence.

    Some probabilities are objective, inherent properties of bits of the universe, and the universe does have all the evidence. The coin possesses an actual probability independent of what anyone knows or believes about it.

    if the vast majority of the measure of possible worlds given Bob's knowledge is in worlds where he loses, he's objectively wrong.

    That's a self-consistent system, it just seems to me more useful and intuitive to say that:

    "P" is true => P
    "Bob believes P" is true => Bob believes P

    but not

    "Bob's belief in P" is true => ...er, what exactly?

    Also, I frequently need to attach probabilities to facts, where probability goes from [0,1] (or, in Eliezer's formulation, (-inf, inf)). But it's rare for me to have any reason to att... (read more)

    I second tabooing probability, but I think that we need more than two words to replace it. Casually, I think that we need, at the least, 'quantum measure', 'calibrated confidence', and 'justified confidence'. Typically we have been in the habit of calling both "Bayesian", but they are very different. Actual humans can try to be better approximations of Bayesians, but we can't be very close. Since we can't be Bayesian, due to our lack of logical omniscience, we can't avoid making stupid bets and being Dutch Booked by smarter minds. It's there... (read more)

    just fyi, there's no such thing as the 'eldest' of two boys; there's just an elder and a younger. superlatives are reserved for groups of three or more.

    as i'm a midget among giants here, i'm afraid that's all i have to add. :)

    Enginerd: The uncertainty inherent in determining a pair of conjugate variables - such as the length and pitch of a sound - is indeed a core part of QM, but it is not probabilistic. In this case, the term "uncertainty" is not about probabilities, even if QM is probabilistic in general; rather, it is a consequence of describing states in terms of wave functions, which can be interpreted probabilistically. This causes many to mistakenly think that Heisenberg's "Uncertainty Principle" is the probabilistic part of QM. As Wikipedia[1] puts it: "... (read more)

    You're equating perceived probability with physical probability, and that equation is false; the error shows up whenever you or anyone else ignores the distinction between them.

    However, your whole argument depends on a deterministic universe. Research quantum mechanics; we can't really say that we have a deterministic universe, and physics itself can only assign a probability at a certain point.

    @Daniel:

    You're attacking the wrong argument. Just look up the electron double-slit experiment. (http://en.wikipedia.org/wiki/Double-slit_experiment) It's not only about the observer effect, but about how the probability that you say doesn't exist causes interference to occur unless an observer is present. The observer is the one who collapses the probability wave down to a deterministic bayesian value.

    It sounds like both you and the author of this blog do not understand Schrodinger's cat.

    Let me further explain my point. Someone earlier said that reality only takes one path. Unless an observer is present, the electron double slit experiment proves that this assumption is false.

    Welcome to Overcoming Bias, anon! Try to avoid triple-posting. The author of this post has actually just written a series on quantum mechanics, which begins with "Quantum Explanations." He argues forcefully for a many-worlds interpretation, which is deterministic "from the standpoint of eternity," although not for any particular observer due to indexical uncertainty. (You might say that, yes, reality does not take only one path, but it might as well have, because neither do observers!)

    @Z. M. Davis

    Thanks for the welcome. While I disagree with the etiquette, I'll try to follow it. A three post limit serves only to stifle discussion; there are other ways to deal with abusive posters than limiting the abilities of non-abusive posters. Also, I'm pretty sure my comment is still valid, relevant, and an addition to the discussion, regardless of whether I posted it now or a couple hours ago.

    Back to the many worlds approach, as an individual observer of the universe myself, it seems to me that attempting to look at the universe "from the sta... (read more)

    That the probability assigned to flipping a coin depends on what the assigner knows doesn't prove probability's subjectivity, only that probability isn't an objective property of the coin. Rather, if the probability is objective, it must be a property of a system, including the throwing mechanism. Two other problems with Eliezer's argument: 1) Rejecting objective interpretations of probability in empirical science because, in everyday usage, probability is relative to what's known, is to provide an a priori refutation of indeterminism, reasoning which do... (read more)

    Stephen R. Diamond, there are two distinct things in play here: (i) an assessment of the plausibility of certain statements conditional on some background knowledge; and (ii) the relative frequency of outcomes of trials in a counterfactual world in which the number of trials is very large. You've declared that probability can't be (i) because it's (ii) -- actually, the Kolmogorov axioms apply to both. Justification for using the word "probability" to refer to things of type (i) can be found in the first two chapters of this book. I personally cal... (read more)

    Like Butters in that South Park episode, I can't help after all these posts but to notice that I am confused.

    "Renormalizing leaves us with a 1/3 probability of two boys, and a 2/3 probability of one boy one girl." Help me with this one, I'm a n00b. If one of the kids is known to be a boy (given information), then doesn't the other one have 50/50 chances to be either a boy or a girl? And doesn't the couple of kids then have 50/50 chances to be either a pair of boys or one boy one girl?

    0 · Morendil · 13y
    That's not the given; it is that "at least one of the two is a boy". Different meaning. For me, the best way to get to understand this kind of exercise intuitively is to make a table of all the possibilities. So two kids (first+second) could be: B+B, B+G, G+B, G+G. Each of those is equiprobable, so since there are four, each has 1/4 of the probability. Now you remove G+G from the table since "at least one of the two is a boy". You're left with three: B+B, B+G, G+B. Each of those three is still equiprobable, so since there are three each has 1/3 of the total.
    0 · matteri · 13y
    And in the hope of clarifying for those still confused about why the answer to the other question - "is your eldest/youngest child a boy?" - is different: if you get a 'yes' to this question, you eliminate one of the two orderings that "a boy and a girl" allows - that the boy was born first (B+G), or that the girl was born first (G+B). Only one of those will remain, together with B+B.
    0 · prase · 13y
    This sort of problem is often easier to understand when modified to make the probabilities more different. E.g. suppose ten children and the information that at least nine of them are boys. The incorrect reasoning leads to a 1/2 probability of ten boys, while actually the probability is only 1/11. You can even write a program which generates a sequence of ten binary values, 0 for a boy and 1 for a girl, then prompts you whenever it encounters at least nine zeros, and compare the relative frequencies. If the generated binary numbers are converted to decimals, it means that you generate an integer between 0 and 1023, and get prompted whenever the number is a power of 2, which corresponds to 9 boys and 1 girl (10 possible cases), or zero, which corresponds to 10 boys (1 case only). Such a modification works well as an intuition pump in the case of the Monty Hall problem; maybe it is not so illustrative here. But Monty Hall is isomorphic to this one.
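The program prase describes might be sketched like this (counting rather than prompting, with simulation in place of exact enumeration):

```python
import random

random.seed(0)

at_least_nine_boys = 0
all_ten_boys = 0
for _ in range(1_000_000):
    kids = [random.randint(0, 1) for _ in range(10)]  # 0 = boy, 1 = girl
    if sum(kids) <= 1:            # at least nine boys
        at_least_nine_boys += 1
        if sum(kids) == 0:        # all ten are boys
            all_ten_boys += 1

print(all_ten_boys / at_least_nine_boys)  # close to 1/11, not 1/2
```

Of the 11 equiprobable sequences with at least nine boys (ten with one girl, one with none), only one is all boys, so the conditional frequency comes out near 1/11.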

    Conrad wrote:

    ps - Ofc, knowing, or even just suspecting, the coin is rigged, on the second throw you'd best bet on a repeat of the outcome of the first.

    I think it would be worthwhile to examine this conclusion - as it might seem to be an obvious one to a lot of people. Let us assume that there is a very good mechanical arm that makes a completely fair toss of the coin in the opinion of all humans so that we can talk entirely about the bias of the coin.

    Let's say that the mechanism makes one toss; all you know is that the coin is biased - not how. Assume... (read more)

    5 · Alicorn · 13y
    It is not necessary to know the exact bias to enact the following reasoning: "Coins can be rigged to display one face more than the other. If this coin is rigged in this way, then the face I have seen is more likely than the other to be the favored side. If the coin is not rigged in this way, it is probably fair, in which case the side I saw last time is equally likely to come up next by chance. It is therefore a better bet to expect a repeat." Key phrase: judgment under uncertainty.
    -1 · matteri · 13y
    I am not arguing against betting on the side that showed up in the first toss. What is interesting though is that even under those strict conditions, if you don't know the bias beforehand, you never will. Considering this; how could anyone ever argue that there are known probabilities in the world where no such strict conditions apply?
    0 · Alicorn · 13y
    Your definition of "know" is wrong.
    0 · matteri · 13y
    Very well, I could have phrased it in a better way. Let me try again; and let's hope I am not mistaken. Considering that even if there is such a thing as an objective probability, it can be shown that such information is impossible to acquire (impossible to falsify); how could it be anything but religion to believe in such a thing?
    0 · Alicorn · 13y
    See here.
    2 · soreff · 13y
    This seems like it is asking too much of the results of the coin tosses. Given some prior for the probability distribution of biased coins, each toss result updates the probability distribution. Given a prior probability distribution which isn't too extreme (e.g. no zeros in the distribution), after enough toss results, the posterior distribution will narrow towards the observed frequencies of heads and tails. Yes, at no point is the exact bias known. The distribution doesn't narrow to a delta function with a finite number of observations. So?

    "Or here's a very similar problem: Let's say I have four cards, the ace of hearts, the ace of spades, the two of hearts, and the two of spades. I draw two cards at random. You ask me, "Are you holding at least one ace?" and I reply "Yes." What is the probability that I am holding a pair of aces? It is 1/5. There are six possible combinations of two cards, with equal prior probability, and you have just eliminated the possibility that I am holding a pair of twos. Of the five remaining combinations, only one combination is a p... (read more)

    0 · thomblake · 13y
    The standard way of quoting is to use a single greater-than sign (>) before the paragraph, and then leave a blank line before your response. Note the 'Help' link below the comment editing box.
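The four-card problem quoted above is small enough to check by brute-force enumeration:

```python
from fractions import Fraction
from itertools import combinations

cards = ["AH", "AS", "2H", "2S"]       # aces and twos of hearts and spades
hands = list(combinations(cards, 2))   # six equally likely two-card hands

# Keep only hands consistent with the truthful answer "yes, at least one ace".
with_ace = [h for h in hands if any(c[0] == "A" for c in h)]
pair_of_aces = [h for h in with_ace if all(c[0] == "A" for c in h)]

print(Fraction(len(pair_of_aces), len(with_ace)))  # 1/5
```

The answer "yes" eliminates only the two-twos hand, leaving five hands, exactly one of which is the pair of aces.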

    The unpredictability of a die roll or coin flip is not due to any inherent physical property of the objects; it is simply due to lack of information. Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.

    That is quite debatable. For one thing, it is possible for quantum indeterminism, if there is any, to leak into the macroscopic world. Even if it were not possible, there is still the issue of microscopic indeterminism. You cannot pro... (read more)

    Hate to be a stickler for this sort of thing, but even in the bayesian interpretation there are probabilities in the world; it's just that they are facts about the world and the knowledge the agents have of the world in combination. It's a fact that a perfect bayesian given P(b), P(a|b), and P(a|~b) will ascribe to P(b|a) a probability of P(a|b)P(b) / P(a), where P(a) = P(a|b)P(b) + P(a|~b)P(~b), and that that is the best value to give P(b|a).

    If an agent has perfect knowledge then it need not ascribe any non-1 probability to any proposition it holds. But it is a fact about agents in the world tha... (read more)

    1 · wedrifid · 13y
    You may appreciate Probability is Subjectively Objective. It's the followup to this post and happens to be my favorite post on lesswrong!
    0 · Ronny Fernandez · 13y
    I can see why it is your favorite post. It's also extremely relevant to the position I expressed in my post, thank you. But I'm not sure that I can't hold my position above while being an objectively-subjective bayesian; I'll retract my post if I find that I can't.
    0 · wedrifid · 13y
    My impression was not that you would be persuaded to retract but that you'd feel vindicated. The positions are approximately the same (with slightly different labels attached). I don't think I disagree with you at all.

    Does this mean that there is nothing that is inherently uncertain? I guess another way to put that would be, could Laplace's Demon infer the entire history of the universe back to front from a single moment? It might seem obvious that there are singularities moving backwards through time (i.e. processes whose result does not give you information about their origin), so couldn't the same thing exist moving forward through time?

    Anyway, great article!

    My first post, so be gentle. :)

    I disagree that there is a difference between "Bayesian" and "Frequentist;" or at least, that it has anything to do with what is mentioned in this article. The field of Probability has the unfortunate property of appearing to be a very simple, well defined topic. But it actually is complex enough to be indefinable. Those labels are used by people who want to argue in favor of one definition - of the indefinable - over another. The only difference I see is where they fail to completely address a problem.

    Tak... (read more)

    3 · [anonymous] · 13y
    I can't speak for the rest of your post, but that part is pretty clearly wrong. (In fact, it looks a lot like you're establishing a prior distribution, and that's uniquely a Bayesian feature.) The probability of an event (the result of the flip is surely an event, though I can't tell if you're claiming to the contrary or not) is, to a frequentist, the limit of the proportion of times the event occurred in independent trials as the number of trials tends to infinity. The probability the coin landed on heads is the one thing in the problem statement that can't be 1/2, because we know that the coin is biased. Your calculation above seems mostly ad hoc, as is your introduction of additional random variables elsewhere. However, I'm not a statistician.
    0 · nshepperd · 13y
    I think they are arguing that the "independent trials" that are happening here are instances of "being given a 'randomly' biased coin and seeing if a single flip turns up heads". But of course the techniques they are using are bayesian, because I'd expect a frequentist to say at this point "well, I don't know who's giving me the coins, how am I supposed to know the probability distribution for the coins?".
    0 · JeffJo · 13y
    The random process a frequentist should repeat is flipping a random biased coin, and getting a random bias b and either heads or tails. You are assuming it is flipping the *same* biased coin with fixed bias B, and getting heads or tails. The probability that a random biased coin lands heads is 1/2, from either point of view. And for nshepperd, the point is that a Frequentist doesn't need to know what the bias is. As long as we can't assume it is different for b1 and 1-b1, when you integrate over the unknown distribution (yes, you can do that in this case) the answer is 1/2.
    0 · JeffJo · 12y
    Say a bag contains 100 unique coins that have been carefully tuned to be unfair when flipped. Each is stamped with an integer in the range 0 to 100 (50 is missing) representing its probability, in percent, of landing on heads. A single coin is withdrawn without revealing its number, and flipped. What is the probability that the result will be heads?

    You are claiming that anybody who calls himself a Frequentist needs to know the number on the coin to answer this question, and that any attempt to represent the probability of drawing coin N is specifying a prior distribution, an act that is strictly prohibited for a Frequentist. Both claims are absurd. Prior distributions are a fact of the mathematics of probability, and belong to Frequentist and Bayesian alike. The only differences are (1) the Bayesian may use information differently to determine a prior, sometimes in situations where a Frequentist wouldn't see one at all; (2) the Bayesian will prefer solutions based explicitly on that prior, while the Frequentist will prefer solutions based on how the prior affects repeated experiments; and (3) some Frequentists might not realize when they have enough information to determine a prior, and/or its effects, that should satisfy them. If both get answers, and they don't agree, somebody did something wrong.

    The answer is 50%. The Bayesian says that, based on available information, neither result can be favored over the other, so they must both have probability 50%. The Frequentist says that if you repeat the experiment 100^2 times, including the part where you draw a coin from the bag of 100 coins, you should count on getting each coin 100 times. And you should also count, for each coin, on getting heads in proportion to its probability. That way, you will count 5,000 heads in 10,000 trials, making the answer 50%. Both solutions are based on the same facts and assumptions, just organized differently.

    The answer Eliezer_Yudkowsky attributes to Frequentists, for the s... (read more)
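The frequentist bookkeeping in the comment above can be written out directly, with exact arithmetic rather than simulation:

```python
from fractions import Fraction

# The bag: coins stamped 0..100 percent, with 50 missing.
stamps = [n for n in range(101) if n != 50]   # 100 coins

# Draw each coin 100 times; coin n is expected to land heads n times per 100 flips.
total_flips = 100 * len(stamps)               # 10,000 trials
expected_heads = sum(stamps)                  # 5,000 heads

print(Fraction(expected_heads, total_flips))  # 1/2
```

The sum 0 + 1 + ... + 100 is 5050; removing the missing 50-coin leaves 5,000 heads in 10,000 flips, i.e. exactly 1/2.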

    I used to be a frequentist, and would have said that the probability of the unfair coin landing heads is either 4/5 or 1/5, I just didn't know which. But that is not to say that I saw probabilities as attached to things instead of to information. I'll explain.

    If someone asked me whether it will rain tomorrow, I would ask which information I am supposed to use. Whether it rained in the past few days? Or should I consider tomorrow a random day and pick the frequency of rainy days in the year? Or maybe I should consider the season we are in. Or am I supposed to use all available i... (read more)

    Thinking of probabilities as levels of uncertainty became very obvious to me when thinking about the Monty Hall problem. After the host has revealed that one of the three doors has a booby prize behind it, you're left with two doors, with a good prize behind one of them.

    If someone walks into the room at that stage, and you tell them that there's a good prize behind one door and a booby prize behind another, they will say that it's a 50/50 chance of selecting the door with the prize behind it. They're right for themselves, however the person who had been... (read more)
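That asymmetry between the newcomer and the original contestant is easy to see in a simulation (a sketch; the host's tie-break when he can open either losing door is fixed arbitrarily, which doesn't change the long-run frequencies):

```python
import random

random.seed(0)

trials = 100_000
stay_wins = switch_wins = 0
for _ in range(trials):
    prize = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a door that is neither the pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    switched = next(d for d in range(3) if d != pick and d != opened)
    stay_wins += (pick == prize)
    switch_wins += (switched == prize)

print(stay_wins / trials, switch_wins / trials)  # roughly 1/3 and 2/3
```

The contestant's extra information (which door the host avoided) is exactly what pushes their odds from the newcomer's 50/50 to 1/3 vs. 2/3.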

    I'm sorry, why isn't the prior probability that you say "why yes, I am holding the ace of spades" = 1/4?

    Edit: unless you meant "draw a pair", in which case yes, the ace of spades would show up in three out of six possible pairings.

    Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong. Maybe you're holding the coin in such a way that it's just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you. But, if you don't know which way the coin is biased on this one occasion, so what?

    Maybe it isn't really 50%, and it isn't really 100% how-it-came up either. That it is rational to make estimates based on our own ignorance is not proof that the universe... (read more)

    So, I've been on this site for awhile. When I first came here, I had never had a formal introduction to Bayes' theorem, but it sounded a lot like ideas that I had independently worked out in my high school and college days (I was something of an amateur mathematician and game theorist).

    A few days ago I was reading through one of your articles - I don't remember which one - and it suddenly struck me that I may not actually understand priors as well as I think I do.

    After re-reading some of the series, and then working through the math, I'm now reasonably con... (read more)

    -2 · MugaSofer · 11y
    Well, without a sense that can detect color, it would just be an arbitrary undetectable property something might have, right? So it would be ... dependent on what other objects B9 is aware of, I think. The precise hypothesis that "all [objects that we know are blue] share a common property I cannot perceive with this camera" should be highly conjunctive, and its prior therefore low, unless B9 has observed humans reacting to them because of their coloration. And even then, "blue" would be defined only in terms of what other objects have it, not a specific input type from the camera. I suspect I'm missing the point of this question, somehow.
    -2 · Kindly · 11y
    Without knowing anything about what "blue" is? I'd say 1/2.
    3 · wuncidunci · 11y
    Your question is not well specified. Even though you might think that the proposition "its favorite ball is blue" has a clear meaning, it is highly dependent on the precision to which it will be able to see colours, how wide the interval defined as blue is, and how it considers multicoloured objects. If we suppose it would categorise the observed wavelength into one of 27 possible colours (one of those being blue), and further suppose that it knew the ball to be of a single colour and not patterned, and further that it did not have any background information about the relative frequencies of different colours of balls or other useful prior knowledge, the prior probability would be 1/27. If we suppose that it had access to the internet and had read this discussion on LW about the colourblind AI, it would increase its probability by doing an update based on the probability of this affecting the colour of its own ball.
    -2 · TheOtherDave · 11y
    I don't claim to be any kind of Bayesian expert here, but, well, I seem to be replying anyway. Don't take my reply too seriously. B9 has never heard of "colors". I take that to mean, not only that nobody has used that particular word to B9, but that B9 has been exposed to no inputs that significantly depend on it... e.g., nobody has talked about whether their shirts match their pants, nobody has talked about spectroscopic analysis of starlight or about the mechanism of action of chlorophyll or etc... that B9 has no evidentiary basis from which to draw conclusions about color. (That is, B9 is the anti-Mary.) Given those assumptions, a universal prior is appropriate... 50% chance that "My ball is blue" is true, 50% chance that it's false. If those assumptions aren't quite true, and B9 has some information that usefully pertains, however indirectly, to the color of the ball, then insofar as that information is evidence one way or another, B9 ideally updates that probability accordingly.
    5 · Kawoomba · 11y
    You and Kindly both? Very surprising. Consider you as B9, reading on the internet about some new and independent property of items, "bamboozle-ness". Should you now believe that P("My monitor is bamboozled") = 0.5? That it is as likely that your monitor is bamboozled as that it's not bamboozled? If I offered you a bet of 100 big currency units, if it turns out your monitor was bamboozled, you'd win triple! Or 50x! Wouldn't you accept, based on your "well, 50% chance of winning" assessment? Am I bamboozled? Are you bamboozled? Notice that B9 has even less reason to believe in colors than you have to believe in bamboozleness in the example above - it hasn't even read about them on the internet. Instead of assigning 50-50 odds, you'd have to take the part of the probability space which represents "my belief in models other than my main model", identify the minuscule prior for that specific model containing "colors", or "bamboozleness", then calculate, assuming that model, the odds of blue versus not-blue, then weigh back in the uncertainty from such an arbitrary model being true as against your standard model.
    3TheOtherDave11y
Given the following propositions:
(P1) "My monitor is bamboozled."
(P2) "My monitor is not bamboozled."
(P3) "'My monitor is bamboozled' is not the sort of statement that has a binary truth value; monitors are neither bamboozled nor non-bamboozled."
...and knowing nothing at all about bamboozledness, never even having heard the word before, it seems I ought to assign high probability to P3 (since it's true of most statements that it's possible to construct) and consequently low probabilities to P1 and P2. But when I read about bamboozledness on the Internet (or am asked whether my ball is blue), my confidence in P3 seems to go down pretty quickly, based on my experience with people talking about stuff. (Which among other things suggests that my prior for P3 wasn't all that high.) Having become convinced of NOT(P3) (despite still knowing nothing much about bamboozledness other than that it's the sort of thing people talk about on the Internet), if I have very low confidence in P1, I have very high confidence in P2. If I have very low confidence in P2, I have very high confidence in P1. Very high confidence in either proposition seems unjustifiable... indeed, a lower probability for P1 than P2 or vice versa seems unjustifiable... so I conclude 50%. If I'm wrong to do so, it seems I'm wrong to reduce my confidence in P3 in the first place. Which I guess is possible, though I do seem to do it quite naturally. But given NOT(P3), I genuinely don't see why I should believe P(P2) > P(P1). Just to be clear: you're offering me (300BCUs if P1, -100BCUs if P2)? And you're suggesting I shouldn't take that bet, because P(P2) >> P(P1)? It seems to follow from that reasoning that I ought to take (300BCUs if P2, -100BCUs if P1). Would you suggest I take that bet? Anyway, to answer your question: I wouldn't take either bet if offered, because of game-theoretical considerations... that is, the moment you offer me the bet, that's evidence that you
    0Vaniver11y
I don't think the logic in this part follows. Some of it looks like a precision issue: it's not clear to me that P1, P2, and P3 are mutually exclusive. What about cases where 'my monitor is bamboozled' and 'my monitor is not bamboozled' are both true, like sets that are both closed and open? Later, it looks like you want P3 to be the reverse of what you have written; there it looks like you want P3 to be the proposition that it is a well-formed statement with a binary truth value.
    2TheOtherDave11y
    Blech; you're right, I incompletely transitioned from an earlier formulation and didn't shift signs all the way through. I think I fixed it now. Your larger point about (p1 and p2) being just as plausible a priori is certainly true, and you're right that makes "and consequently low probabilities to P1 and P2" not follow from a properly constructed version of P3. I'm not sure that makes a difference, though perhaps it does. It still seems that P(P1) > P(P2) is no more likely, given complete ignorance of the referent for "bamboozle", than P(P1) < P(P2)... and it still seems that knowing that otherwise sane people talk about whether monitors are bamboozled or not quickly makes me confident that P(P1 XOR P2) >> P((P1 AND P2) OR NOT(P1 OR P2))... though perhaps it ought not do so.
    0Kawoomba11y
Let's lift the veil: "bamboozledness" is a placeholder for ... phlogiston (a la "contains more than 30ppm phlogiston" = "bamboozled"). Looks like you now assign a probability of 0.5 to phlogiston, in your monitor, no less. (No fair? It could also have been something meaningful, but in the 'blue balls' scenario we're asking for the prior of a concept which you've never even seen mentioned as such (and hopefully never experienced) - what are the chances that a randomly picked concept is a sensible addition to your current world view?) That's the missing ingredient, the improbability of a hitherto unknown concept belonging to a sensible model of reality: P("Monitor contains phlogiston" | "phlogiston is the correct theory" Λ "I have no clue about the theory other than it being correct and wouldn't know the first thing of how to guess what contains phlogiston") could be around 0.5 (although not necessarily exactly 0.5, based on complexity considerations). However, what you're faced with isn't "... given that colors exist", "... given that bamboozledness exists", "... given that phlogiston exists" (in each case, 'that the model which contains concepts corresponding to the aforementioned corresponds to reality'), it is simply "what is the chance that there is phlogiston in your computer?" (Wait, now it's in my computer too! Not only my monitor?) Since you have no (little - 'read about it on the internet') reason to assume that phlogiston / blue is anything meaningful, and especially given that in the scenario you aren't even asked about the color of a ball, but simply for the prior which relies upon the unknown concept of 'blue' which corresponds to some physical property which isn't a part of your current model, any option which contains "phlogiston is nonsense"/"blue is nonsense", in the form of "monitor does not contain phlogiston", "ball is not blue", is vastly favored. I posed the bet to show that you wouldn't actually assign a 0.5 probability to a randomly picked con
    0TheOtherDave11y
    Well, meaningfulness is the crux, yes. As I said initially, when I read about bamboozledness on the Internet (or am asked whether my ball is blue), my confidence seems to grow pretty quickly that the word isn't just gibberish... that there is some attribute to which the word refers, such that (P1 XOR P2) is true. When I listen to a conversation about bamboozled computers, I seem to generally accept the premise that bamboozled computers are possible pretty quickly, even if I haven't the foggiest clue what a bamboozled computer (or monitor, or ball, or hot thing, or whatever) is. It would surprise me if this were uncommon. And, sure, perhaps I ought to be more skeptical about the premise that people are talking about anything meaningful at all. (I'm not certain of this, but there's certainly precedent for it.) Here's where you lose me. I don't see how an option can contain "X is nonsense" in the form of "monitor does not contain X". If X is nonsense, "monitor does not contain X" isn't true. "monitor contains X" isn't true either. That's kind of what it means for X to be nonsense. I'm not sure. The question that seems important here is "how confident am I, about that new attribute X, that a system either has X or lacks X but doesn't do both or neither?" Which seems to map pretty closely to "how confident am I that 'X' is meaningful?" Which may be equivalent to your formulation, but if so I don't follow the equivalence. (nods) As I said in the first place, if I eliminate the game-theoretical concerns, and I am confident that "bamboozled" isn't just meaningless gibberish, then I'll take either bet if offered.
    0Kawoomba11y
You're just trying to find out whether X is binary, then - if it is binary - you'd assign even odds, in the absence of any other information. However, it's not enough for "blue" - "not blue" to be established as a binary attribute; we also need to weigh in the chances of the semantic content (the definition of 'blue', unknown to us at that time) corresponding to any physical attributes. Binarity isn't the same as "describes a concept which translates to reality". When you say meaningful, you (I think) refer to the former, while I refer to the latter. With 'nonsense' I didn't mean 'non-binary', but instead 'if you had the actual definition of the color attribute, you'd find that it probably doesn't correspond to any meaningful property of the world, and as such not having the property is vastly more likely' - which would be "ball isn't blue (because nothing is blue; blue is e.g. about having blue-quarks, which don't model reality)".
    1TheOtherDave11y
    I'll accept that in general. In this context, I fail to understand what is entailed by that supposed difference. Put another way: I fail to understand how "X"/"not X" can be a binary attribute of a physical system (a ball, a monitor, whatever) if X doesn't correspond to a physical attribute, or a "concept which translates to reality". Can you give me an example of such an X? Put yet another way: if there's no translation of X to reality, if there's no physical attribute to which X corresponds, then it seems to me neither "X" nor "not X" can be true or meaningful. What in the world could they possibly mean? What evidence would compel confidence in one proposition or the other? Looked at yet a different way... case 1: I am confident phlogiston doesn't exist. I am confident of this because of evidence related to how friction works, how combustion works, because burning things can cause their mass to increase, for various other reasons. (P1) "My stove has phlogiston" is meaningful -- for example, I know what it would be to test for its truth or falsehood -- and based on other evidence I am confident it's false. (P2) "My stove has no phlogiston" is meaningful, and based on other evidence I am confident it's true. If you remove all my evidence for the truth or falsehood of P1/P2, but somehow preserve my confidence in the meaningfulness of "phlogiston", you seem to be saying that my P(P1) << P(P2). case 2: I am confident photons exist. Similarly to P1/P2, I'm confident that P3 ("My lightbulb generates photons") is true, and P4 ("My lightbulb generates no photons") is false, and "photon" is meaningful. Remove my evidence for P3/P4 but preserve my confidence in the meaningfulness of "photon", should my P(P3) << P(P4)? Or should my P(P3) >> P(P4)? I don't see any grounds for justifying either. Do you?
    0Kawoomba11y
Yes. P1 also entails that phlogiston theory is an accurate descriptor of reality - after all, it is saying your stove has phlogiston. P2 does not entail that phlogiston theory is an accurate descriptor of reality. Rejecting that your stove contains phlogiston can be done on the basis of "chances are nothing contains phlogiston; not knowing anything about phlogiston theory, it's probably not real, duh", which is why P(P2)>>P(P1). The same applies to case 2: knowing nothing about photons, you should always go with the proposition (in this case P4) which is also supported by "photons are an imaginary concept with no equivalent in reality". For P3 to be correct, photons must have some physical equivalent on the territory level, so that anything (e.g. your lightbulb) can produce photons in the first place. For a randomly picked concept (not picked out of a physics textbook), the chances of that are negligible. Take some random concept, such as "there are 17 kinds of quark; if something contains the 13th quark - the blue quark - we call it 'blue'". Then affirming it is blue entails affirming the 17-kinds-of-quark theory (quite the burden, knowing nothing about its veracity), while saying "it is not blue = it does not contain the 13th quark, because the 17-kinds-of-quark theory does not describe our reality" is the much favored default case. A not-yet-considered randomly chosen concept (phlogiston, photons) does not have 50-50 odds of accurately describing reality; its odds of doing so - given no evidence - are vanishingly small. That translates to P("stove contains phlogiston") being much smaller than P("stove does not contain phlogiston"). Reason (rephrasing the above argument): rejecting phlogiston theory as an accurate map of the territory strengthens your "stove does not contain phlogiston (... because phlogiston theory is probably not an accurate map, knowing nothing about it)" even if P("stove contains phlogiston given phlogiston theory describes reality")
    4TheOtherDave11y
    I agree that if "my stove does not contain X" is a meaningful and accurate thing to say even when X has no extension into the real world at all, then P("my stove does not contain X") >>> P("my stove contains X") for an arbitrarily selected concept X, since most arbitrarily selected concepts have no extension into the real world. I am not nearly as convinced as you sound that "my stove does not contain X" is a meaningful and accurate thing to say even when X has no extension into the real world at all, but I'm not sure there's anything more to say about that than we've already said. Also, thinking about it, I suspect I'm overly prone to assuming that X has some extension into the real world when I hear people talking about X.
    1Kawoomba11y
    I'm glad we found common ground. Consider e.g. "There is not a magical garden gnome living under my floor", "I don't emit telepathic brain waves" or "There is no Superman-like alien on our planet", which to me all are meaningful and accurate, even if they all contain concepts which do not (as far as we know) extend into the real world. Can an atheist not meaningfully say that "I don't have a soul"? If I adopted your point of view (i.e. talking about magical garden gnomes living or not living under my floor makes no (very little) sense either way since they (probably) cannot exist), then my confidence for or against such a proposition would be equal but very low (no 50% in that case either). Except if, as you say, you're assigning a very high degree of belief into "concept extends into the real world" as soon as you hear someone talk about it. "This is a property which I know nothing about but of which I am certain that it can apply to reality" is the only scenario in which you could argue for a belief of 0.5. It is not the scenario of the original post.
    3TheOtherDave11y
    The more I think about this, the clearer it becomes that I'm getting my labels confused with my referents and consequently taking it way too much for granted that anything real is being talked about at all. "Given that some monitors are bamboozled (and no other knowledge), is my monitor bamboozled?" isn't the same question as "Given that "bamboozled" is a set of phonemes (and no other knowledge), is "my monitor is bamboozled" true?" or even "Given that English speakers sometimes talk about monitors being bamboozled (ibid), is my monitor bamboozled?" and, as you say, neither the original blue-ball case nor the bamboozled-computer case is remotely like the first question. So, yeah: you're right, I'm wrong. Thanks for your patience.
    0Watercressed11y
    That depends on the knowledge that the AI has. If B9 had deduced the existence of different light wavelengths, and knew how blue corresponded to a particular range, and how human eyes see stuff, the probability would be something close to the range of colors that would be considered blue divided by the range of all possible colors. If B9 has no idea what blue is, then it would depend on priors for how often statements end up being true when B9 doesn't know their meaning. Without knowing what B9's knowledge is, the problem is under-defined.
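Watercressed's wavelength-ratio estimate can be made concrete. A minimal sketch, assuming "blue" means roughly the 450-495 nm band and that the ball's color is uniformly distributed over the visible spectrum - both numbers are illustrative conventions, not anything B9 actually knows:

```python
# Assumptions: "blue" = 450-495 nm; ball color uniform over 380-750 nm.
BLUE_BAND = (450.0, 495.0)   # nm, assumed definition of "blue"
VISIBLE = (380.0, 750.0)     # nm, approximate visible spectrum

# Probability = width of the blue band / width of the whole color range.
p_blue = (BLUE_BAND[1] - BLUE_BAND[0]) / (VISIBLE[1] - VISIBLE[0])
print(round(p_blue, 3))  # 0.122
```

Under those made-up assumptions the answer is about 12%, nowhere near 50% - which illustrates the point that the number depends entirely on what B9 already knows.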

    Very low, because B9 has to hypothesize a causal framework involving colors without any way of observing anything but quantitatively varying luminosities. In other words, they must guess that they're looking at the average of three variables instead of at one variable. This may sound simple but there are many other hypotheses that could also be true, like two variables, four variables, or most likely of all, one variable. B9 will be surprised. This is right and proper. Most physics theories you make up with no evidence behind them will be wrong.

    4LoganStrohl11y
    I think I'm confused. We're talking about something that's never even heard of colors, so there shouldn't be anything in the mind of the robot related to "blue" in any way. This ought to be like the prior probability from your perspective that zorgumphs are wogle. Now that I've said the words, I suppose there's some very low probability that zorgumphs are wogle, since there's a probability that "zorgumph" refers to "cats" and "wogle" to "furry". But when you didn't even have those words in your head anywhere, how could there have been a prior? How could B9's prior be "very low" instead of "nonexistent"?
    6hairyfigment11y
    Eliezer seems to be substituting the actual meaning of "blue". Now, if we present the AI with the English statement and ask it to assign a probability...my first impulse is to say it should use a complexity/simplicity prior based on length. This might actually be correct, if shorter message-length corresponds to greater frequency of use. (ETA that you might not be able to distinguish words within the sentence, if faced with a claim in a totally alien language.)
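The length-based complexity prior hairyfigment mentions can be sketched in a few lines. The 8-bits-per-character encoding and the tiny candidate set are arbitrary assumptions for illustration, not a serious message-length model:

```python
# Hedged sketch of a complexity/simplicity prior based on message length:
# weight each candidate sentence by 2**(-bits needed to encode it), then
# normalize. The 8-bit-per-character encoding is an illustrative assumption.
sentences = [
    "my ball is blue",
    "my ball is bamboozled",
    "zorgumphs are wogle",
]

weights = {s: 2.0 ** (-8 * len(s)) for s in sentences}
total = sum(weights.values())
prior = {s: w / total for s, w in weights.items()}

# Shorter sentences get exponentially more of the prior mass.
assert prior["my ball is blue"] > prior["zorgumphs are wogle"]
```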
    0TheOtherDave11y
    Well, if nothing else, when I ask B9 "is your ball blue?", I'm only providing a finite amount of evidence thereby that "blue" refers to a property that balls can have or not have. So if B9's priors on "blue" referring to anything at all are vastly low, then B9 will continue to believe, even after being asked the question, that "blue" doesn't refer to anything. Which doesn't seem like terribly sensible behavior. That sets a floor on how low the prior on "'blue' is meaningful" can be.
    4ialdabaoth11y
    Thank you! This helps me hone in on a point that I am sorely confused on, which BrienneStrohl just illustrated nicely: You're stating that B9's prior that "the ball is blue" is 'very low', as opposed to {Null / NaN}. And that likewise, my prior that "zorgumphs are wogle" is 'very low', as opposed to {Null / NaN}. Does this mean that my belief system actually contains an uncountable infinitude of priors, one for each possible framing of each possible cluster of facts? Or, to put my first question more succinctly, what priors should I assign potential facts that my current gestalt assigns no semantic meaning to whatsoever?
    5Eliezer Yudkowsky11y
    "The ball is blue" only gets assigned a probability by your prior when "blue" is interpreted, not as a word that you don't understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn't previously know about, plus the one number you do know about. It's like imagining that there's a fifth force appearing in quark-quark interactions a la the "Alderson Drive". You don't need to have seen the fifth force for the hypothesis to be meaningful, so long as the hypothesis specifies how the causal force interacts with you. If you restrain yourself to only finite sets of physical laws of this sort, your prior will be over countably many causal models.
    2Watercressed11y
    Causal models are countable? Are irrational constants not part of causal models?
    4ThrustVectoring11y
There are only so many distinct states of experience, so yes, causal models are countable. The set of all causal models is a set of functions that map K n-valued past experiential states into L n-valued future experiential states. This is a monstrously huge set of functions, but still countable, so long as K is finite (L may even be countably infinite; with a countably infinite domain of past states, the function space would become uncountable). Note that this assumes that states of experience with zero discernible difference between them are the same thing - e.g., if you come up with the same predictions using the first million digits of sqrt(2) and the irrational number sqrt(2), then they're the same model.
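The finite-state version of this counting argument is easy to make concrete: the number of functions from K past states to L future states is L^K. The toy values of K and L below are assumptions purely for illustration:

```python
# Toy counting argument: with finitely many past states (K) and future
# states (L), the set of causal models - functions from past states to
# future states - is finite, hence trivially countable.
K = 4  # distinct past experiential states (assumed toy number)
L = 3  # distinct future experiential states (assumed toy number)

num_models = L ** K  # each of the K inputs maps to one of L outputs
print(num_models)  # 81
```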
    1Watercressed11y
    But the set of causal models is not the set of experience mappings. The model where things disappear after they cross the cosmological horizon is a different model than standard physics, even though they predict the same experiences. We can differentiate between them because Occam's Razor favors one over the other, and our experiences give us ample cause to trust Occam's Razor. At first glance, it seems this gives us enough to diagonalize models--1 meter outside the horizon is different from model one, two meters is different from model two... There might be a way to constrain this based on the models we can assign different probabilities to, given our knowledge and experience, which might get it down to countable numbers, but how to do it is not obvious to me.
    0Watercressed11y
    Er, now I see that Eliezer's post is discussing finite sets of physical laws, which rules out the cosmological horizon diagonalization. But, I think this causal models as function mapping fails in another way: we can't predict the n in n-valued future experiential states. Before the camera was switched, B9 would assign low probability to these high n-valued experiences. If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision. Since it can't put a bound on the number of values in the L states, the set is uncountable and so is the set of functions.
    0ThrustVectoring11y
What? Of course we can - it's much simpler to see with a computer program. Suppose you have M bits of state data. There are 2^M possible states of experience. What I mean by n-valued is that there is a certain discrete set of possible experiences. Arbitrary, yes. Unbounded, no. It's still bounded by the amount of physical memory it can use to represent state.
    0Watercressed11y
    In order to bound the states at a number n, it would need to assign probability zero to ever getting an upgrade allowing it to access log n bytes of memory. I don't know how this zero-probability assignment would be justified for any n--there's a non-zero probability that one's model of physics is completely wrong, and once that's gone, there's not much left to make something impossible.
    3Vaniver11y
    Note that a conversant AI will likely have a causal model of conversations, and so there are two distinct things going on here- both "what are my beliefs about words that I don't understand used in a sentence" and "what are my beliefs about physics I don't understand yet." This split is a potential source of confusion, and the conversational model is one reason why the betting argument for quantifying uncertainties meets serious resistance.
    6Eliezer Yudkowsky11y
    To me the conversational part of this seems way less complicated/interesting than the unknown causal models part - if I have any 'philosophical' confusion about how to treat unknown strings of English letters it is not obvious to me what it is.
    2Kawoomba11y
You can reserve some slice of your probability space for "here be dragons": the (1 - P("my current gestalt is correct")). Your countably many priors may fight over that real estate. Also, if you demand that your models be computable (a good assumption, because if they aren't we're eff'ed anyways), there'll never be an uncountable infinitude of priors.
    0CCC11y
It would be like asking whether or not the ball is supercalifragilisticexpialidocious. If B9 has recently been informed that 'blue' is a property, then the prior would be very low. Can balls even be blue? If balls can be blue, then what percentage of balls are blue? There is also the possibility that, if some balls can be blue, all balls are blue; so the probability distribution would have a very low mean but a very high standard deviation. Any further refinement requires B9 to obtain additional information: if informed that balls can be blue, the odds go up; if informed that some balls are blue, the odds go up further; if further informed that not all balls are blue, the standard deviation drops somewhat. If presented with the luminance formula, the odds may go up significantly (it can't be used to prove blueness, but it can be used to limit the number of possible colours the ball can be, based on the output of the black-and-white camera).
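One hedged way to picture a "very low mean but very high standard deviation" prior over the fraction of blue balls is a Beta distribution. The Beta(0.1, 1.0) shape parameters below are an arbitrary illustrative choice, nothing canonical:

```python
from math import sqrt

# A Beta(a, b) prior over the fraction of balls that are blue.
# a = 0.1, b = 1.0 are assumed values chosen only to give a low mean
# with a standard deviation larger than that mean.
a, b = 0.1, 1.0
mean = a / (a + b)
std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
print(round(mean, 3), round(std, 3))  # 0.091 0.198
```

Here the standard deviation (~0.20) exceeds the mean (~0.09), matching the "low mean, high spread" shape described above.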
    1ThrustVectoring11y
I'd go down a level of abstraction about the camera in order to answer this question. You have a list of numbers, and you're told that five seconds from now this list of numbers is going to be replaced with a list of triplets, with the property that the average of each triplet is the same as the corresponding number in the list. What is the probability you assign to "one of these triplets is within a certain range of RGB values"?
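This framing can be simulated directly. A Monte Carlo sketch, where "looks blue" is taken - purely as an illustrative assumption - to mean the third channel of the triplet is strictly largest:

```python
import random

# A grayscale value v is replaced by an RGB triplet (r, g, b) whose mean
# equals v. Sample such triplets uniformly and estimate the chance the
# b channel dominates (an assumed stand-in for "looks blue").
def sample_triplet(v, rng):
    """Rejection-sample a triplet in [0, 255]^3 whose mean is exactly v."""
    while True:
        r = rng.uniform(0, 255)
        g = rng.uniform(0, 255)
        b = 3 * v - r - g          # forced by the mean constraint
        if 0 <= b <= 255:
            return r, g, b

rng = random.Random(0)
v, trials, hits = 128, 20_000, 0
for _ in range(trials):
    r, g, b = sample_triplet(v, rng)
    if b > r and b > g:
        hits += 1

# By symmetry of the three channels, this hovers around 1/3.
print(hits / trials)
```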
    0JeffJo11y
Since this discussion was reopened, I've spent some time - mostly while jogging - pondering and refining my stance on the points expressed. I just got around to writing them down. Since there is no other way to do it, I'll present them boldly, apologizing in advance if I seem overly harsh. There is no such intention.
1) "Accursed Frequentists" and "Self-righteous Bayesians" alike are right, and wrong. Probability is in your knowledge - or rather, the lack thereof - of what is in the environment. Specifically, it is the measure of the ambiguity in the situation.
2) Nothing is truly random. If you know the exact shape of a coin, its exact weight distribution, exactly how it is held before flipping, exactly what forces are applied to flip it, the exact properties of the air and air currents it tumbles through, and exactly how long it is in the air before being caught in your open palm, then you can calculate - not predict - whether it will show Heads or Tails. Any lack in this knowledge leaves multiple possibilities open, which is the ambiguity.
3) Saying "the coin is biased" is saying that there is an inherent property, over all of the ambiguous ways you could hold the coin, the ambiguous forces you could use to flip it, the ambiguous air properties, and the ambiguous tumbling times, for it to land one way or another. (Its shape and weight are fixed, so they are unambiguous even if they are not known, and probably the source of this "inherent property.")
4) Your state of mind defines probability only in how you use it to define the ambiguities you are accounting for. Eliezer's frequentist is perfectly correct to say he needs to know the bias of this coin, since in his state of mind the ambiguity is what this biased coin will do. And Eliezer is also perfectly correct to say the actual bias is unimportant. His answer is 50%, since in his mind the ambiguity is what any biased coin would do. They are addressing different questions.
5) A simple change to the coin question pu
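The "ambiguity over all biased coins" view is easy to check numerically: if the bias itself is drawn from a prior symmetric around 0.5 (here uniform on [0, 1] - an assumption standing in for total ignorance of the bias), the predictive probability of heads averages out to 0.5, whatever any individual coin's bias is:

```python
import random

# Each trial: draw a fresh unknown bias, then flip that biased coin once.
# Averaged over the ambiguity in the bias, heads comes up half the time.
rng = random.Random(42)
trials = 100_000
heads = 0
for _ in range(trials):
    bias = rng.uniform(0.0, 1.0)   # symmetric ignorance over the bias
    if rng.random() < bias:        # one flip of that biased coin
        heads += 1

print(heads / trials)  # hovers around 0.5
```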

    This was a very difficult concept for me, Eliezer. Not because I disagree with the Bayesian principle that uncertainty is in the mind, but because I lacked the inferential step to jump from that to why there were different probabilities depending on the question you asked.

Might a better (or additional) way to explain this be to point out an analogy to the differing probabilities of truth you might assign to a confirmed experimental hypothesis that was either originally vague, and therefore carries less weight when adjusting the overall probability of truth, or originally specific, and therefore shifts the probability of truth further?

    Hopefully I'm actually understanding this correctly at all.

The problem with trying to split this into "it must be the oldest child who is the boy" or "it must be the youngest child who is the boy" is that the two situations overlap. You need to split the situation into oldest, youngest, and both. If we made the ruling that "both" should be excluded, then we'd be able to complete the argument that there shouldn't be a difference between knowing that one child is a boy and knowing that the oldest child is a boy.

    I think that the main point of this is correct, but the definition of "mind" used by this phrase is unclear and might be flawed. I'm not certain, just speculating.

    As an aside, I think it is equivocation to talk about this kind of probability as being the same kind of probability that quantum mechanics leads to. No, hidden variable theories are not really worth considering.

    But projectivism has been written about for quite a long time (since at least the 1700s), and is very well known so I find it hard to believe that there are any significant proponents of 'frequentism' (as you call it).

    To those who've not thought about it, everyday projectivism comes naturally, but it falls apart at the slightest consideration.

    When it comes to Hempel's raven, though, even those who understand projectivism can have difficulty coming to terms with the probabilistic reality.

    I think I can show how probability is not purely in the mind but also an inherent property of things, bear with me.

Let's take the event of seeing snow outside. For simplicity, we know that snow is out there three months a year, in winter; that fact is well tested and repeats each year. That distribution of snowy days is a property of reality. When we come out of the bunker after spending an unknown amount of time there, we assign probability 1/4 to seeing snow, and that number is a function of our uncertainty about the date and our precise knowledge of when snow is out th... (read more)

    0ChristianKl7y
The notion of probability to which you are pointing is the frequentist notion of probability. Eliezer favors the Bayesian notion of probability over the frequentist notion. That might be true, but a person who knows more about the weather might make a more accurate prediction about whether it snows. If I saw the weather report I might conclude that it's p=0.2 that it snows today, even if over the whole year the distribution is that it snows on average every fourth day. If I have more prior information I will predict a different probability that it actually snows.
    0TheAncientGeek7y
    A statistical distribution is objective, and can be an element in a probability calculation, but is not itself probability.
    0vasaka7y
Probability given data is an objective thing too. But the point I'm making is that the probability you assign is a mix of the objective and the subjective: your exact data is the subjective thing, the distribution is objective, and the probability is a function of both.

    E[x]=0.5

    even for the frequentist, and that's what we make decisions with, so focusing on p(x) is a bit of misdirection. The whole frequentist-vs-bayesian culture war is fake. They're both perfectly consistent with well defined questions. (They have to be, because math works.)

    And yes to everything else, except...

    As to whether god plays dice with the universe... that is not in the scope of probability theory. It's math. Your Bayesian is really a pragmatist, and your frequentist is a straw person.

    Great post!

    Kinship, or more accurately the lack of it, is likewise in the mind. That's why it always annoys me to see the parenthetical phrase "no relation" in a newspaper or magazine article.

    It is a mind game, but not the one you're claiming imo. Probabilities are a game about choices, aka co-products. There are lots of ways to specify the alternatives in a co-product.  And once you've done so, you can create an instance of that co-product by injecting one of its constructors.  A co-product is a type, and its constructors create instances of that type. So frequentists count up the instances and then compare the relative frequency. Your mind games are just silly ways of defining different co-products using hypothetical knowledge or no... (read more)

I rushed to post an angry comment about how it's all wrong, but a few seconds after posting the comment (oops) I understood. I've known a great example since school genetics: when two heterozygotes cross (Aa is crossed with Aa), the frequency of homozygotes among the descendants with the dominant trait is 1/3. AA Aa aA aa (aa may never survive to adulthood. Or AA may not survive. Or both survive, but we aren't interested in them.)

There may be something that influences the 1:2:1 proportion (only on one side?), but it's a "You flip a loaded coin. What's your bet on it falling heads?" case.
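The cross described above can be enumerated in a few lines, as a sketch of the 1/3 figure:

```python
from itertools import product

# Enumerate the Aa x Aa cross: among offspring showing the dominant trait
# (genotype contains at least one "A"), the homozygote fraction is 1/3.
offspring = [a + b for a, b in product("Aa", repeat=2)]  # AA, Aa, aA, aa
dominant = [g for g in offspring if "A" in g]            # AA, Aa, aA
homozygous = [g for g in dominant if g == "AA"]          # AA
print(len(homozygous), "/", len(dominant))  # 1 / 3
```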