One minor correction, Eliezer: the link to your essay uses the text "An Intuitive Expectation of Bayesian Reasoning." I think you titled that essay "An Intuitive EXPLANATION of Bayesian Reasoning." (I am 99.9999% sure of this, and would therefore pay especial attention to any evidence inconsistent with this proposition.)
Perhaps this formulation is nice:
0 = (P(H|E)-P(H))P(E) + (P(H|~E)-P(H))P(~E)
The expected change in probability is zero (for if you expected change you would have already changed).
Since P(E) and P(~E) are both positive, to maintain balance if P(H|E)-P(H) < 0 then P(H|~E)-P(H) > 0. If P(E) is large then P(~E) is small, so (P(H|~E)-P(H)) must be large to counteract (P(H|E)-P(H)) and maintain balance.
Hey, sorry if it's mad trivial, but may I ask for a derivation of this? You can start with "P(H) = P(H|E)P(E) + P(H|~E)P(~E)" if that makes it shorter.
(edit):
Never mind, I just did it. I'll post it for you in case anyone else wonders.
1} P(H) = P(H|E)P(E) + P(H|~E)P(~E) [CEE]
2} P(H)P(E) + P(H)P(~E) = P(H|E)P(E) + P(H|~E)P(~E) [because ab + (1-a)b = b]
3} (P(H) - P(H))P(E) + (P(H) - P(H))P(~E) = (P(H|E) - P(H))P(E) + (P(H|~E) - P(H))P(~E) [subtract P(H) from every value to be weighted]
4} (P(H) - P(H))P(E) + (P(H) - P(H))P(~E) = P(H) - P(H) = 0 [because ab + (1-a)b = b]
(conclusion)
5} 0 = (P(H|E) - P(H))P(E) + (P(H|~E) - P(H))P(~E) [by identity syllogism from lines 3 and 4]
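For anyone who wants a sanity check on the algebra, here is a quick numerical sketch (variable names are mine) that samples random joint distributions over H and E and confirms the weighted deviations cancel:

```python
import random

# Verify 0 = (P(H|E)-P(H))P(E) + (P(H|~E)-P(H))P(~E)
# on randomly sampled joint distributions over (H, E).
random.seed(0)
for _ in range(1000):
    # Random probabilities for the four joint cells (H,E), (H,~E), (~H,E), (~H,~E).
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    p_he, p_hne, p_nhe, p_nhne = (c / total for c in cells)
    p_e, p_ne = p_he + p_nhe, p_hne + p_nhne
    p_h = p_he + p_hne
    balance = (p_he / p_e - p_h) * p_e + (p_hne / p_ne - p_h) * p_ne
    assert abs(balance) < 1e-12  # the two weighted shifts always cancel
```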
Eliezer,
Of course you are assuming a strong form of Bayesianism here. Why do we have to accept that strong form?
More precisely, I see no reason why there need be no change in the confidence level. As long as the probability is greater than 50% in one direction or the other, I have an expectation of a certain outcome. So, if some evidence slightly moves the expectation in a particular direction, but does not push it across the 50% line from wherever it started, what is the big whoop?
One reason is Cox's theorem, which shows any quantitative measure of plausibility must obey the axioms of probability theory. Then this result, conservation of expected evidence, is a theorem.
What is the "confidence level"? Why is 50% special here?
"Of course you are assuming a strong form of Bayesianism here. Why do we have to accept that strong form?"
Because it's mathematically proven. You might as well ask "Why do we have to accept the strong form of arithmetic?"
"So, if some evidence slightly moves the expectation in a particular direction, but does not push it across the 50% line from wherever it started, what is the big whoop?"
Because (in this case especially!) small probabilities can have large consequences. If we invent a marvelous new cure for acne, with a 1% chance of death to the patient, it's well below 50% and no specific person using the "medication" would expect to die, but no sane doctor would ever sanction such a "medication".
"Why is 50% special here?"
People seem to have a little arrow in their heads saying whether they "believe in" or "don't believe in" a proposition. If there are two possibilities, 50% is the point at which the little arrow goes from "not believe" to "believe".
Tom,
Bayes' Theorem has its limits. The support must be continuous, and the dimensionality must be finite. Some of the discussion here has raised issues that could be relevant to these kinds of conditions, such as fuzziness about the truth or falsity of H. This is not as straightforward as you claim it is.
Furthermore, I remind one and all that Bayes' Theorem is asymptotic. Even if the conditions hold, the "true" probability is approached only in the infinite time horizon. This could occur so slowly that it might stay on the "wrong" side of 50% well past the time that any finite viewer might hang around to watch.
There is also the black swan problem. The probability could move in the wrong direction until the black swan datum finally shows up pushing it in the other direction, which, again, may not occur during the time period someone is observing. This black swan question is exactly the frame of the discussion here, as it is Taleb who has gone on and on about this business of evidence and the absence thereof.
you can't possibly expect the resulting game plan to shift your beliefs (on average) in a particular direction.
But you can act to change the probability distribution of your future beliefs (just not its mean). That's the entire point of testing a belief. If you have a 50% belief that a ball is under a certain cup, then by lifting the cup, you can be certain that your future belief will be in the set {0%,100%} (with equal probability for 0 and 100, hence the same mean as now).
Getting the right shape of the probability distribution of future belief is the whole skill in testing a hypothesis.
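The cup example is easy to simulate; this toy sketch (my own code, assuming the stated 50% prior) shows the future belief landing in {0, 1} while its mean stays at the prior:

```python
import random

random.seed(1)
prior = 0.5  # belief that the ball is under the cup
posteriors = []
for _ in range(100_000):
    ball_present = random.random() < prior  # world drawn from the prior
    # Lifting the cup reveals everything: belief jumps to 0% or 100%.
    posteriors.append(1.0 if ball_present else 0.0)

mean_posterior = sum(posteriors) / len(posteriors)
print(round(mean_posterior, 2))  # ≈ 0.5: the distribution changed, the mean didn't
```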
But you can't have it both ways - as a matter of probability theory, not mere fairness.
You've proved your case - but there's still enough wriggle room that it won't make much practical difference. One example comes from global warming, which predicts higher average temperatures in Europe - unless it diverts the gulf stream, in which case it predicts lower average temperatures. Consider the two statements: 1) If average temperatures go up in Europe, or down, this is evidence for global warming. 2) If average temperatures go up in Europe, and the gulf stream isn't diverted, or average temperatures go down, while the gulf stream is diverted, this is evidence of global warming.
1) is nonsense, 2) is true. Lots of people say statements that sound like 1), when they mean something like 2). Add an extra detail, and the symmetry is broken.
This weakens the practical power of your point; if an accused witch is afraid, that shows she's guilty; if she's not afraid, in a way which causes the inquisitor to be suspicious, she's also guilty. That argument is flawed, but it isn't a logical flaw (since the similar statement 2) is true).
Then we're back to arguing the legitimacy of these "extra details".
Stuart, if the extra details are observable and specified in advance, the legitimacy is clear-cut.
Barkley, I'm an infinite set atheist, all real-world problems are finite; and you seem to be assuming that priors are arbitrary but likelihood ratios are fixed eternal and known, which is a strange position; and in any case what does that have to do with something as simple as Conservation of Expected Evidence? If anyone attempts to make an infinite-set scenario that violates CEE, it disproves their setup by reductio ad absurdum, and reinforces the ancient wisdom of E. T. Jaynes that no infinity may be assumed except as the proven limit of a finite problem.
Eliezer,
I do not necessarily believe that likelihood ratios are fixed for all time. The part of me that is Bayesian tends to the radically subjective form a la Keynes.
Also, I am a fan of nonstandard analysis. So, I have no problem with infinities that are not mere limits.
a more general law, which I would name Conservation of Expected Evidence
I thought it was pretty clear that I was coining the phrase. I'm certainly not the first person to point out the law. E.g. Robin notes that our best estimate of anything should have no predictable trend. In any case, I posted the mathematical derivation and you certainly don't have to take my word about anything.
Barkley, it looks to me like Eli derived it using the sum and product rules of probability theory.
What Peter said. Barkley, do you question that P(H) = P(H,E) + P(H, ~E) or do you question that P(H,E) = P(H|E)*P(E)?
...
Barkley, you don't realize that Bayes's Theorem is precisely what describes the normative update in beliefs over time? That this is the whole point of Bayes's Theorem?
Before black swans were observed, no one expected to encounter a black swan, and everyone expected to encounter another white swan on occasion. A black swan is huge evidence against, a white swan is tiny additional evidence for. Had they been normative, the two quantities would have balanced exactly.
I'm not sure what to say here. Maybe point to Probability Theory: The Logic of Science or A Technical Explanation of Technical Explanation? I don't know where this misunderstanding is coming from, but I'm learning a valuable lesson in how much Bayesian algebra someone can know without realizing which material phenomena it describes.
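A toy model makes the swan balance concrete (the specific numbers here are my own illustrative assumptions, not from the thread): let H be "all swans are white", with a 99% prior, and suppose that even if H is false a random swan is still white 90% of the time.

```python
p_h = 0.99                 # prior for H = "all swans are white"
p_white_given_h = 1.0      # H guarantees the next swan is white
p_white_given_not_h = 0.9  # assumed: even if ~H, most swans are white

p_white = p_h * p_white_given_h + (1 - p_h) * p_white_given_not_h
post_white = p_h * p_white_given_h / p_white  # tiny boost from yet another white swan
post_black = 0.0                              # one black swan refutes H outright

expected_posterior = p_white * post_white + (1 - p_white) * post_black
# Frequent weak evidence for, rare strong evidence against: exactly balanced.
assert abs(expected_posterior - p_h) < 1e-12
```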
"no one expected to encounter a white swan, and everyone expected to encounter another black swan on occasion. A white swan is huge evidence against, a black swan is tiny additional evidence for." I presume you meant the reverse of this?
per the Black Swan:
The set of potential multicolored variations of Swans is infinite (purple, brown, grey, blue, green, etc). We cannot prove that any one of them does not exist. But every day that passes without our seeing these swans gives us a higher probability that they do not exist. It never equals 1, but it's darn close.
The problem with the Black Swan parable is not that it's untrue, but rather unimportant. The set of things we have no evidence of is infinite. To then pounce across an unexpected observation (eg, a Black Swan, that Kevin Federline is a re...
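The daily-update claim above can be sketched with a simple Bayes loop (the prior and the daily sighting chance are made-up numbers): each uneventful day multiplies the "they exist" hypothesis by the likelihood of seeing nothing.

```python
p_exist = 0.5  # hypothetical prior that, say, purple swans exist
q_see = 0.01   # assumed daily chance of spotting one, given that they exist

posterior = p_exist
for day in range(1000):
    # No sighting today: likelihood (1 - q_see) under "exist", 1 under "don't exist".
    num = posterior * (1 - q_see)
    posterior = num / (num + (1 - posterior))

# P(they do not exist) creeps toward 1 but never reaches it.
print(1 - posterior)
```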
Eliezer,
This is about to scroll off, but, frankly, I do not know what you mean by "normative" in this context. The usual usage of this term implies statements about values or norms. I do not see that anything about this has anything to do with values or norms. Perhaps I do not understand the "whole point of Bayes' Theorem." Then again, I do not see anything in your reply that actually counters the argument I made.
Bottom line: I think your "law" is only true by assumption.
What I mean, Barkley, is that the expression P(H|E), as held at time t=0, should - normatively - describe the belief about H you will hold at time t=2 if you see evidence E at time t=1. Thus, statements true in probability theory about the decomposition of P(H) imply the normative law of Conservation of Expected Evidence, if you accept that probability theory is normative for real-world problems where no one has ever seen an infinite set.
If you don't think probability theory is valid in the real world, I have some Dutch Book trades I'd like to make with y...
Eliezer Yudkowsky, The word "normative" has stood in the way of my understanding what you mean, at least the first few times I saw you use it, before I pegged you as getting it from the heuristics and biases people. It greatly confused me many times when I first encountered them. It's jargon, so it shouldn't be surprising that different fields use it to mean rather different things.
The heuristics and biases people use it to mean "correct," because social scientists aren't allowed to use that word. I think there's a valuable lesson about academics, institutions, or taboos in there, but I'm not sure what it is. As far as I can tell, they are the only people that use it this way.
My dictionary defines normative as "of, relating to, or prescribing a norm or standard." It's confusing enough that it carries those two or three meanings, but to make it mean "correct" as well is asking for trouble or in-groups.
This post was one of the most helpful for me personally, but I recently realized this isn't true in an absolute sense: "There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before."
Suppose the statement "I perform action A" is more probable given position P than given not-P. Then if I start planning to perform action A, this will be evidence that I will perform A. Therefore it will also be evidence for p...
Um, no, if a study shows that people who chew gum also have a gene GXTP27 or whatever, which also protects against cancer, I cannot plan to increase my subjective probability that I have gene GXTP27 by starting to chew gum.
See also: "evidential decision theory", and why nearly all decision theorists reject it.
Here's an example which doesn't bear on Conservation of Expected Evidence as math, but does bear on the statement,
"There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before."
taken at face value.
It's called the Cable Guy Paradox; it was created by Alan Hájek, a philosopher at the Australian National University. (I personally think the term Paradox is a little strong for this scenario.)
Here it is: the cable guy is co...
Eliezer - what if the presence of the gene was decided by an omnipotent being called Omega? Then you'd break out the Spearmint, right?
I'll modify my advice. If the probability that "I do action A in order to increase my subjective probability of position P" is greater given P than given not P, then doing A in order to increase my subjective probability of position P will be evidence in favor of P.
So in many cases, there will such a plan that I can devise. Let's see Eliezer find a way out of this one.
Actually, the Omega situation is a perfect example. Someone facing the two boxes would like to increase his subjective probability that there is a million in the second box, and he is able to do this by deciding to take only the second box. If he decides to take both, on the other hand, he should decrease his credence in the presence of the million, even before opening the box.
Fantastic heuristic! It's like x=y·(z/y)+(1-y)·(x-z)/(1-y) for the rationalist's soul :)
It's worth noting, though, that you can rationally expect your credence in a certain belief "to increase", in the following sense: If I roll a die, and I'm about to show you the result, your credence that it didn't land 6 is now 5/6, and you're 5/6 sure that this credence is about to increase to 1.
I think this is what makes people feel like they can have a non-trivial expected value for their new beliefs: you can expect an increase or expect a decrease, but quantitatively the two possibilities exactly cancel each out in the expected value of your belief.
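In numbers (using exact fractions to avoid rounding): before the reveal your credence in "not a 6" is 5/6, you expect an increase with probability 5/6 and a collapse to 0 with probability 1/6, and the two possibilities cancel exactly in expectation.

```python
from fractions import Fraction

prior = Fraction(5, 6)   # credence that the die didn't land 6
p_up = Fraction(5, 6)    # chance the reveal pushes this credence to 1
p_down = Fraction(1, 6)  # chance it collapses to 0

expected_posterior = p_up * 1 + p_down * 0
assert expected_posterior == prior  # an increase is likely, but the mean is unmoved
```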
I have a theory that I will post this comment. By posting the comment, I'm seeking evidence to confirm the theory. If I post the comment, my probability will be higher than before.
Similarly, in Newcomb's problem, I seek evidence that box A has a million dollars, so I refrain from taking box B. There was money in box B, but I didn't take it, because that would give me evidence that box A was empty.
In short, there's one exception to this: when your choice is the evidence.
Wouldn't the rule be something more like:
((P(H|E) > P(H)) if and only if (P(H) > P(H|~E))) and ((P(H|E) = P(H)) if and only if (P(H) = P(H|~E)))
So, if some statement is evidence of a hypothesis, its negation must be evidence against. And if some statement's truth value is independent of a hypothesis, then so is that statement's negation.
This is implied by the expectation of posterior probabilities version. Since P(E) + P(~E) = 1, that means that P(H|E) and P(H|~E) are either equal, or one is greater than P(H) and one is less than. If they were both l...
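The sign claim is easy to spot-check numerically; this sketch (variable names mine) samples random joint distributions and confirms that P(H|E)-P(H) and P(H|~E)-P(H) never share a sign:

```python
import random

random.seed(2)
for _ in range(1000):
    cells = [random.random() for _ in range(4)]
    total = sum(cells)
    p_he, p_hne, p_nhe, p_nhne = (c / total for c in cells)
    p_e, p_ne = p_he + p_nhe, p_hne + p_nhne
    p_h = p_he + p_hne
    d_e = p_he / p_e - p_h     # P(H|E) - P(H)
    d_ne = p_hne / p_ne - p_h  # P(H|~E) - P(H)
    # If E is evidence for H, ~E must be evidence against (or both are neutral).
    assert d_e * d_ne <= 1e-15
```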
Hi, I'm new here but I've been following the sequences in the suggested order up to this point.
I have no problem with the main idea of this article. I say this only so that everyone knows that I'm nitpicking. If you're not interested in nitpicking then just ignore this post.
I don't think that the example given below is a very good one to demonstrate the concept of Conservation of Expected Evidence:
...If you argue that God, to test humanity's faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existen
Is this the same as Jaynes' method for construction of a prior using transformation invariance on acquisition of new evidence?
Does conservation of expected evidence always uniquely determine a probability distribution? If so, it should eliminate a bunch of extraneous methods of construction of priors. For example, you would immediately know if an application of MaxEnt was justified.
Therefore, for every expectation of evidence, there is an equal and opposite expectation of counter-evidence.
Eliezer, isn't the "equal" part untrue? I like the parallel with Newton's 3rd law, but the two terms P(H|E)*P(E) and P(H|~E)*P(~E) aren't numerically equal - we only know that they sum to P(H).
For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.
Old post, but isn't evidence that disconfirms the theory X equal to confirming ~X? Is ~X ineligible to be considered a theory?
The hyperlink "An Intuitive Explanation of Bayesian Reasoning" is broken. The current location of that essay is here: http://yudkowsky.net/rational/bayes
Can someone tell me if I understand this correctly: He is saying that we must be clear beforehand about what constitutes evidence for, what constitutes evidence against, and what doesn't constitute evidence either way?
Because in his examples it seems that what is being changed is what counts as evidence. It seems that no matter what transpires (in the witch trials for example) it is counted as evidence for. This is not the same as changing the hypothesis to fit the facts. The hypothesis was always 'she's a witch'. Then the evidence is interpreted as supportive of the hypothesis no matter what.
Hi, new here.
I was wondering if I've interpreted this correctly:
'For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.'
Does this mean that it is impossible to prove the truth of a theory? Because the only evidence that can exist is evidence that falsifies the theory, or...
Closely related is the law of total expectation: https://en.wikipedia.org/wiki/Law_of_total_expectation
It states that E[E[X|Y]]=E[X].
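A quick simulation illustrates the law (the mixture here is a made-up example: X averages 10 when a 30%-weighted coin Y lands heads and 2 otherwise):

```python
import random

random.seed(3)
p_heads = 0.3
cond_mean = {True: 10.0, False: 2.0}  # E[X | Y]

# Law of total expectation: E[X] = E[E[X|Y]] = 0.3*10 + 0.7*2 = 4.4
analytic = p_heads * cond_mean[True] + (1 - p_heads) * cond_mean[False]

total, n = 0.0, 200_000
for _ in range(n):
    y = random.random() < p_heads
    total += random.gauss(cond_mean[y], 1.0)  # noise leaves the mean unchanged

assert abs(total / n - analytic) < 0.05
```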
I do not understand the validity of this statement:
There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before.
Given a temporal proposition A among a set of other mutually exclusive temporal propositions {A, B, C...}, demonstrating that B, C, and the other candidates do not meet the evidence so far while A meets the evidence so far does raise our confidence in the proposition *continuing to hold*. This is standard Bayesian inferenc...
Criticism of this article was found at a talk page at RationalWiki.
...The Sequences do not contain unique ideas, and they present the ideas they do contain in misleading ways using parochial language. The "Law of Conservation of Expected Confidence" essay, for instance, covers ideas that are often covered in introductory philosophical methods or critical thinking courses. There is no novelty either in the idea that your expected future credence must match your current credence (otherwise, why not update your credence now?), nor in the idea that if E is eviden
Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis ('prudence in criminal cases') in which he bitingly described the decision tree for condemning accused witches: If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.
Spee acted as confessor to many witches; he was thus in a position to observe every branch of the accusation tree, that no matter what the accused witch said or did, it was held a proof against her. In any individual case, you would only hear one branch of the dilemma. It is for this reason that scientists write down their experimental predictions in advance.
But you can't have it both ways—as a matter of probability theory, not mere fairness. The rule that "absence of evidence is evidence of absence" is a special case of a more general law, which I would name Conservation of Expected Evidence: The expectation of the posterior probability, after viewing the evidence, must equal the prior probability.
Therefore, for every expectation of evidence, there is an equal and opposite expectation of counterevidence.
If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you're very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you've actually seen it—should not shift your prior beliefs. (Again, if this is not intuitively obvious, see An Intuitive Explanation of Bayesian Reasoning.)
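The weak/strong balance can be made concrete with a small numeric sketch (the numbers here are illustrative assumptions, not from the essay):

```python
# Illustrative numbers (my own): a confident theory strongly anticipates
# its predicted outcome E.
p_h = 0.9
p_e_given_h, p_e_given_not_h = 0.99, 0.5

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
post_if_e = p_h * p_e_given_h / p_e                  # small increment
post_if_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)  # huge blow

expected = p_e * post_if_e + (1 - p_e) * post_if_not_e
assert abs(expected - p_h) < 1e-12  # on average, exactly as confident as before
print(round(post_if_e, 3), round(post_if_not_e, 3))  # → 0.947 0.153
```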
So if you claim that "no sabotage" is evidence for the existence of a Japanese-American Fifth Column, you must conversely hold that seeing sabotage would argue against a Fifth Column. If you claim that "a good and proper life" is evidence that a woman is a witch, then an evil and improper life must be evidence that she is not a witch. If you argue that God, to test humanity's faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existence of God.
Doesn't quite sound right, does it? Pay attention to that feeling of this seems a little forced, that quiet strain in the back of your mind. It's important.
For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.
This realization can take quite a load off your mind. You need not worry about how to interpret every possible experimental result to confirm your theory. You needn't bother planning how to make any given iota of evidence confirm your theory, because you know that for every expectation of evidence, there is an equal and opposite expectation of counterevidence. If you try to weaken the counterevidence of a possible "abnormal" observation, you can only do it by weakening the support of a "normal" observation, to a precisely equal and opposite degree. It is a zero-sum game. No matter how you connive, no matter how you argue, no matter how you strategize, you can't possibly expect the resulting game plan to shift your beliefs (on average) in a particular direction.
You might as well sit back and relax while you wait for the evidence to come in.
...human psychology is so screwed up.