Lumifer comments on What Bayesianism taught me - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
That presents an interesting chicken-and-egg problem, don't you think?
I can't consider the existence or non-existence of something without that something "obtruding on my awareness," which automatically grants it evidence for existence. So I can never hold evidence against the existence of anything: as soon as the thing enters my mind, poof! the evidence against disappears and evidence for magically appears in its place.
Anyway, I know the point you're trying to make. But taking it to absurd lengths leads to absurd results which are generally not the desired outcome.
Sorry, I wasn't clear. I didn't mean "obtruding on your awareness" in the sense of having the idea of the thing occur to you. I meant that you encounter the thing in a way that is overwhelming evidence for its existence. Like, maybe you aren't looking for goblins, but you might one day open the lid of your trashcan and see a goblin in there.
I am confused. So if you DON'T "encounter the thing in a way that is overwhelming evidence for its existence" then you have evidence against its existence?
That doesn't seem reasonable to me.
Yes. Let H be the hypothesis that goblins exist, and let E be the event that I see a goblin in my trashcan.
Let us further suppose that the prior probability that I assign to the existence of goblins is very low.
Then P(H | E) > P(H). Hence, since P(H) is a weighted average of P(H | E) and P(H | ~E), it follows that P(H | ~E) < P(H). Therefore, the fact that I haven't seen a goblin in my trashcan is evidence against the existence of goblins.
Of course, it may be very weak evidence. It may not be evidence that I, as a computationally limited being, should take any time to weigh consciously. But it is still evidence.
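A quick numerical check of this identity: by the law of total probability, P(H) is a weighted average of P(H | E) and P(H | ~E), so if conditioning on E raises the probability, conditioning on ~E must lower it. A minimal sketch with made-up numbers for the goblin case (all figures are illustrative assumptions, not estimates):

```python
# Toy numbers: H = "goblins exist", E = "I see a goblin in my trashcan".
p_h = 1e-6              # prior P(H): very low
p_e_given_h = 1e-4      # P(E | H): unlikely even if goblins exist
p_e_given_not_h = 1e-9  # P(E | ~H): hallucination, prank, etc.

# Law of total probability for P(E).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' theorem for both possible observations.
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

assert p_h_given_e > p_h       # seeing a goblin is evidence for H
assert p_h_given_not_e < p_h   # not seeing one is (weak) evidence against H
print(p_h, p_h_given_not_e)    # the downward adjustment is tiny
```

As the comment says, the update from not seeing a goblin is real but minuscule: here it shifts the probability by roughly one part in ten thousand of an already tiny prior.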
As I said, I understand the point. To demonstrate my problem, replace goblins with tigers. I don't think the fact that I haven't seen a tiger in my trashcan is evidence against the existence of tigers.
In a world where tigers didn't exist, I wouldn't expect to see one in my trashcan. In a world where tigers did exist, I also wouldn't expect to see a tiger in my trashcan, but I wouldn't be quite as surprised if I did see one. My prior probability that tigers exist is very high, since I have lots of independent reasons to believe that they do exist. The conditional probability of observing no tiger in my trashcan is skewed very slightly towards the world where tigers do not exist, but not enough to affect a prior probability that is already very close to 100%. You could say the same for the goblin example: my prior probability is close to zero, and although I'm more likely not to observe a goblin in my trashcan in the world where goblins don't exist, I'm also not likely to see one in the world where goblins do exist. The prior probability is far more skewed than the conditional probability, so the evidence of not observing a goblin doesn't affect my belief much.
The fact that you haven't seen a tiger in your trashcan is, however, evidence that there is no tiger in your trashcan.
Edit: Which I think is more or less harmonious with your original post. It appears to me, however, that at some step in the discussion, there was a leap of levels from "absence of evidence for goblins in the trashcan is evidence of absence of goblins from the trashcan" to "absence of evidence for goblins in the trashcan is evidence for the complete nonexistence of goblins".
For practical purposes, sure, this is a case where "absence of evidence is evidence of absence" is not a very useful refrain. The evidence is so weak that it's a waste of time to think about it. P(I see a tiger in my trashcan|Tigers exist) is very small, and not much higher than P(I see [hallucinate] a tiger in my trashcan|Tigers don't exist). A very small adjustment to P(Tigers exist), of which you already have very high confidence, isn't worth keeping track of... unless maybe you're systematically searching the world for tigers, by examining small regions one at a time, each no more likely to contain a tiger than your own trashcan. Then you really would want to keep track of that very small amount of evidence: if you round it down to no evidence at all, then even after searching the whole world, you'd still have no evidence about tigers!
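The "searching the world one region at a time" point can be sketched numerically. Assuming, hypothetically, that each empty region yields the same tiny likelihood ratio, the individual updates are negligible but their cumulative product is not, which is why rounding each one down to zero evidence goes wrong:

```python
import math

# Hypothetical per-region likelihoods, assumed for illustration.
p_see_given_exists = 1e-9  # P(see a tiger in this region | tigers exist)
p_see_given_not = 1e-12    # P(hallucinate one | tigers don't exist)

# Likelihood ratio of one empty region, in favor of "tigers exist".
# It is fractionally below 1: each empty region is a sliver of evidence against.
lr_empty = (1 - p_see_given_exists) / (1 - p_see_given_not)

prior_odds = 999.0  # 99.9% confidence that tigers exist

# One empty region barely moves the odds...
print(prior_odds * lr_empty)  # about 998.999999

# ...but over ten billion regions the accumulated log-odds shift is large.
n = 10**10
posterior_odds = math.exp(math.log(prior_odds) + n * math.log(lr_empty))
print(posterior_odds)  # below 1: tigers now look unlikely
```

Working in log-odds, as here, is the practical way to "keep track of that very small amount of evidence" without it vanishing into rounding error.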
It's not fully accurate to say "only provided you have looked, and looked in the right place," but it might be a useful heuristic. "Be mindful of the strength of evidence, not just its presence" would be more precise, because looking in the right place does provide a much higher likelihood ratio than not looking at all.
Is it because you deny that P(H | E) > P(H) in this case? Or do you acknowledge that P(H | ~E) < P(H) is true in this case, but don't interpret it as meaning "the fact that I haven't seen a tiger in my trashcan is evidence against the existence of tigers"?
If you deny that P(H | E) > P(H), this might be because your implicit prior knowledge already screens off E from H. Perhaps we should, following Jaynes, always keep track of our prior knowledge X. Then we would rewrite P(H | E) > P(H) as P(H | E & X) > P(H | X). But if your prior knowledge already includes, say, seeing tigers at the zoo, then the additional experience of seeing a tiger in your trashcan may not make tigers any more likely to exist. That is, you could have P(H | E & X) = P(H | X).
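The screening-off claim can be illustrated with a deliberately extreme toy model: if the background knowledge X already makes H certain (a genuine zoo sighting can only happen if tigers exist), then further conditioning on E has nothing left to shift. All numbers below are assumptions chosen to make the structure visible:

```python
# Toy model: H = tigers exist, X = saw tigers at the zoo,
# E = tiger in my trashcan. Assume E is independent of X given H.
p_h = 0.999
p_x_given_h = 0.5        # chance of having seen one at the zoo, if they exist
p_x_given_not_h = 0.0    # no tigers means no genuine zoo sighting
p_e_given_h = 1e-9
p_e_given_not_h = 1e-12

# P(H | X) by Bayes' theorem: X makes H certain here.
p_hx = p_x_given_h * p_h
p_x = p_hx + p_x_given_not_h * (1 - p_h)
p_h_given_x = p_hx / p_x  # = 1.0 exactly

# P(H | E & X): since P(~H | X) = 0, the E-update cannot move anything.
p_h_given_ex = (p_e_given_h * p_h_given_x) / (
    p_e_given_h * p_h_given_x + p_e_given_not_h * (1 - p_h_given_x)
)

assert p_h_given_x == 1.0
assert p_h_given_ex == p_h_given_x  # E is screened off by X
```

With a less extreme X (say, P(H | X) = 0.9999), E would still carry a sliver of evidence; exact screening off requires the background knowledge to do all the work.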
In that case, if you've already seen tigers at the zoo, then their absence from your trashcan does not count as evidence against their existence.
In this case I don't think P(H | ~E) < P(H) applies.
/me looks into the socks drawer, doesn't find any tigers
/me adjusts downwards the probability of tigers existing
/me looks into the dishwasher, doesn't find any tigers
/me further adjusts downwards the probability of tigers existing
/me looks into the fridge, doesn't find any tigers
...
You get the idea.
Sorry, I think that I was editing my comment after you replied. (I have no excuse. I think what happened was that I was going to make a quick typofix, but the edit grew longer, and by the end I'd forgotten that I had already submitted the comment.)
How do you react to my conjecture that your background knowledge screens off (or seems to) the experience of seeing a tiger in your trashcan from the hypothesis that tigers exist?
I don't think screening off helps with the underlying problem.
Let's recall where we started. I commented on the expression "absence of evidence is evidence of absence" by saying "Only provided you have looked, and looked in the right place."
The first part should be fairly uncontroversial. If you don't look you don't get any new evidence, so there's no reason to update your beliefs.
Now, the second part, "the right place". In this thread Wes_W gives a numerical example that involves searching for tigers in houses and says that you need to search about 5 billion houses to drop your confidence to 90% -- and if you search a trillion houses and still don't find a tiger, "then you'd be insane to still claim that tigers probably do exist."
Well, let's take this example as given but change one little thing. Let's say I'm not looking for tigers -- instead, I heard that there are two big rocks, Phobos and Deimos, and I'm looking for evidence of their existence.
I search a house and I don't find them. I search 5 billion houses and I don't find them. I search a trillion houses and still don't find them. At this point would I be insane to believe Phobos and Deimos exist?
That is the issue of "looking in the right place".
I agree that the "looking" part is important: Looking and not finding evidence is a different kind of "absence of evidence" than just not looking.
I think it would indeed be pretty silly to maintain that a) they exist and b) each house has an independent 10^-9 chance of containing them, after searching a trillion houses and finding neither. But if you didn't place much credence in anything like b) in the first place, your confidence in a) may not be meaningfully altered. If you already thought Phobos and Deimos were moons of Mars, then you would have extremely minimal evidence against their existence. But again, we can construct a Paradox of the Heap-type setup where you search the solar system, one household-volume at a time, and if all of them come up empty you should end up thinking Phobos and Deimos probably aren't real, so each individual household-volume must be some degree of evidence.
My thought here - and perhaps we agree on this, in which case I'm happy to concede the point - is that the need to look in the right place is technically already covered by the relevant math, specifically by the different strengths of evidence. But for us puny humans that are doing this without explicit numerical estimates, and who aren't well-calibrated to nine significant figures, it's a good rule of thumb.
(This comment has been edited multiple times. My apologies for any confusion.)
Suppose the chance of finding a tiger somewhere in a given household, on a given day, is one in a billion. Or so say the pro-tigerians. The tiger denialist faction, of course, claims that statistic is made-up, and tigers don't actually exist. But one household in a trillion might hallucinate a tiger, on any given day.
Today, you search your entire house - the dishwasher AND the fridge AND the trashcan etc.
P(You find a tiger|tigers exist) = .000000001
P(You find a tiger|tigers don't exist) = .000000000001
P(You don't find a tiger|tigers exist) = .999999999
P(You don't find a tiger|tigers don't exist) = .999999999999
And suppose you are 99.9% confident that tigers exist - you think you could make statements like that a thousand times in a row, and be wrong only once. (Perhaps rattling off all the animals you know.) Your prior odds ratio is 999 to 1. So you take your prior odds, (.999/.001) and multiply by the likelihood ratio, (.999999999/.999999999999), to get a posterior odds ratio of 998.999999002 to 1. This is, clearly, a VERY small adjustment.
What if you search more households: how many would you have to search, without finding a tiger, before you dropped just to 90% confidence in tigers, where you still think tigers exist but would not willingly bet your life on it? If I've done the math right, about five billion. There probably aren't that many households in the world, so searching every house would be insufficient to get you down to just 90% confidence, much less 10% or whatever threshold you'd like to use for "tigers probably don't exist".
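That arithmetic can be checked directly. Using the per-household likelihoods stated above, the number of empty searches needed to fall from 999:1 odds to 9:1 (90% confidence) comes out to roughly five billion, and a trillion empty searches pushes the odds to essentially zero:

```python
import math

# Per-household likelihoods from the example above.
lr = (1 - 1e-9) / (1 - 1e-12)  # likelihood ratio per empty house, pro-tiger

prior_odds = 999.0   # 99.9% confident tigers exist
target_odds = 9.0    # 90% confidence

# Solve prior_odds * lr**n = target_odds for n.
n = math.log(target_odds / prior_odds) / math.log(lr)
print(f"{n:.3e}")    # roughly 4.7e9: about five billion empty houses

# After a trillion empty houses, the log-odds collapse entirely.
log_odds = math.log(prior_odds) + 1e12 * math.log(lr)
print(log_odds)      # around -992: astronomically against tigers
```

Solving in log-odds avoids underflow: multiplying a likelihood ratio this close to 1 a trillion times directly would be numerically hopeless.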
(And my one-in-a-billion figure is probably far too high, and so searching every household in the world should get you even less adjustment...)
But if you could search a trillion houses at those odds, and still never found a tiger - then you'd be insane to still claim that tigers probably do exist.
And if a trillion searches can produce such a shift, then each individual search can't produce no evidence. Just very little.
I've posted a comment that answers you here
Under the technical definition of "evidence", yes. In practice, it's a question of how likely you would be to have seen one by now if they were real.
Bayes' Theorem implies that you can take the prior odds of the hypothesis A, that is, the ratio P(A)/P(a) of its probability to the probability of its being false (writing a for not-A), and update that to take the evidence E into account by multiplying in the ratio of the probability of that evidence given A to its probability given a: new odds = old odds * P(E|A)/P(E|a).
Play around with that until you see the truth of the claim you asked about. Note that P(A) = 1 - P(a).
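In odds form, the update is a one-line multiplication. A minimal sketch (the function name and the tiger numbers are mine, reused from the example earlier in the thread):

```python
def update_odds(prior_odds: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Multiply prior odds for a hypothesis by the likelihood ratio P(E|H)/P(E|~H)."""
    return prior_odds * (p_e_given_h / p_e_given_not_h)

# Seeing a tiger in the trashcan: strong evidence, odds jump a thousandfold.
print(update_odds(999.0, 1e-9, 1e-12))

# Not seeing one: very weak evidence against, odds barely budge.
print(update_odds(999.0, 1 - 1e-9, 1 - 1e-12))
```

The two calls use complementary likelihoods, which is the point of the whole thread: if observing E would raise the odds, then observing ~E must lower them, however slightly.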