I mean "theoretical evidence" as something that contrasts with empirical evidence. Alternative phrases include "inside view evidence" and "gears-level evidence".
I personally really like the phrase "gears-level evidence". What I'm trying to refer to is something like: "our knowledge of how the gears turn would imply X". However, I can't recall ever hearing anyone use the phrase "gears-level evidence", whereas I think I have heard "theoretical evidence" used before.
Here are some examples that try to illuminate what I am referring to.
Effectiveness of masks
IIRC, early in the coronavirus pandemic there was empirical evidence suggesting that masks are not effective. However, as Zvi talked about, "belief in the physical world" would imply that they are effective.
Foxes vs hedgehogs
Consider Isaiah Berlin’s distinction between “hedgehogs” (who rely more on theories, models, global beliefs) and “foxes” (who rely more on data, observations, local beliefs).
- Blind Empiricism
Foxes place more weight on empirical evidence, hedgehogs on theoretical evidence.
Harry's dark side
"Then I won't do that again! I'll be extra careful not to turn evil!"
"Heard it."
Frustration was building up inside Harry. He wasn't used to being outgunned in arguments, at all, ever, let alone by a Hat that could borrow all of his own knowledge and intelligence to argue with him and could watch his thoughts as they formed. Just what kind of statistical summary do your 'feelings' come from, anyway? Do they take into account that I come from an Enlightenment culture, or were these other potential Dark Lords the children of spoiled Dark Age nobility, who didn't know squat about the historical lessons of how Lenin and Hitler actually turned out, or about the evolutionary psychology of self-delusion, or the value of self-awareness and rationality, or -
"No, of course they were not in this new reference class which you have just now constructed in such a way as to contain only yourself. And of course others have pleaded their own exceptionalism, just as you are doing now. But why is it necessary? Do you think that you are the last potential wizard of Light in the world? Why must you be the one to try for greatness, when I have advised you that you are riskier than average? Let some other, safer candidate try!"
The Sorting Hat has empirical evidence that Harry is at risk of going dark. Harry's understanding of how the gears turn in his brain makes him think that he is not actually at risk of going dark.
Instincts vs A/B tests
Imagine that you are working on a product. A/B tests are showing that option A is better, but your instincts, based on your understanding of how the gears turn, suggest that B is better.
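One way to formalize that tension (a minimal sketch with made-up numbers, not anything from the post) is to encode the instinct as a Bayesian prior and let the A/B data update it, e.g. with a conjugate Beta-Binomial model:

```python
# Sketch: weighing "gears-level" instincts against A/B data with a
# Beta-Binomial model. All numbers here are invented for illustration.

def beta_posterior(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior plus Binomial data."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Instincts say option B converts ~8% of the time; encode that as a prior
# equivalent to having already seen 80 conversions in 1000 trials.
prior_alpha, prior_beta = 80, 920

# The A/B test shows B converting only 5% (50 of 1000 trials).
post_alpha, post_beta = beta_posterior(prior_alpha, prior_beta,
                                       successes=50, failures=950)

# The posterior estimate lands between the instinct (8%) and the data (5%);
# how close it sits to each depends on how strong the prior was.
print(beta_mean(post_alpha, post_beta))  # 0.065
```

The point of the sketch is that neither source of evidence simply wins: the instinct and the data each pull the estimate in proportion to how much weight they carry.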
Posting up in basketball
Over the past 5-10 years in basketball, there has been a big push to use analytics more. Analytics people hate post-ups (an approach to scoring); the data say they are low-efficiency.
I agree with that in a broad sense, but I believe that one specific type of post-up is very high-efficiency: trying to get a deep-position post seal when you have a big height-weight advantage. My knowledge of how the gears turn strongly indicates to me that this would be high-efficiency offense. However, analytics people still seem to advise against this sort of offense.
Maybe "destroying the theory" was not a good choice of words - the theory will more likely be "demoted" to the status of "very good approximation", like Newtonian gravity. But the distinction I'm trying to make here is between super-accurate sciences like physics, which give exact predictions, and still-accurate-but-less-exact fields like medicine. If medicine says masks are 99% effective, and they were not effective for 100 out of 100 patients, the theory still assigned a probability of 10^-200 that this would happen. You need to update on that, but you don't have to "throw the theory out". But if physics says a photon should fire and it didn't fire - then the theory is wrong. Your model did not assign any probability at all to the possibility of the photon not firing.
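As a quick sanity check on that figure (my own throwaway sketch, not from the post): "99% effective" gives each patient a 0.01 chance of non-protection, so 100 independent failures have probability 0.01^100 = 10^-200. In code, working in log space to avoid underflow:

```python
from math import log10

# If the theory gives each patient a 0.99 chance of protection, then
# 100 independent failures in a row have probability 0.01**100.
# Compute the base-10 log rather than the raw number to avoid underflow.
p_fail_each = 0.01
log10_p_all_fail = 100 * log10(p_fail_each)
print(log10_p_all_fail)  # about -200, i.e. a probability of ~10^-200
```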
And before anyone brings up 0 And 1 Are Not Probabilities, remember that in the real world:
This means that the falsifying evidence, on its own, does not destroy the theory - but it can still weaken it severely. And my point (which I've detoured too far from) is that a perfect Bayesian should reach the same final posterior no matter the order or stage at which they apply the evidence.
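That order-independence claim is easy to check numerically. Here is a minimal sketch (my own illustration, with made-up numbers) showing that updating on two independent pieces of evidence gives the same posterior in either order, or all at once:

```python
# Bayesian updating is order-independent for independent evidence:
# E1 then E2, E2 then E1, and both at once all yield the same posterior.
# All probabilities below are invented for illustration.

def update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H), P(E|H), and P(E|~H)."""
    num = prior * likelihood_h
    return num / (num + (1 - prior) * likelihood_not_h)

prior = 0.3
e1 = (0.8, 0.4)   # (P(E1|H), P(E1|~H))
e2 = (0.1, 0.5)   # (P(E2|H), P(E2|~H))

a = update(update(prior, *e1), *e2)            # E1 first, then E2
b = update(update(prior, *e2), *e1)            # E2 first, then E1
c = update(prior, e1[0] * e2[0], e1[1] * e2[1])  # both at once

print(a, b, c)  # all three agree (up to floating-point error)
```

The third line relies on the two pieces of evidence being conditionally independent given H; with dependent evidence you would multiply conditional likelihoods instead, but the order of updates still would not matter.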