I am not sure what that means. Example: I claim that this coin is biased. I do a hundred coin flips, and it comes up heads 55 times. Is this "clear evidence"?
Without crunching the numbers, my best guess is no: a fair coin is not very unlikely to come up heads 55 times out of 100. I would guess that no alternative value of P(heads) gets a likelihood ratio much greater than 1 against the fair-coin hypothesis from that result.
If one of the hypotheses is that the coin is unfair in a way that causes it to always come up heads exactly 55 times in 100 flips, that might be clear/strong evidence, but this would require a different mechanism than is usually implied when discussing coin flips.
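For concreteness, here is a quick sanity check of that guess (a sketch in Python; the alternative hypotheses below are my own illustrative choices, not anything specified in the example):

```python
from scipy.stats import binom

n, k = 100, 55

# Likelihood of the observed 55 heads under the fair-coin hypothesis.
fair = binom.pmf(k, n, 0.5)

# Best case for "the coin is biased": pick the P(heads) that fits the data
# best, which is simply k/n = 0.55.
best_alt = binom.pmf(k, n, k / n)

print(f"P(55 heads | fair)         = {fair:.4f}")            # ~0.0485
print(f"P(55 heads | p = 0.55)     = {best_alt:.4f}")         # ~0.0800
print(f"max likelihood ratio       = {best_alt / fair:.2f}")  # ~1.65

# A hypothesis that the coin somehow *always* produces exactly 55 heads
# assigns the observation probability 1, so its ratio is 1/0.0485, about 21.
print(f"'always exactly 55' ratio  = {1 / fair:.1f}")
```

Even the best-fitting bias only makes the data about 1.7 times more likely than a fair coin does, which is why 55 heads out of 100 barely moves the needle.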
Does it ever get strong enough for you to dismiss all claimed evidence of paranormal powers sight unseen? I don't know -- it depends on your prior and on how you update. I expect different results with different people.
I don't know either. This is a rather different question from whether you're getting evidence at all, though.
Without crunching the numbers, my best guess
No need for best guesses -- this is a standard problem in statistics. What it boils down to is that there is a specific distribution (the binomial distribution) of the number of heads that 100 tosses of a fair coin would produce. You look at this distribution, note where 55 heads falls on it... and then what? What counts as clear evidence? How high a probability makes things "likely" or "unlikely"? It's up to you to decide what level of certainty is acceptable to you.
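To put a number on where 55 heads falls on that distribution, here is a sketch of the tail calculation (assuming the usual question of how often a fair coin lands at least that far from 50/50):

```python
from scipy.stats import binom

n, k = 100, 55

# Probability that a fair coin gives a result at least as extreme as 55 heads.
one_sided = binom.sf(k - 1, n, 0.5)               # P(X >= 55), ~0.18
two_sided = one_sided + binom.cdf(n - k, n, 0.5)  # add P(X <= 45), ~0.18

print(f"P(X >= 55 | fair)             = {one_sided:.3f}")
print(f"P(at least as extreme | fair) = {two_sided:.3f}")  # ~0.37 -- not remotely rare
```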
The Bayesian approach, of course, sidesteps all this and j...
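For what it's worth, here is one way the Bayesian calculation can go (a sketch only; the uniform Beta(1, 1) prior over P(heads) is my own illustrative choice):

```python
from scipy.stats import beta

n, k = 100, 55

# With a Beta(1, 1) (uniform) prior over P(heads), observing 55 heads and
# 45 tails gives a Beta(1 + 55, 1 + 45) posterior over P(heads).
posterior = beta(1 + k, 1 + (n - k))

lo95, hi95 = posterior.interval(0.95)
print(f"posterior mean of P(heads) = {posterior.mean():.3f}")    # ~0.549
print(f"P(P(heads) > 0.5 | data)   = {posterior.sf(0.5):.3f}")   # ~0.84
print(f"95% credible interval      = ({lo95:.3f}, {hi95:.3f})")  # roughly (0.45, 0.64)
```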
David Chapman criticizes "pop Bayesianism" as just common-sense rationality dressed up as intimidating math[1]:
What does Bayes's formula have to teach us about how to do epistemology, beyond obvious things like "never be absolutely certain; update your credences when you see new evidence"?
I list below some of the specific things that I learned from Bayesianism. Some of these are examples of mistakes I'd made that Bayesianism corrected. Others are things that I just hadn't thought about explicitly before encountering Bayesianism, but which now seem important to me.
I'm interested in hearing what other people here would put on their own lists of things Bayesianism taught them. (Different people would make different lists, depending on how they had already thought about epistemology when they first encountered "pop Bayesianism".)
I'm interested especially in those lessons that you think followed more-or-less directly from taking Bayesianism seriously as a normative epistemology (plus maybe the idea of making decisions based on expected utility). The LW memeplex contains many other valuable lessons (e.g., avoid the mind-projection fallacy, be mindful of inferential gaps, the MW interpretation of QM has a lot going for it, decision theory should take into account "logical causation", etc.). However, these seem further afield or more speculative than what I think of as "bare-bones Bayesianism".
So, without further ado, here are some things that Bayesianism taught me.
What items would you put on your list?
ETA: ChrisHallquist's post Bayesianism for Humans lists other "directly applicable corollaries to Bayesianism".
[1] See also Yvain's reaction to David Chapman's criticisms.
[2] ETA: My wording here is potentially misleading. See this comment thread.