There are a great many things that you have never even thought of, and you know nothing about them. They have no probabilities assigned and, worse, they behave as if they had probability zero. You can't avoid that, because there are far more things you ought to enumerate than you possibly can, by a very large factor.
You don't need to enumerate beliefs to assign them nonzero probability. You can have a catch-all "stuff nothing like anything that'd ever even occur to me, unless it smacked me in the face" category, to which you can assign nonzero probability.
Those beliefs don't propagate where they should, that's the issue, and the universe doesn't care whether you made an excuse to make it sound better. Those beliefs still have zero effect on your inferences, and that's what matters. And when you get some of that weak "evidence", as in your Zeus example, it doesn't go towards the other hypotheses; it goes towards Zeus, because that's the hypothesis you have been prompted with.
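The point about a prompted hypothesis soaking up weak evidence while the catch-all barely moves can be made concrete with a toy discrete Bayesian update. This is only an illustration; the hypothesis names and all the numbers below are made up, not taken from the discussion above.

```python
# Toy Bayesian update with a catch-all hypothesis. All numbers are
# invented for illustration only.

priors = {
    "naturalism": 0.900,
    "zeus": 0.001,       # a hypothesis you happen to have been prompted with
    "catch-all": 0.099,  # "stuff nothing like anything I'd ever think of"
}

# Likelihood of some weak anecdotal "evidence" under each hypothesis.
# The anecdote is framed in terms of Zeus, so only the Zeus hypothesis
# assigns it a distinctly higher likelihood. The catch-all, being
# unarticulated, cannot concentrate likelihood on any particular
# observation, so it gets the same generic likelihood as naturalism.
likelihoods = {
    "naturalism": 0.01,
    "zeus": 0.20,
    "catch-all": 0.01,
}

def update(priors, likelihoods):
    """Posterior over hypotheses via Bayes' rule (discrete case)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = update(priors, likelihoods)

# The named hypothesis gains roughly twentyfold relative to its prior,
# while the catch-all's share barely changes: an unarticulated category
# cannot "absorb" evidence framed in terms of a specific rival.
for h in priors:
    print(f"{h}: {priors[h]:.4f} -> {posterior[h]:.4f}")
```

This is exactly the propagation problem described above: unless you deliberately ask how the catch-all should respond to the evidence, the mass flows to whichever specific hypothesis the evidence was phrased around.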
Or when you process an anecdote, it would seem to me that with your qualitative Bayes you are going to tend to affect your belief about the conclusion...
David Chapman criticizes "pop Bayesianism" as just common-sense rationality dressed up as intimidating math[1]:
What does Bayes's formula have to teach us about how to do epistemology, beyond obvious things like "never be absolutely certain; update your credences when you see new evidence"?
I list below some of the specific things that I learned from Bayesianism. Some of these are examples of mistakes I'd made that Bayesianism corrected. Others are things that I just hadn't thought about explicitly before encountering Bayesianism, but which now seem important to me.
I'm interested in hearing what other people here would put on their own lists of things Bayesianism taught them. (Different people would make different lists, depending on how they had already thought about epistemology when they first encountered "pop Bayesianism".)
I'm interested especially in those lessons that you think followed more-or-less directly from taking Bayesianism seriously as a normative epistemology (plus maybe the idea of making decisions based on expected utility). The LW memeplex contains many other valuable lessons (e.g., avoid the mind-projection fallacy, be mindful of inferential gaps, the MW interpretation of QM has a lot going for it, decision theory should take into account "logical causation", etc.). However, these seem further afield or more speculative than what I think of as "bare-bones Bayesianism".
So, without further ado, here are some things that Bayesianism taught me.
What items would you put on your list?
ETA: ChrisHallquist's post Bayesianism for Humans lists other "directly applicable corollaries to Bayesianism".
[1] See also Yvain's reaction to David Chapman's criticisms.
[2] ETA: My wording here is potentially misleading. See this comment thread.