I suppose we all came across Bayesianism from different points of view - my list is quite a bit different.
For me the biggest one is that the degree to which I should believe something is determined by the evidence, and IS NOT A MATTER OF CHOICE or personal belief. If I believe something with probability X, and then see evidence Y bearing on it, the probability Z with which I should now believe it is a mathematical matter, and not a "matter of opinion."
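To make the mechanical nature of the update concrete, here is a minimal sketch (the helper name and all the numbers are my own illustrative choices, nothing canonical): given a prior P(H) and the likelihoods P(E|H) and P(E|~H), Bayes' theorem fixes the posterior, leaving nothing over for opinion.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) from P(H), P(E|H) and P(E|~H) via Bayes' theorem."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h  # total probability of seeing E
    return prior * p_e_given_h / p_e

# Made-up numbers: prior belief 0.30, evidence three times as likely
# if the hypothesis is true as if it is false.
posterior = bayes_update(prior=0.30, p_e_given_h=0.60, p_e_given_not_h=0.20)
print(posterior)  # about 0.56 -- fixed by the inputs, not by preference
```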
The prior seems to be a get-out clause here, but since all updates are in principle layered on top of the first prior I had before receiving any evidence of any kind, it surely seems a mistake to give it too much weight.
My own view is also that often it's not optimal to update optimally. Why? Lack of computing power between the ears. Rather than straining the grey matter to squeeze the most out of the evidence you already have, it's often better to just go out and get more evidence to compensate. Quantity of evidence beats out all sorts of problems with priors or analysis errors, and makes it harder to reach the wrong conclusion.
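A toy illustration of how quantity of evidence swamps the prior (all numbers invented, and the pieces of evidence assumed independent): two people start with wildly different priors, both see the same run of weak evidence, and end up in nearly the same place.

```python
def update_many(prior, likelihood_ratios):
    """Apply a sequence of independent likelihood ratios P(E|H)/P(E|~H) to a prior."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Ten weak, independent pieces of evidence, each favouring H at 2:1 (made-up numbers).
evidence = [2.0] * 10
for prior in (0.05, 0.80):
    print(prior, "->", round(update_many(prior, evidence), 4))
# 0.05 -> 0.9818 and 0.80 -> 0.9998: both end up near certainty
# despite starting from very different priors.
```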
On a non-Bayesian note, I have a rule to be careful of cases built out of lots of small bits of evidence combined together. This looks fine mathematically until someone points out all the little bits of evidence pointing to something else which I ignored or never even saw. Selection effects apply more strongly to cases which consist of lots of little parts.
Of course, if you have the chance to actually do the Bayesian mathematics rather than work informally in your head, you can update exactly as you should, and a case built from lots of little bits of evidence is fine. But without a formal framework you can expect your innate wetware to mess up this type of analysis.
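As a sketch of what the formal version looks like (the helper name and the likelihood ratios are invented for illustration): independent pieces of evidence combine additively in log-odds space, and the selection effect above corresponds to summing only the terms you happened to notice.

```python
import math

def posterior_from_log_odds(prior, log_likelihood_ratios):
    """Combine independent pieces of evidence additively in log-odds space."""
    log_odds = math.log(prior / (1 - prior)) + sum(log_likelihood_ratios)
    return 1 / (1 + math.exp(-log_odds))

# Ten weak pieces of evidence for the hypothesis and six weak pieces against it,
# with made-up likelihood ratios of 1.5:1 and 1:1.5 respectively.
for_h = [math.log(1.5)] * 10
against_h = [math.log(1 / 1.5)] * 6

full_case = posterior_from_log_odds(0.5, for_h + against_h)
filtered_case = posterior_from_log_odds(0.5, for_h)  # the contrary bits ignored or unseen

print(round(full_case, 3), round(filtered_case, 3))
# about 0.835 vs 0.983: quietly dropping the inconvenient little pieces
# changes the conclusion substantially.
```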
David Chapman criticizes "pop Bayesianism" as just common-sense rationality dressed up as intimidating math[1]:
What does Bayes's formula have to teach us about how to do epistemology, beyond obvious things like "never be absolutely certain; update your credences when you see new evidence"?
I list below some of the specific things that I learned from Bayesianism. Some of these are examples of mistakes I'd made that Bayesianism corrected. Others are things that I just hadn't thought about explicitly before encountering Bayesianism, but which now seem important to me.
I'm interested in hearing what other people here would put on their own lists of things Bayesianism taught them. (Different people would make different lists, depending on how they had already thought about epistemology when they first encountered "pop Bayesianism".)
I'm interested especially in those lessons that you think followed more-or-less directly from taking Bayesianism seriously as a normative epistemology (plus maybe the idea of making decisions based on expected utility). The LW memeplex contains many other valuable lessons (e.g., avoid the mind-projection fallacy, be mindful of inferential gaps, the MW interpretation of QM has a lot going for it, decision theory should take into account "logical causation", etc.). However, these seem further afield or more speculative than what I think of as "bare-bones Bayesianism".
So, without further ado, here are some things that Bayesianism taught me.
What items would you put on your list?
ETA: ChrisHallquist's post Bayesianism for Humans lists other "directly applicable corollaries to Bayesianism".
[1] See also Yvain's reaction to David Chapman's criticisms.
[2] ETA: My wording here is potentially misleading. See this comment thread.