Bayesianism, as it is presently formulated, concerns the evaluation of the probability of beliefs in light of some background information. In particular, probability theory says that, given a particular state of knowledge, there is exactly one probability that should be assigned to any given statement. A simple corollary is that if two agents with identical states of knowledge arrive at different probabilities for a particular belief, then at least one of them is irrational.
A thought experiment. Suppose I ask you for the probability that P=NP (a famous unsolved computer science problem). Sounds like a difficult problem, I know, but thankfully all relevant information has been provided for you --- namely the axioms of set theory! Now we know that either P=NP is provable from the axioms of set theory, or its negation is (or neither is provable, but let's ignore that case for now). The problem is that you are unlikely to solve the P=NP problem any time soon.
So being the pragmatic rationalist that you are, you poll the world's leading mathematicians, and do some research of your own into the P=NP problem and the history of difficult mathematical problems in general, to gain insight into which group of mathematicians may be more reliable and to what extent they may be over- or under-confident in their beliefs. After weighing all the evidence honestly and without bias you submit your carefully-considered probability estimate, feeling like a pretty good rationalist. So you didn't solve the P=NP problem, but how could you be expected to when it has eluded humanity's finest mathematicians for decades? The axioms of set theory may in principle be sufficient to solve the problem, but the structure of the proof is unknown to you, and herein lies information that would be useful indeed but is unavailable at present. You cannot be considered irrational for failing to reason from unavailable information, you say; rationality only commits you to using the information that is actually available to you, and you have done so. Very well.
The next day you are discussing probability theory with a friend, and you describe the one-in-a-million-illness problem: what is the probability that a patient has a particular illness, known to afflict only one in a million individuals, given that a diagnostic test with a known 1% false-positive rate has returned positive? Sure enough, your friend intuits that there is a high chance that the patient has the illness, and you proceed to explain why this is not actually the rational answer.
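For concreteness, here is the calculation behind that explanation, written out as a minimal sketch. (The problem as stated gives only the false-positive rate, so I am assuming the test is perfectly sensitive; the exact figure barely moves the answer.)

```python
# Bayes' theorem for the one-in-a-million-illness problem.
prior = 1e-6           # P(ill): one in a million
true_positive = 1.0    # P(positive | ill); assumed, not given in the problem
false_positive = 0.01  # P(positive | not ill): the 1% false-positive rate

p_positive = true_positive * prior + false_positive * (1 - prior)
posterior = true_positive * prior / p_positive

print(posterior)  # roughly 0.0001, i.e. about a 0.01% chance of illness
```

Despite the positive test, the posterior is only about one in ten thousand; the intuitive answer confuses P(positive | not ill) with P(not ill | positive).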
"Very well", your friend says, "I accept your explanation but I when I gave my previous assessment I was unaware of this line of reasoning. I understand the correct solution now and will update my probability assignment in light of this new evidence, but my previous answer was made in the absence of this information and was rational given my state of knowledge at that point."
"Wrong", you say, "no new information has been injected here, I have simply pointed out how to reason rationally. Two rational agents cannot take the same information and arrive at different probability assignments, and thinking clearly does not constitute new information. Your previous estimate was irrational, full stop."
By now you've probably guessed where I'm going with this. It seems reasonable to assign some probability to the P=NP problem in the absence of a solution to the mathematical problem, and in the future, if the problem is solved, it seems reasonable that a different probability would be assigned. The only way both assessments can be permitted as rational within Bayesianism is if the proof or disproof of P=NP can be considered evidence, and hence we understand that the two probability assignments are each rational in light of differing states of knowledge. But at what point does an insight become evidence? The one-in-a-million-illness problem also requires some insight in order to reach the rational conclusion, but I for one would not say that someone who produced the intuitive but incorrect answer to this problem was "acting rationally given their state of knowledge". No sir, I would say they failed to reach the rational conclusion, for if lack of insight is akin to lack of evidence then any probability could be "rationally" assigned to any statement by someone who could reasonably claim to be stupid enough. The more stupid the person, the more difficult it would be to claim that they were, in fact, irrational.
We can interpolate between the two extremes I have presented as examples, of course. I could give you a problem that requires you to marginalize over some continuous variable x, and with an appropriate choice for x I could make the integration very tricky, requiring serious math skills to arrive at the exact solution. But at what difficulty does it become rational to approximate, or do a meta-analysis?
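To make the interpolation concrete, here is a toy version of the kind of problem I mean, with a purely illustrative model (standard-normal prior, Gaussian likelihood, made-up data): the marginal probability of the data is an integral over x, and you can attack it with careful quadrature or with a quick-and-dirty Monte Carlo approximation.

```python
# Toy marginalization: p(data) = integral of p(data | x) * p(x) dx.
# The model and numbers are illustrative only.
import math
import random

def prior(x):          # p(x): standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def likelihood(x):     # p(data | x): unit-variance Gaussian, observed data = 1.3
    return math.exp(-(1.3 - x) ** 2 / 2) / math.sqrt(2 * math.pi)

# The "serious math skills" route: fine-grained numerical quadrature.
dx = 0.001
careful = sum(likelihood(i * dx) * prior(i * dx) * dx for i in range(-8000, 8000))

# The pragmatic route: a handful of Monte Carlo samples from the prior.
random.seed(0)
rough = sum(likelihood(random.gauss(0, 1)) for _ in range(100)) / 100

print(careful, rough)  # quadrature gives ~0.185; the crude estimate lands nearby
```

The point is not the numbers but the trade-off: both routes are available to you, and the question is which one rationality demands at a given level of difficulty.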
So, the question is: when, if ever, does an insight count as evidence?
Insight doesn't exactly "count as evidence". Rather, when you acquire insight, you improve the evidence-strength of the best algorithm available for assigning hypothesis probabilities.
Initially, your best hypothesis-weighting algorithms are "ask an expert" and "use intuition".
If I give you the insight to prove some conclusion mathematically, then the increased evidence comes from the fact that you can now use the "find a proof" algorithm. And that algorithm is more entangled with the problem structure than anything else you had before.