I think your problem is that you don't understand what issues are at stake, so you don't know what you're trying to find.
You said:
anything written by Popper which mentions the word "Bayes" or "Bayesian" at all. Or that discusses Bayes' equation.
But then when you found a well-known book by Popper which does have those words, and which does discuss Bayes' equation, you were not satisfied. You asked for something that wasn't actually what you wanted. That is not my fault.
You also said:
You are changing the subject. I never asked you to summarize Popper's solution to the problem of induction.
But you don't seem to understand that Popper's solution to the problem of induction is the same topic. You don't know what you're looking for. It wasn't a change of topic. (Hence I thought we should discuss this. But you refused. I'm not sure how you expect to make progress when you refuse to discuss the topic the other guy thinks is crucial to continuing.)
Bayesian updating, as a method of learning in general, is induction. It's trying to derive knowledge from data. Popper's criticisms of induction, in general, apply. And his solution solves the underlying problem, rendering Bayesian updating unnecessary even if it weren't wrong. (Of course, as usual, it's right when applied narrowly to certain mathematical problems. It's wrong when extended out of that context and used for other purposes, e.g. to try to solve the problem of induction.)
So, question: what do you think you're looking for? There is tons of material about probability in various Popper books, including chapter 8 of LScD, titled "Probability". There is tons of explanation about the problem of induction, and why support doesn't work, in various Popper books. Bayesian updating is a method of positively supporting theories; Popper criticized all such methods, and his criticisms apply. In what way is that not what you wanted? What do you want?
So, for example, I opened to a random page in that chapter and found, on p. 183, at the start of section 66, that the first sentence is:
Probability estimates are not falsifiable.
This is a criticism of the Bayesian approach as unscientific. It's not specifically about the Bayesian approach, in that it applies to various non-Bayesian probabilistic approaches as well (whatever those may be. Can you think of any other approaches besides Bayesian epistemology that you think this is targeted at? How would you do it without Bayes' theorem?). In any case, it is a criticism, and it applies straightforwardly to Bayesian epistemology. It's not the only criticism.
The point of this criticism is that even to begin the Bayesian updating process you need probability estimates, which are created unscientifically by making them up. (No, making up a "prior" which assigns all of them at once, in a way vague enough that you can't even use it in real life without "estimating" arbitrarily, doesn't mean you haven't just made them up.)
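For concreteness, the updating step at issue is just Bayes' theorem applied to a hypothesis H and evidence E. The following is a sketch of the standard formula, not anything quoted from Popper's text; the criticism above is that the prior on the right-hand side has to be supplied before any updating can begin:

```latex
% Bayes' theorem: posterior = likelihood x prior / evidence
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
% The criticism: the prior P(H) is not itself the output of any
% updating step -- it must be assigned before the process starts,
% and no scientific (falsifiable) method for assigning it is given.
```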
EDIT: read the first two footnotes in section 81 of LScD, plus section 81 itself. And note that the indexer did not miss this but included it...
How would you do it without Bayes' theorem?
(It's subject to limitations that do not constrain the Bayesian approach, and as near as I can tell, is mathematically equivalent to a non-informative Bayesian approach when it is applicable, but the author's justification for his procedure is wholly non-Bayesian.)
One of the best achievements of the LessWrong community is our high standard of discussion. More than anywhere else, people here actively try to interpret others charitably, argue to the point, avoid provocative or rude language, apologise for inadvertent offenses while not being overly prone to take offense themselves, and seek out their own biases and fallacies instead of seeking them in others - and, most importantly, try to find the truth instead of winning the argument. Maybe the greatest attribute of this approach is its infectiousness - I have observed several newcomers change their discussion habits for the better within a few weeks. However, not everybody is susceptible to the LW standards, and our attitude produces somewhat bizarre results when confronted with genuine trolls.
Recent posts about epistemology1 have all generated a large number of replies; in fact, the discussions were among the largest in the last few months. People commented there (yes, I too am guilty) even when it was clear that the author of the posts doesn't actually react to our arguments. After he was rude and admitted to doing it on purpose. After committing several fallacies, after generating an unreasonable amount of text of mediocre to low quality, after saying that he is neither trying to convince anyone, nor willing to learn anything, nor aiming for agreement. In short, perhaps all the symptoms of trolling were present, and still people repeatedly and patiently explained what's wrong with the author's position. That reaction is, I must admit, sort of amazing - but on the other hand, it is hard to deny that the whole discussion was detrimental to the quality of LW content and was mostly a waste of time.
So, here is the question: why didn't we apply the "don't feed the troll" meme, as would probably have happened much sooner on most forums? I have several hypotheses.
1. We are unable to recognise trolls for lack of training. This first hypothesis is quite improbable, given that the troll in question was downvoted to oblivion2, but still possible. There are not many trolls on LW, and perhaps it is difficult to believe that someone is actively seeking that sort of confrontation. I have never understood the psychology of trolls - I instinctively try to avoid combative arguments and find it hard to imagine why somebody would intentionally try to create one. Perhaps a manifestation of the typical mind fallacy combines with compartmentalisation here: although we consciously know that there are trolls out there (as this is hard to ignore), when we meet one our instinct tells us that the person cannot be so different from us.
2. We are unwilling to deal with trolls. The second theory is that although we know that a person isn't sincere, we cherish our standards of discussion so strongly that we still try to respond kindly and maintain a civil debate, or at least one side of the debate. If that is the case, it is not automatically a bad policy. Our rationality is limited, and we always operate under the threat of self-serving biases. A quasi-deontological rule of kindness in debates, even if it is overkill, may be useful in the same way the presumption of innocence is useful in justice.
3. Sunk costs. Once the debate has started, our initial investments feel binding. It is unsettling to quit an argument while admitting that it was completely useless and that we have lost an hour of our life for nothing. The sunk cost fallacy is well known and widespread; there is no reason to expect that we are immune.
4. Best rebuttal contest. An interesting fact is that not only was the number of replies fairly large, but a lot of the replies were also strongly upvoted. This leads me to suspect that those replies weren't in fact aimed at the opponent in the discussion, but rather intended to impress fellow LessWrongers. Once the motivation is not "I want to convince my interlocutor" but rather "I can craft an extraordinarily elegant counter-argument which hasn't appeared yet", the attitude of the opponent doesn't matter. The debate becomes an exercise in arguing - a potentially useful practice, maybe, but one with many associated dangers.
5. Trollish arguments are fun. I include this possibility mainly for completeness, since I don't much believe that a significant number of LW users enjoy pointless arguments. But still, there is something fascinating about fallacious arguments. They are frustrating to follow, for sure, especially for a rationalist, but I cannot entirely discount the appeal of seeing biases and fallacies in real life, as opposed to merely reading about them in a Kahneman and Tversky paper.
Whichever of the above hypotheses is correct, or even if none of them is, I don't doubt that on reflection most of us would prefer to have fewer irrational discussions. The karma system works, somewhat, but slowly, and it cannot prevent trollish discussions from gaining momentum if people continue their present voting patterns. One of the problems lies in upvoting the rebuttals, which gives people additional motivation to participate. There seem to be two main voting strategies: "I want to see more/less of this" and "this deserves more/less karma than it presently has". The first strategy seems marginally better for dealing with trolls, but both should work better when applied in context. Even a brilliant reply should not be upvoted when placed in an irrational debate: first, it is mostly a waste of resources, and moreover, we certainly want to see fewer irrational debates. I don't endorse downvoting good replies, if only because the troll could interpret that as support for his cause. But leaving them at zero seems to be the correct policy.
1 I am not going to link to them because I don't want to generate more traffic there; one of those posts already appears in fourth place when you Google lesswrong epistemology. Nor do I write down the precise topic or the name of the author explicitly, which I hope decreases the probability of his appearing here.
2 In fact, the downvoting, even if massive, came relatively late; the person in question was still able to post on the main site for several days.