Many so-called "logical fallacies" are correct Bayesian inferences.
I find this a very interesting claim, and I'm wondering whether anyone has applied it to a list of logical fallacies such as one might find in an Intro to Logic textbook.
I'm assuming one could get all of that from reading through the Sequences, but it seems to me a cheat-sheet-type document would be much more helpful.
Wikipedia has a list. Note that even the "informal" fallacies are often "so-called 'logical fallacies'".
The post Fallacies as weak Bayesian evidence has some good exposition on a few of them from a Bayesian perspective. There may be more under the fallacies tag.
There's also some discussion under Logical fallacy poster.
If you observe two pieces of evidence, you have to condition the second on having seen the first to avoid double-counting evidence.
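A minimal numeric sketch of that point, with a made-up two-witness example (the setup and numbers are mine, not from the comment):

```python
# Toy illustration: two witnesses report that the *same* coin flip came up
# heads. Naively multiplying both likelihood ratios double-counts the
# evidence; conditioning the second report on the first does not.

p_h = 0.5                  # prior P(H) that the coin is heads-biased
p_heads_given_h = 0.9      # P(flip = heads | biased)
p_heads_given_not_h = 0.5  # P(flip = heads | fair)

# Likelihood ratio from the first report (assume reports are accurate).
lr1 = p_heads_given_h / p_heads_given_not_h  # 1.8

# Second report, conditioned on the first: Bob saw the same flip Alice did,
# so P(E2 | E1, H) = P(E2 | E1, ~H) = 1 and the ratio is 1.
lr2_conditioned = 1.0

# Wrong: pretend the two reports are independent evidence about the flip.
lr2_naive = lr1

def posterior(prior, lr):
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

print(posterior(p_h, lr1 * lr2_conditioned))  # ~0.643 (correct)
print(posterior(p_h, lr1 * lr2_naive))        # ~0.764 (double-counted)
```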
The basic definition of evidence is more important than you might think. You need to start by asking what different models predict. Related: it is often easier to show how improbable the evidence is according to the scientific model than to get any numbers at all out of your alternative theory.
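A rough sketch of that definition, with hypothetical numbers: evidence is anything the competing models assign different probabilities to, so you only get a likelihood ratio once the alternative actually predicts something.

```python
# With only one model you can say the data are improbable, but you cannot
# say what they are evidence *for* until a second model assigns them a
# different probability. All numbers here are made up for illustration.

from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes in n trials with per-trial probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 100, 62          # observed: 62 heads in 100 flips (made-up data)

p_fair = binom_pmf(k, n, 0.5)    # prediction of the "fair coin" model
p_alt  = binom_pmf(k, n, 0.62)   # a *specific* alternative: p = 0.62

print(f"P(data | fair)   = {p_fair:.4g}")  # small: improbable under fair
print(f"P(data | p=0.62) = {p_alt:.4g}")
print(f"likelihood ratio = {p_alt / p_fair:.1f}")

# A vague alternative ("the coin is somehow not fair") assigns no number to
# the data, so it yields no likelihood ratio and no usable evidence.
```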
Absent hypotheses do not produce evidence. Often you need to already hold a hypothesis that favors a certain observation in order to even notice that observation as evidence (in some cases you can notice your confusion instead, but that is hard to do until it is right in your face). This is a source of a lot of misunderstandings (along with bad priors, of course). If you forget that other people can be tired, in pain, or in a hurry, it is really easy to interpret harshness as evidence for "they don't like me" (they could still be in a hurry and dislike you, but still...) and be done with it. After several instances of this you will be convinced enough that changing your mind becomes very difficult (confirmation-bias difficult), so the alternatives need to be present in your mind before you encounter the observation.
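A sketch of that failure mode using the comment's own example. The probabilities are invented, and the hypotheses are treated as mutually exclusive for simplicity (the comment rightly notes they aren't really):

```python
# If the "in a hurry" hypothesis never comes to mind, the same observation
# looks like much stronger evidence of dislike than it should.

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# P(observed harshness | hypothesis) -- made-up numbers
likelihood = {
    "dislikes me": 0.6,
    "likes me":    0.1,
    "in a hurry":  0.5,
}

# Hypothesis space with the alternative absent:
priors_missing = {"dislikes me": 0.3, "likes me": 0.7}
posterior_missing = normalize(
    {h: priors_missing[h] * likelihood[h] for h in priors_missing}
)
print(posterior_missing)   # "dislikes me" jumps to ~0.72

# Hypothesis space with "in a hurry" present before the observation:
priors_full = {"dislikes me": 0.2, "likes me": 0.5, "in a hurry": 0.3}
posterior_full = normalize(
    {h: priors_full[h] * likelihood[h] for h in priors_full}
)
print(posterior_full)      # "dislikes me" only rises to ~0.38
```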
Vague hypotheses ("what if we are wrong?") and negative ones ("what if he did not do this?") are not good at producing evidence either. To be useful they have to be precise, concrete, and positive (in some cases this is easy to check by visualisation: how hard is it to visualise, and is it even possible?).
Cromwell's rule: your prior probability for anything should never be exactly zero, otherwise it would stay zero in the face of any evidence.
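The arithmetic behind that, as a tiny self-contained sketch (illustrative numbers only):

```python
# Cromwell's rule in one line of arithmetic: a zero prior is immune to
# evidence, no matter how strong the likelihood ratio is.

def posterior(prior, likelihood_ratio):
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

print(posterior(0.01, 1000))   # ~0.910: a tiny-but-nonzero prior can recover
print(posterior(0.0, 1000))    # 0.0: a zero prior never moves
```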
The flip side is that some actually useful hypotheses are inaccessible on a fundamental level, so you can't ever be a True Bayesian. Sorry. This might map to epistemic humility.
What are the qualitative lessons we can learn about logic and reasoning from Bayesian epistemology, that is, from taking Bayes' rule as a mathematical model for thought (even if it is a simplified formalism that we often can't implement)?
I've seen at least a few of these from @Eliezer Yudkowsky, but I think they're scattered across many essays.
Some things I consider to be examples of what I'm gesturing at here:
Thanks!