This is simply wrong. Bayesian statistics is just Bayesian probability theory. As is Bayesian epistemology. Bayesian probabilities are epistemic probabilities.
But you'd have to be one really stupid correctional officer to get an order to disable the cameras around Epstein's cell the night he was murdered, and not know who killed him after he died.
I assume you mean "who ordered him killed."
Here's what a news report says happened:
A letter filed by Assistant US Attorneys Jason Swergold and Maurene Comey said "the footage contained on the preserved video was for the correct date and time, but captured a different tier than the one where Cell-1 was located", New York City media report.
"The requested video no longer exists on the backup system and has not since at least August 2019 as a result of technical errors."
Could be a lot more subtle than that. Just ask for the wrong video. A little mix-up in procedures. Maybe some operative clandestinely gets into the system and causes some "technical errors."
I'm not an expert on assassinations, and I suspect you aren't either. It seems to me that you're using the argument from lack of imagination -- "I can't think of a way to do it, therefore it can't be done." If, say, the CIA were behind Epstein and didn't want him to talk, is it unreasonable to suspect that they would know of many techniques to assassinate someone while covering their tracks that neither you nor I would have a clue about?
Note that I'm not claiming that there's a strong case that Epstein was assassinated, just that it's not so easy to rule out.
But when you do assert that basically the entire U.S. government has collaborated on murdering Epstein
Isn't this a straw man? If someone powerful wanted Epstein dead, how many people does that require, and how many of them even have to know why they're doing what they're doing? It seems to me that only one person -- the murderer -- absolutely has to be in on it. Other people could get orders that sound innocuous, or maybe just a little odd, without knowing the reasons behind them. And, of course, there are always versions of "Will no one rid me of this troublesome priest?" to ensure deniability.
The context is *all* applications of probability theory. Look, when I tell you that A or not A is a rule of classical propositional logic, we don't argue about the context or what assumptions we are relying on. That's just a universal rule of classical logic. Ditto with conditioning on all the information you have. That's just one of the rules of epistemic probability theory that *always* applies. The only time you are allowed to NOT condition on some piece of known information is if you would get the same answer whether or not you conditioned on it. When we leave known information Y out and say it is "irrelevant", what that means is that Pr(A | Y and X) = Pr(A | X), where X is the rest of the information we're using. If I can show that these probabilities are NOT the same, then I have proven that Y is, in fact, relevant.
You are simply assuming that what I've calculated is irrelevant. But the only way to know absolutely for sure whether it is irrelevant is to actually do the calculation! That is, if you have information X and Y, and you think Y is irrelevant to proposition A, the only way you can justify leaving out Y is if Pr(A | X and Y) = Pr(A | X). We often make informal arguments as to why this is so, but an actual calculation showing that, in fact, Pr(A | X and Y) != Pr(A | X) always trumps an informal argument that they should be equal.
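To make the criterion concrete, here is a toy calculation of my own (the joint distribution is hypothetical, chosen purely for illustration): an informal claim that Y is irrelevant to A can be checked by actually computing Pr(A | X and Y) and Pr(A | X) and seeing whether they agree.

```python
# Toy check of the relevance criterion: Y is irrelevant to A given X
# only if Pr(A | X and Y) == Pr(A | X). The joint distribution below is
# a made-up example, not anything from the discussion above.

# joint probabilities over binary propositions, keyed by (A, X, Y)
joint = {
    (1, 1, 1): 0.20, (1, 1, 0): 0.10,
    (1, 0, 1): 0.05, (1, 0, 0): 0.15,
    (0, 1, 1): 0.05, (0, 1, 0): 0.15,
    (0, 0, 1): 0.20, (0, 0, 0): 0.10,
}

def pr(pred):
    """Total probability of outcomes satisfying pred."""
    return sum(p for outcome, p in joint.items() if pred(outcome))

def cond(pred_a, pred_b):
    """Pr(pred_a | pred_b) = Pr(pred_a and pred_b) / Pr(pred_b)."""
    return pr(lambda o: pred_a(o) and pred_b(o)) / pr(pred_b)

p_a_given_x = cond(lambda o: o[0] == 1, lambda o: o[1] == 1)
p_a_given_xy = cond(lambda o: o[0] == 1, lambda o: o[1] == 1 and o[2] == 1)

print(p_a_given_x)   # Pr(A | X)       -> 0.6
print(p_a_given_xy)  # Pr(A | X and Y) -> 0.8
```

Since the two conditional probabilities differ, Y is relevant in this toy model, whatever an informal argument might have suggested. The same check applies to any distribution you can write down.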
Your "probability of guessing the correct card" presupposes some decision rule for choosing a particular card to guess. Given a particular decision rule, we could compute this probability, but it is something entirely different from "the probability that the card is a king". If that's just bad wording, and you're actually talking about the frequency of heads when some condition occurs, then you're doing frequentist probability -- and we were talking about *epistemic* probabilities.
But randomly awakening Beauty on only one day is a different scenario than waking her both days. A priori you can't just replace one with the other.
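A quick Monte Carlo sketch of my own (not from the thread) shows why the two protocols aren't interchangeable: the long-run fraction of awakenings that occur under Heads differs between waking Beauty on both days versus on one randomly chosen day.

```python
# Contrast two waking protocols for Sleeping Beauty (my own illustration):
#   both_days=True:  Heads -> woken once (Monday); Tails -> woken twice (Mon + Tue)
#   both_days=False: woken on exactly one randomly chosen day either way
# We tally what fraction of all awakenings happen under Heads.

import random

random.seed(0)
TRIALS = 100_000

def frac_heads_awakenings(both_days):
    heads_wakes = 0
    total_wakes = 0
    for _ in range(TRIALS):
        heads = random.random() < 0.5
        wakes = 1 if (heads or not both_days) else 2
        total_wakes += wakes
        if heads:
            heads_wakes += wakes  # wakes == 1 whenever the coin is Heads
    return heads_wakes / total_wakes

print(frac_heads_awakenings(both_days=True))   # ~ 1/3
print(frac_heads_awakenings(both_days=False))  # ~ 1/2
```

The two protocols produce different awakening statistics (roughly 1/3 vs 1/2 of awakenings under Heads), so whatever one concludes about Beauty's credence, substituting one protocol for the other needs justification.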
Yes, in exactly the same sense that *any* mathematical / logical model needs some justification of why it corresponds to the system or phenomenon under consideration. As I've mentioned before, though, if you are able to express your background knowledge in propositional form, then your probabilities are uniquely determined by that collection of propositional formulas. So this reduces to the usual modeling question in any application of logic -- does this set of propositional formulas appropriately express the relevant information I actually have available?
This is the first thing I've read from Scott Garrabrant, so "otherwise reputable" doesn't apply here. And I have frequently seen things written on LessWrong that display pretty significant misunderstandings of the philosophical basis of Bayesian probability, so my prior for seeing more of the same is high.
The fact that people tend to specialize in one or the other does not mean that "the two have little to do with each other." Likewise, there are physicists who spend a lot of time working in foundations and interpretation of QM, and others who spend their time applying it to solve problems in solid state physics, nuclear physics, etc. They're working on different kinds of problems, but it's absurd to say that the two have "little to do with each other."