I'm reading a paper called 'Reasonable Doubt and Presumption of Innocence: The Case of the Bayesian Juror' for a Physics/Policy course I'm taking, and am a bit confused by something in it. Note that I'm quite new to Bayesianism and don't claim to understand in its entirety how it all works.

The claim made is that in pure Bayesianism, all probabilities are subjective (a probability of *you*). As I had understood from initial readings on Bayesianism, it is supposed to be entirely objective (i.e. you look at the thing whose probability you want to determine, you look at the evidence you have available, and you thus determine the probability of the thing). As I understand it, this makes Bayesianism objective, at least within the scope of the Bayesian's knowledge.

Is my understanding wrong somewhere? Could some kind and enlightened souls please explain this to me?

[-]gjm

Here's how I see it.

Traditionally, probability was all about random events, things like rolling dice. The probability of a thing was what fraction of the time it would happen if you were somehow able to bring about the same random situation over and over again. Asking for (say) the probability that there is a god, or the probability that your wife has been unfaithful to you, was a type error like asking what colour happiness is.

But once you start thinking about conditional probability, you run into Bayes' theorem, which points out a sort of symmetry between "the probability of A, given B" and "the probability of B, given A" and encourages you to ask what happens if you attach probabilities to everything. And it turns out that you can kinda do this, and that any situation where you're reasoning about uncertain things can be treated using the methods of probability theory.

One way to apply that insight is to think about any given agent's state of knowledge in probabilistic terms. It turns out that for certain kinds of mathematically idealized agents in certain kinds of mathematically idealized situations, the One True Way to represent their incomplete knowledge of the world is in terms of probabilities. This gets you "subjective Bayesianism": you don't ask what is the probability of a given thing, you ask what is *my* probability of a given thing; different agents will have different probabilities because they start off with different priors and/or see different evidence afterwards.

But you can have "objective Bayesianism" in a few senses. Firstly, given your prior probabilities and conditional probabilities and your subsequent observations, the updates you do to get your posterior probabilities are dictated by the mathematics; you don't get to choose those. So, while anyone can attach any probability to anything, some combinations of probability assignments are just wrong. If you say you're 80% sure that some particular sort of god exists, and that you're 90% sure that this god would do an otherwise-improbable thing X, and then X doesn't happen, and you say you're still 80% sure about that god's existence -- why, then, something is amiss.
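To make that concrete, here's a minimal sketch of the update the mathematics forces in that example. The comment only says X is "otherwise-improbable", so the 0.05 figure below is my own stand-in number, not anything from the original:

```python
# Bayes' rule applied to the example above: P(god) = 0.8, P(X | god) = 0.9,
# and an assumed P(X | no god) = 0.05 (the original only says "otherwise-improbable").
p_god = 0.8
p_x_given_god = 0.9
p_x_given_no_god = 0.05  # assumption for illustration

# X did NOT happen, so we update on not-X.
p_notx_given_god = 1 - p_x_given_god        # 0.1
p_notx_given_no_god = 1 - p_x_given_no_god  # 0.95

p_notx = p_notx_given_god * p_god + p_notx_given_no_god * (1 - p_god)
p_god_given_notx = p_notx_given_god * p_god / p_notx

print(f"posterior P(god | not X) = {p_god_given_notx:.2f}")  # ~0.30, not 0.80
```

Staying at 80% after X fails to happen is off by a factor of more than two; that's the sense in which the update is dictated, not chosen.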

Secondly, in some contexts the relevant probabilities are known. When you're learning elementary probability theory in school, you get questions like "Joe rolls a fair die three times. The sum of the three rolls is 12. What's the probability that the first roll yielded an odd number?", and that question has a definite right answer and there's nothing subjective about it. (This changes if you apply it to the real world and e.g. have concerns that the die may not be perfectly fair or that Joe may have misreported the sum of the rolls. Then your priors for those things affect your answer.)
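One way to check that there's a definite right answer here is brute-force enumeration over the equally likely outcomes; a quick sketch:

```python
from itertools import product

# Enumerate all equally likely outcomes of three fair die rolls,
# keep those summing to 12, and count how many have an odd first roll.
rolls = [r for r in product(range(1, 7), repeat=3) if sum(r) == 12]
odd_first = [r for r in rolls if r[0] % 2 == 1]

print(len(odd_first), "/", len(rolls))  # 12 / 25
print(f"P(first roll odd | sum = 12) = {len(odd_first) / len(rolls):.2f}")  # 0.48
```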

Thirdly, you can ask not about your probabilities but those of some sort of hypothetical ideal agent. The resulting probability assignments will be objective to whatever extent your specification of that hypothetical agent is.

Fourthly (this is closely related to #2 and #3 above), in some cases you may consider that while you could choose your prior probabilities however you like, there's only one reasonable way to choose them. (E.g., you know that in some entirely alien language "glorp" and "spung" are the words for magnetic north and south, and someone hands you a bar magnet. What's the probability that this end is the glorp end? Gotta be 1/2, right?, because you have absolutely no information that would introduce an asymmetry between the two possible answers.)

The SEP (Stanford Encyclopedia of Philosophy) is quite good on this subject:

Subjective and Objective Bayesianism. Are there constraints on prior probabilities other than the probability laws? Consider a situation in which you are to draw a ball from an urn filled with red and black balls. Suppose you have no other information about the urn. What is the prior probability (before drawing a ball) that, given that a ball is drawn from the urn, the drawn ball will be black? The question divides Bayesians into two camps:
(a) Subjective Bayesians emphasize the relative lack of rational constraints on prior probabilities. In the urn example, they would allow that any prior probability between 0 and 1 might be rational (though some Subjective Bayesians (e.g., Jeffrey) would rule out the two extreme values, 0 and 1). The most extreme Subjective Bayesians (e.g., de Finetti) hold that the only rational constraint on prior probabilities is probabilistic coherence. Others (e.g., Jeffrey) classify themselves as subjectivists even though they allow for some relatively small number of additional rational constraints on prior probabilities. Since subjectivists can disagree about particular constraints, what unites them is that their constraints rule out very little. For Subjective Bayesians, our actual prior probability assignments are largely the result of non-rational factors—for example, our own unconstrained, free choice or evolution or socialization.
(b) Objective Bayesians (e.g., Jaynes and Rosenkrantz) emphasize the extent to which prior probabilities are rationally constrained. In the above example, they would hold that rationality requires assigning a prior probability of 1/2 to drawing a black ball from the urn. They would argue that any other probability would fail the following test: Since you have no information at all about which balls are red and which balls are black, you must choose prior probabilities that are invariant with a change in label (“red” or “black”). But the only prior probability assignment that is invariant in this way is the assignment of prior probability of 1/2 to each of the two possibilities (i.e., that the ball drawn is black or that it is red).
In the limit, an Objective Bayesian would hold that rational constraints uniquely determine prior probabilities in every circumstance. This would make the prior probabilities logical probabilities determinable purely a priori.
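(To spell out the invariance argument in that last example, in my own paraphrase: let p be the prior probability that the drawn ball is black. Relabelling swaps "black" and "red", turning p into 1 − p, so invariance under relabelling requires p = 1 − p, i.e. 2p = 1 and p = 1/2. Any other assignment changes when the labels change, which is exactly the failure the Objective Bayesian points at.)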

Under these definitions, Eliezer and LW in general fall under the Objective category. We tend to believe that two agents with the same knowledge should assign the same probability.

It seems different from gjm's (and my) explanations below, where "subjective Bayesianism" means applying probability to states of knowledge.

I think everyone agrees on the directions "more subjective" and "more objective", but they use the words "subjective"/"objective" to mean "more subjective/objective than me".

A very subjective position would be to believe that there are no "right" prior probabilities, and that it's okay to just pick any prior depending on personal choice. (i.e. Agents with the same knowledge can assign different probabilities)

A very objective position would be to believe that there are some probabilities that must be the same even for agents with different knowledge. For example they might say that you must assign probability 1/2 to a fair coin coming up heads, no matter what your state of knowledge is. (i.e. Agents with different knowledge must (sometimes) assign the same probabilities)

Jaynes and Yudkowsky are somewhere in between these two positions (i.e. agents with the same knowledge must assign the same probabilities, but the probability of any event can vary depending on your knowledge of it), so they get called "objective" by the maximally subjective folk, and "subjective" by the maximally objective folk.

The definitions in the SEP above would definitely put Jaynes and Yudkowsky in the objective camp, but there's a lot of room on the scale past the SEP definition of "objective".

Also, here's Eliezer on the subject: Probability is Subjectively Objective

Under his definitions he's subjective. But he would definitely say that agents with the same state of knowledge must assign the same probabilities, which rules him out of the very subjective camp.

What about the idea of "well-calibrated judgement", where I must be right 9 times out of 10 when I say something has 90 percent probability? Yudkowsky discussed it in the article "Cognitive biases in global risks", if I remember correctly.

In that case, have I assigned some probability distribution over my judgements, which could be about completely different external objects?

I'm not quite sure what you mean here, but I don't think the idea of calibration is directly related to the subjective/objective dichotomy. Both subjective and objective Bayesians could desire to be well calibrated.
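For what it's worth, calibration itself is easy to state operationally: group your past judgements by the probability you stated, and compare against how often you were right. A minimal sketch, with entirely made-up judgement data:

```python
# Checking calibration: group predictions by stated probability and
# compare to the observed frequency of correct outcomes.
# The data below is invented purely for illustration.
from collections import defaultdict

judgements = [  # (stated probability, whether the claim turned out true)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for p, outcome in judgements:
    buckets[p].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: correct {freq:.0%} of {len(outcomes)} judgements")
```

A subjectivist and an objectivist can both run this check on themselves; it constrains neither side's choice of priors.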

It's subjective for past events, and a mix of subjective and objective (depending on your model of determinism and randomness) for future events. In most cases, it's simplest, and gives the right answers, to treat it as always subjective.

Think of it as not "chance that something happened", but "my level of belief in what happened". What happened, happened - there's no probability there, no range of possibilities. But you don't have access to truth, only to your limited observations (including second-hand observations, which themselves are suspect). Bayes' rule is a way to incorporate a new observation into the range of possible truths which fit your previous observations.
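Here's a minimal sketch of that last point: the past event is fixed, and what moves is your belief as (possibly unreliable) reports come in. The witness-reliability number is an assumption chosen for illustration:

```python
# Updating belief about a PAST event from unreliable second-hand reports.
# Hypotheses: the coin landed heads or tails (it already happened).
# Assumption: each witness reports the true outcome with probability 0.8.
prior = {"heads": 0.5, "tails": 0.5}
reliability = 0.8

def update(belief, report):
    # P(report | hypothesis): reliability if they match, else 1 - reliability
    posterior = {
        h: p * (reliability if h == report else 1 - reliability)
        for h, p in belief.items()
    }
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

belief = prior
for report in ["heads", "heads", "tails"]:  # three second-hand reports
    belief = update(belief, report)

print(belief)  # heads ~0.8, tails ~0.2: the belief has a range, the event doesn't
```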

I am also confused that on LW subjective and objective Bayesian probability are not typically distinguished.

In my - maybe incorrect - understanding, subjective Bayesian probability is the probability distribution over all my beliefs, like "Typically I am right in 6 cases out of 10, so the a priori probability that any idea of mine is true is 0.6". This could be used in cases of logical uncertainty and other unmeasurable situations, like claims about the possibility of AI.

Objective Bayesian probability is about some real situation, like the problem "If you meet a person in glasses in a port city, what is more likely: that he is a librarian or a sailor?" It doesn't make any assumptions about my beliefs or bets, but uses straightforward calculation from the information provided in the example. In this case, the correct answer is "sailor", as there are many more sailors than librarians in a port city, and the number of sailors in glasses outweighs the number of librarians in glasses.
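A minimal sketch of that calculation; the example gives no numbers, so all the base rates below are made up purely to illustrate how the sailors' head count swamps the librarians' higher rate of glasses-wearing:

```python
# Base-rate calculation for the librarian-vs-sailor example.
# All numbers are hypothetical, chosen only to illustrate the point.
sailors = 10_000            # assumed number of sailors in the port city
librarians = 100            # assumed number of librarians
p_glasses_sailor = 0.2      # assumed fraction of sailors who wear glasses
p_glasses_librarian = 0.6   # assumed fraction of librarians who wear glasses

sailors_in_glasses = sailors * p_glasses_sailor            # 2000
librarians_in_glasses = librarians * p_glasses_librarian   # 60

total = sailors_in_glasses + librarians_in_glasses
print(f"P(sailor | glasses)    = {sailors_in_glasses / total:.3f}")     # ~0.971
print(f"P(librarian | glasses) = {librarians_in_glasses / total:.3f}")  # ~0.029
```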

For an ideally calibrated person, both probabilities should converge.

"If you meet a person in glasses in a port city, what is more likely: if he librarian or a sailor" is not a statement about a real situation but a question about an abstract situation a quite narrow set of information is known and a decision was made to model the situation in a certain way.

Further, there is someone doing this observing, who knows that they are seeing a librarian or a sailor. There is no "objective" unless you shove the observer outside the frame of reference so you can pretend to get objectivity.

[-]TAG

There's no evidence of any kind that doesn't require a subject to reason or observe. That should suggest that "no subjects involved" is too high a bar for objectivity, and that in order to have a non-empty set of objective facts, you need some other criterion, such as "multiple subjects who are out of communication are able to converge".

Perhaps, but I think that kind of use of the word "objective" only makes sense in a context where we can reclaim it from its normal meaning. I realize such a thing has happened within Bayesianism, but it causes significant confusion for the uninitiated reader.

[-]TAG

I think it has happened much more widely, and the "normal" meaning is a historical curiosity.