I was recently disturbed by my perception that, despite years of studying and debating probability problems, the LessWrong community as a whole has not markedly improved its ability to get the right answer on them.
I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy.
But can that possibly work? How can someone who isn't already highly accurate identify other people who are highly accurate?
Aumann's agreement theorem (allegedly) says that Bayesians with the same priors agree. But it doesn't say that doing so helps. Under what circumstances does revising your opinions, by updating in response to people you consider reliable, actually improve your accuracy?
To find out, I built a model of updating in response to the opinions of others. It did, eventually, show that Bayesians improve their collective opinions by updating in response to the opinions of other Bayesians. But this turns out not to depend on them satisfying the conditions of Aumann's theorem, or on doing Bayesian updating. It depends only on a very simple condition, established at the start of the simulation. Can you guess what it is?
I'll write another post describing and explaining the results if this post receives a karma score over 10.
That's getting a bit ahead of ourselves, though. This post models only non-Bayesians, and the results are very different.
Here's the model:
- There are G people in a group such as LessWrong.
- There are N problems being discussed simultaneously.
- Problems are binary problems, with an answer of either 1 or 0.
- Each person's opinion on each problem is always known to all people.
- Each person i has an accuracy: their probability p_i of getting an arbitrary problem correct on the first guess.
- g_ivt is the answer (1 or 0) that person i believes, at time t, is correct for problem v.
- p_ij is person i's estimate of the probability that an arbitrary belief of person j is correct.
- Without loss of generality, assume the correct answer to every problem is 1.
Algorithm:
# Loop over T timesteps
For t = 0 to T-1 {
    # Loop over G people
    For i = 0 to G-1 {
        # Loop over N problems
        For v = 0 to N-1 {
            If (t == 0) {
                # Special initialization for the first timestep
                If (random in [0..1] < p_i) g_ivt := 1; Else g_ivt := 0
            } Else {
                # Product over all j of the probability that the answer to v is 1,
                # given j's answer and estimated accuracy
                m1 := ∏_j [ p_ij*g_jv(t-1) + (1-p_ij)*(1-g_jv(t-1)) ]
                # Product over all j of the probability that the answer to v is 0,
                # given j's answer and estimated accuracy
                m0 := ∏_j [ p_ij*(1-g_jv(t-1)) + (1-p_ij)*g_jv(t-1) ]
                p1 := m1 / (m0 + m1)    # Normalize
                If (p1 > .5) g_ivt := 1; Else g_ivt := 0
            }
        }
        # Loop over all G people (including i itself)
        For j = 0 to G-1 {
            # Person i's estimate of person j's accuracy
            p_ij := { Σ_{s in [0..t]} Σ_{v in [s..N-1]} [ g_ivt*g_jvs + (1-g_ivt)*(1-g_jvs) ] } / N
        }
    }
}
p1 is the probability that agent i assigns to problem v having the answer 1. Each term p_ij*g_jv(t-1) + (1-p_ij)*(1-g_jv(t-1)) is the probability that problem v has answer 1, computed from agent j's belief: it is the probability that j is correct (if j believes the answer is 1) or the probability that j is wrong (if j believes the answer is 0). Agent i assumes that everyone's opinions are independent, and multiplies all these probabilities together. The result, m1, is very small when there are many agents (m1 is on the order of .5^G), so it is normalized by computing a similar product m0 for the probability that v has answer 0, and setting p1 = m1 / (m0 + m1).
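Here is a minimal Python sketch of that belief-update step (the function and variable names are mine, not the script's). For large G you would want to work with log-probabilities, since both products shrink roughly like .5^G.

```python
import numpy as np

def update_belief(g_prev, p_est_row):
    """One agent's new 0/1 answer on one problem, given everyone's previous
    answers g_prev (0/1 array over agents j) and that agent's accuracy
    estimates p_est_row (the p_ij over agents j)."""
    # Probability of answer 1, treating each agent j's opinion as independent evidence
    m1 = np.prod(p_est_row * g_prev + (1 - p_est_row) * (1 - g_prev))
    # Probability of answer 0
    m0 = np.prod(p_est_row * (1 - g_prev) + (1 - p_est_row) * g_prev)
    p1 = m1 / (m0 + m1)            # normalize so the two cases sum to 1
    return 1 if p1 > 0.5 else 0

# Tiny example: the updater trusts agents 0 and 1, and distrusts agent 2
print(update_belief(np.array([1, 1, 0]), np.array([0.9, 0.8, 0.4])))   # -> 1
```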
The sum of sums used to compute p_ij (i's opinion of j's accuracy) gives the fraction of problems, accumulated over all previous time periods, on which person j has agreed with person i's current opinions. It sums over previous time periods because otherwise p_ii would always be 1. By summing over previous times, if person i ever changes its mind, that decreases p_ii. (The inner sum starts from s instead of 0 to accommodate an addition to the model that I'll make later, in which the true answer to problem t is revealed at the end of time t. Problems whose answers are public knowledge should not be counted in the sum after the time they became public knowledge.)
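A matching sketch of the accuracy-estimate update, again with names of my own choosing. One assumption to flag: I divide by the number of (s, v) terms actually counted, so that p_ij stays in [0, 1]; the formula above writes the normalizer as N, which agrees with this at t = 0.

```python
import numpy as np

def estimate_accuracy(g, i, j, t):
    """Person i's estimate p_ij of person j's accuracy at time t, from the
    belief history g[time][person][problem] (0/1 values). Agreement is counted
    between i's current answers and j's answers at each step s up to t,
    skipping problems whose true answers were revealed before step s."""
    num_problems = g.shape[2]
    agreements, counted = 0.0, 0
    for s in range(t + 1):
        for v in range(s, num_problems):
            agreements += g[t][i][v] * g[s][j][v] + (1 - g[t][i][v]) * (1 - g[s][j][v])
            counted += 1
    return agreements / counted

# Tiny example with random 0/1 beliefs: 3 timesteps, 4 people, 10 problems
g = np.random.default_rng(0).integers(0, 2, size=(3, 4, 10))
print(estimate_accuracy(g, i=0, j=1, t=2))
```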
Now, what distribution should we use for the p_i?
There is an infinite supply of problems. Many are so simple that everyone gets them right; many are so hard or incomprehensible that everyone performs randomly on them; and many, such as the Monty Hall problem, are ones most people get wrong because of systematic bias in our thinking. The population's average performance p_ave over all possible problems can therefore fall anywhere in [0 .. 1].
I chose to model person accuracy instead of problem difficulty. I say "instead of" because you can use either person accuracy or problem difficulty to set p_ave. Since a critical part of what we're modeling is person i's estimate of person j's accuracy, person j should actually have an accuracy. I didn't model problem difficulty partly because I assume we only talk about problems of a particular level of difficulty, and partly because a person in this model can't distinguish between "Most people disagree with me on this problem; therefore it is difficult" and "Most people disagree with me on this problem; therefore I was wrong about this problem".
Because I assume we talk mainly about high-entropy problems, I set p_ave = .5. I do this by drawing each p_i from a normal distribution with a mean of .5 and a standard deviation of .15 (the exact value isn't important), truncated at .05 and .95.
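For concreteness, here is one way to draw the p_i; the post doesn't say how its script performs the truncation, so the rejection-sampling loop below is just my assumption.

```python
import numpy as np

def draw_accuracies(G, mean=0.5, sd=0.15, lo=0.05, hi=0.95, seed=None):
    """Draw one accuracy p_i per person from a normal(mean, sd) distribution,
    truncated to [lo, hi] by simple rejection sampling."""
    rng = np.random.default_rng(seed)
    p = np.empty(G)
    for k in range(G):
        x = rng.normal(mean, sd)
        while not (lo <= x <= hi):
            x = rng.normal(mean, sd)
        p[k] = x
    return p

print(draw_accuracies(5, seed=1))
```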
Because this distribution of p_i is symmetric around .5, there is no way to know whether you're living in the world where the right answer is always 1, or the world where the right answer is always 0. This means there's no way, under this model, for a person to know whether they're a crackpot (usually wrong) or a genius (usually right).
Note that these agents don't satisfy the preconditions for Aumann agreement, because they produce 0/1 decisions instead of probabilities, and because some agents are biased to perform worse than random. It's worth studying non-Bayesian agents before moving on to a model satisfying the preconditions for the theorem, if only because there are so many of them in the real world.
An important property of this model is that, if person i is highly accurate, and knows it, p_ii will approach 1, greatly reducing the chance that person i will change their mind about any problem. Thus, the more accurate a person becomes, the less able they are to change their mind when they are wrong - and this is not an error. It's a natural limit on the speed at which one can converge on truth.
An obvious problem is that at t=0, person i will see that it always agrees with itself, and set p_ii = 1. By induction, no one will ever change their mind. (I consider this evidence for the model, rather than against it.)
The question of how people ever change their minds is key to this whole study. I use one of these two additions to the model to let people change their minds (sketched in code after the list):
- At the end of each timestep t, the answer to problem number t becomes mutual knowledge to the entire group. (This solves the crackpot/genius problem.)
- Each person has a maximum allowable p_ij (including p_ii).
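Here is a sketch of how those two escape hatches might look in code, continuing the array layout from the earlier snippets (g indexed as [time][person][problem], p_est as the matrix of p_ij values). The post doesn't specify how the revealed answer feeds back into stored beliefs, nor what the cap should be, so both details below are my assumptions.

```python
import numpy as np

P_MAX = 0.95   # hypothetical cap on any p_ij; the post doesn't give a value

def apply_escape_hatches(p_est, g, t, reveal=True, cap=True):
    """Optional modifications applied at the end of timestep t."""
    if reveal:
        # The true answer to problem t (always 1 in this model) becomes public
        # knowledge: overwrite everyone's stored belief about that problem.
        g[t][:, t] = 1
    if cap:
        # No one may rate anyone's accuracy (including their own) above P_MAX.
        np.clip(p_est, None, P_MAX, out=p_est)
    return p_est, g

# Tiny example: 2 timesteps so far, 3 people, 4 problems
g = np.zeros((2, 3, 4), dtype=int)
p_est = np.full((3, 3), 0.99)
p_est, g = apply_escape_hatches(p_est, g, t=1)
```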
This model is difficult to solve analytically, so I wrote a Perl script to simulate it.
- What do you think will happen when I run the program, or its variants?
- What other variants would you like to see tested?
- Is there a fundamental problem with the model?
I've been following, but I'm still nonplussed as to your use of the type-token distinction in this context. The comment of mine which was the parent for your type-token observation had a specific request: show me the specific mistake in my math, rather than appealing to an informal, intuitive verbal explanation.
Take a bag with 1 red marble and 9 green marbles. There is a type "green marble" and it has 9 tokens. The experiences of drawing any particular green marble, while token-distinct, are type-identical. It seems that what matters when we compute our credence for the proposition "the next marble I draw will be green" is the tokens, not the types. When you formalize the bag problem accordingly, probability theory gives you answers that seem quite robust from a math point of view.
If you start out ignorant of how many marbles the bag has of each color, you can ask questions like "given that I just took two green marbles in a row, what is my credence in the proposition 'the next marble I draw will be green'". You can compute things like the expected number of green marbles left in the bag. In the bag problem, IOW, we are quantifying our uncertainty over tokens, while taking types to be a fixed feature of the situation. (Which of course is only a convention of this kind of exercise: with precise enough instruments we could distinguish all ten individual marbles.)
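For what it's worth, here is the sort of computation I mean, sketched in Python under one assumption I have to add for concreteness: a uniform prior over how many of the ten marbles are green.

```python
from fractions import Fraction

# Probability of drawing two greens in a row (without replacement)
# from a bag of 10 marbles of which n_green are green.
def p_two_greens(n_green):
    return Fraction(n_green, 10) * Fraction(max(n_green - 1, 0), 9)

prior = {n: Fraction(1, 11) for n in range(11)}          # uniform over 0..10 greens
evidence = sum(prior[n] * p_two_greens(n) for n in prior)
posterior = {n: prior[n] * p_two_greens(n) / evidence for n in prior}

# Credence that the next marble drawn is also green, and the expected
# number of green marbles left in the bag.
p_next_green = sum(posterior[n] * Fraction(n - 2, 8) for n in range(2, 11))
expected_green_left = sum(posterior[n] * (n - 2) for n in range(2, 11))
print(float(p_next_green), float(expected_green_left))
```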
Statements like "information is gained" or "information is lost" are vague and imprecise, with the consequence that a motivated interpretation of the problem statement will support whichever statement we happen to favor. The point of formalizing probability is precisely that we get to replace such vague statements with precisely quantifiable formalizations, which leave no wiggle room for interpretation.
If you have a formalism which shows, in that manner, why the answer to the Sleeping Beauty question is 1/2, I would love to see it: I have no attachment any longer to "my opinion" on the topic.
My questions to you, then, are: a) given your reasons for "worrying about types rather than tokens" in this situation, how do you formally quantify your uncertainty over various propositions, as I do in the spreadsheet I've linked to earlier? b) what justifies "worrying about types rather than tokens" in this situation, where every other discussion of probability "worries about tokens" in the sense I've outlined above in reference to the bag of marbles? c) how do you apply the type-token distinction in other problems, say, in the case of the Tuesday Boy?
My point was that I didn't think anything was wrong with your math. If you count tokens the answer you get is 1/3. If you count types the answer you get is 1/2 (did you need more math for that?). Similarly, you can design payouts where the right choice is 1/3 and payouts where the right choice is 1/2.