Information cascades
An information cascade is a problem in group rationality. Wikipedia has excellent introductions and links about the phenomenon, but here is a meta-ish example using likelihood ratios.

Suppose in some future version of this site, there are several well-known facts:

* All posts come in two kinds: high quality (insightful and relevant) and low quality (old ideas rehashed, long hypotheticals).
* There is a well-known prior: a 60% chance of any post being high quality rather than low quality. (We're doing well!)
* Readers get a private signal, either "high" or "low" (their personal judgement of quality), which is wrong 20% of the time.
* The number of up votes and down votes is displayed next to each post. (Note the difference from the present system, which displays only up minus down. This hypothesis makes the math easier.)
* Readers are competent in Bayesian statistics and strive to vote the true quality of the post.

Let's talk about how the very first reader would vote. If they judged the post high quality, they would multiply the prior odds (6:4) by the Bayes factor for a high private signal (4:1), get (6\*4 : 4\*1) = (6:1), and vote the post up. If they judged the post low quality, they would instead multiply by the Bayes factor for a low private signal (1:4), get (6\*1 : 4\*4) = (3:8), and vote the post down.

There were two scenarios for the first reader (private signal high or low). If we speculate that the first reader did in fact vote up, then there are two scenarios for the second reader:

1. Personal judgement high: (6:4)\*(4:1)\*(4:1) = (24:1), vote up.
2. Personal judgement low: (6:4)\*(1:4)\*(4:1) = (6:4), vote up against personal judgement.

Note that there are now two explanations for a post ending up two votes up. It could be that the second reader actually agreed, or it could be that the second reader was following the first reader and the prior against their personal judgement. That means that the second up vote carries no information: a third reader, whatever their private signal, is in exactly the same position as the second reader, and will also vote up. The same holds for every reader after that, so once the first vote is up, no later vote reflects any private information.
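The odds arithmetic above can be checked mechanically with exact fractions. This is a minimal sketch; the 6:4 prior and the 4:1 / 1:4 Bayes factors come straight from the setup, and `vote` is a hypothetical helper standing in for a reader's decision rule:

```python
from fractions import Fraction

# Odds from the setup: a 60% prior means odds 6:4; a private signal
# that is wrong 20% of the time means a Bayes factor of 80:20 = 4:1.
prior = Fraction(6, 4)
bf_high = Fraction(4, 1)  # Bayes factor for a "high" private signal
bf_low = Fraction(1, 4)   # Bayes factor for a "low" private signal

def vote(odds):
    """Vote up exactly when the posterior odds favor high quality."""
    return "up" if odds > 1 else "down"

# First reader: prior times their own private signal.
print(vote(prior * bf_high))            # (6:1)  -> up
print(vote(prior * bf_low))             # (3:8)  -> down

# Second reader, after inferring a "high" signal from the first up vote:
print(vote(prior * bf_high * bf_high))  # (24:1) -> up
print(vote(prior * bf_low * bf_high))   # (6:4)  -> up, against their own judgement
```

Because the second reader votes up under either private signal, their vote tells later readers nothing, which is what lets the cascade lock in.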
If humans are bad at mental arithmetic but good at, say, not dying, doesn't that suggest that, as a practical matter, humans should try to rephrase mathematical questions as questions about danger?
For example, imagine stepping into a field crisscrossed by dangerous laser beams in a pattern determined by the prime numbers, in order to reach something valuable. I think someone with a realistic fear of the laser beams, and a realistic understanding of the benefit of that valuable thing, would slow down and/or stop stepping into suspicious spots.
Quantifying is ONE technique, and it has been used very effectively in recent centuries, but those successes occurred inside a laboratory, factory, or automation structure, not in an individual-rationality context.