Swimmer963 comments on 2012 Survey Results - Less Wrong
My first thought about this is that people's rationality 'in real life' is largely determined by how likely they are to notice a Bayes question in an informal setting, where they may be tired and feeling mentally lazy. In Keith Stanovich's terms, rationality is mostly about the reflective mind: someone's capacity and habit of re-computing a problem's answer, using the algorithmic mind, rather than accepting the intuitive default answer that their autonomous mind spits out.
IQ tests tend to be formal; it's very obvious that you're being tested. They don't measure rationality in the sense that most LWers mean it: the ability to apply thinking techniques to real life in order to do better.
It might still be valuable to know how LWers do on a more formal test of probability-related knowledge; after all, most people in the general public don't know Bayes' theorem, so it'd be neat to see how good LW is at increasing "rationality literacy". But that's not the ultimate goal. There are reasons why you might want to measure a group's ability to pick out unexpected rationality-related problems and activate the correct mindware. If your Bayesian superpowers only activate when you're being formally tested, they're not all that useful as superpowers.
I can see why you'd criticize someone for saying "the problem is that the setting wasn't formal enough," but that's not exactly what I was getting at. What I was getting at is that there's a limit to how much thinking one can do in a day, everyone's limit is different, and a lot of people do things to ration their brainpower so they avoid running out of it. This comment on mental stamina explains more.
My point was, more clearly worded: It would be a very rare person who possesses enough mental stamina to be rational in literally every single situation. That's a wonderful ideal, but the reality is that most people are going to ration brainpower. If your expectation is that rationalists should never ration brainpower and should be rational constantly, this is an unrealistic expectation. A more realistic expectation is that people should identify the things they need to think extra hard about, and correctly use rational thinking skills at those times. Therefore, testing for the skills when they're trying is probably the only way to detect a difference. There are inevitably going to be times when they're not trying very hard, and if you catch them at one of those times, well, you're not going to see rational thinking skills. It may be that some of these things can be ingrained in ways that don't use up a person's mental stamina, but to expect that rationality can be learned in such a way that it is applied constantly strikes me as an unreasoned assumption.
Now I wonder if the entire difference between the control group's results and LessWrong's results was that Yvain asked the control group only one question, whereas LessWrong had answered 14 pages of questions prior to that.
Agreed that rationality is mentally tiring... I went back and read your comment, too. However:
To me, rationality is mostly the ability to notice that "whew, this is a problem that wasn't in the problem-set of the ancestral environment, therefore my intuitions probably won't be useful and I need to think". The only way a rationalist would have to be analytical all the time is if they were very BAD at doing this, and had to assume that every situation and problem required intense thought. Most situations don't. In order to be an efficient rationalist, you have to be able to notice which situations do.
Any question on a written test isn't a great measure of real-life rationality performance, but there are plenty of situations in everyday life when people have to make decisions based on some unknown quantities, and would benefit from being able to calibrate exactly how much they do know. Some people might answer better on the written test than if faced with a similar problem in real life, but I think it's unlikely that anyone would do worse on the test than in real life.
I don't think you could really apply any 'algorithmic' method to that question (other than looking it up, but that would be cheating). It was a test of how much confidence you put in your heuristics. (BTW, it seems that I've underestimated mine, or I've been lucky, since I got the date off by one year but estimated my confidence at 50% IIRC.) Still, it was a valuable test, since most of human reasoning is necessarily heuristic.
Really? What probability do you assign to that statement being true? :D
I'm under the impression that Bayes' theorem is included in the high school math programs of most developed countries, and I'm certain it is included in any science and engineering college program.
I assign about 80% probability to less than 25% of adults knowing Bayes' theorem and how to use it. I took physics and calculus and other such advanced courses in high school, and graduated never having heard of Bayes' theorem. I didn't learn about it in university, either; granted, I was in 'Statistics for Nursing', so it's possible that the 'Statistics for Engineering' syllabus included it.
Only 80%?
In the USA, about 30% of adults have a bachelor's degree or higher, and about 44% of those have done a degree where I can slightly conceive that they might possibly meet Bayes' theorem (those in the science & engineering and science- & engineering-related categories (includes economics), p. 3), i.e. as a very loose bound 13% of US adults may have met Bayes' theorem.
Even bumping the 30% up to the 56% who have "some college" and using the 44% as an estimate of the true ratio of possible-Bayes'-knowledge, that's only just 25% of the US adult population.
(I've no idea how this extends to the rest of the world, the US data was easiest to find.)
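The bound above is just two multiplications; spelled out with the figures quoted (30% with a bachelor's degree, 44% of those in science- & engineering-related fields, 56% with at least "some college"):

```python
# Loose upper bounds on the share of US adults who may have met
# Bayes' theorem, using the figures quoted above.
bachelors = 0.30     # adults with a bachelor's degree or higher
relevant = 0.44      # share of those degrees in S&E(-related) fields
some_college = 0.56  # adults with at least "some college"

print(round(bachelors * relevant, 2))     # 0.13 -> ~13% of adults
print(round(some_college * relevant, 2))  # 0.25 -> just 25% of adults
```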
You did your research and earned your confidence level. I didn't look anything up, just based an estimate on anecdotal evidence (the fact that I didn't learn it in school despite taking lots of sciences). Knowing what you just told me, I would update my confidence level a little; I'm probably 90% sure that less than 25% of adults know Bayes' theorem. (I should clarify that by 'adults' I mean adults living in the US, Canada, Britain, and other countries with similar school systems. The percentage for the whole world is likely significantly lower.)
I hear Britain's school system is much better than the US's.
Once you control for demographics, the US public school system actually performs relatively well.
Good point.
The UK high school system does not cover Bayes' theorem.
If you choose maths as one of your A-levels, there's a good chance you will cover stats 1 which includes the formula for Bayes' Theorem and how to apply it to calculate medical test false positives/false negatives (and equivalent problems). However it isn't named and the significance to science/rationality is not explained, so it's just seen as "one more formula to learn".
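The medical-test calculation mentioned above is a direct application of Bayes' theorem; a minimal sketch, with made-up numbers for prevalence and test accuracy:

```python
# Bayes' theorem applied to a medical test, with illustrative
# (made-up) numbers: 1% prevalence, 90% sensitivity, 9% false-positive rate.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(round(p, 3))  # ~0.092: most positive results are false positives
```

This is the classic "false positive" surprise the stats module teaches: even a fairly accurate test yields mostly false positives when the condition is rare.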
Offhand, 1/2 of young people do A-levels, 1/4 of those do maths, and 2/3 of those do stats, giving us 1/12 of young people. I don't think any of these numbers are off by enough to push the fraction over 25%.
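The back-of-envelope product above, using exact fractions:

```python
from fractions import Fraction

# Rough shares quoted above: 1/2 do A-levels, 1/4 of those do maths,
# and 2/3 of those do the stats module.
share = Fraction(1, 2) * Fraction(1, 4) * Fraction(2, 3)
print(share)                # 1/12, about 8% of young people
print(float(share) < 0.25)  # True: well under the 25% threshold
```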
Maybe you guys could solve that problem by publishing some results demonstrating its extreme significance.
As far as I know, it's been formally demonstrated to be the mathematically optimal method for achieving maximal hypothesis accuracy in an environment with obscured, limited, or unreliable information.
That's basically saying: "There is no possible way to do better than this using mathematics, and as far as we know there doesn't yet exist anything more powerful than mathematics."
What more could you want? A theorem proving that any optimal decision theory must necessarily use Bayesian updating? ETA: It has been pointed out that there already exists such a theorem. I could've found that out by looking it up. Oops.
It's not great by international standards, but I have heard that the US system is particularly bad for an advanced country.
In terms of outcomes, the US does pretty terribly when considered as one country, but when split into several countries it appears at the top of each class. Really, the EU is cheating by considering itself multiple countries.
The EU arguably is more heterogeneous than the US. But then, India is even more so.
How's it being split?
I actually thought someone would dig up and provide the relevant link by now. I'll have to find it.
You mean comparing poorer states to poorer countries?
Actually it is quite good (even for an "advanced country") if you compare the test scores of, say, Swedes and Swedish-Americans rather than Swedes and Americans as a whole.
I wonder what that's controlling for? Cultural tendencies to have different levels of work ethic?
Hmmm. So it's "good" but people with the wrong genes are spoiling the average somehow.
Must be a problem of the American school system, I suppose.
Did they teach you about conditional probability? Usually Bayes' theorem is introduced right after the definition of conditional probability.
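For reference, Bayes' theorem falls out of the definition of conditional probability in one line, which is why it is usually taught immediately afterwards:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
P(B \mid A) = \frac{P(A \cap B)}{P(A)}
\;\Rightarrow\;
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```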
Well, it's certainly not included in the US high school curriculum.
There are national and international surveys of quantitative literacy in adults. The U.S. does reasonably well in these, but in general the level of knowledge is appalling to math teachers. See this pdf (page 118 of the pdf; the in-text page number is "Section III, 93") for the quantitative literacy questions, and the percentage of the general population attaining each level of skill. Less than a fifth of the population can handle basic arithmetic operations to perform tasks like this:
People who haven't learned and retained basic arithmetic are not going to have a grasp of Bayes' theorem.
I'm pretty sure Ireland doesn't have it on our curriculum, not sure how typical we are.
It was in my high school curriculum (in Italy, in the mid-2000s), but the teacher spent probably only 5 minutes on it, so I would be surprised if a nontrivial number of my classmates remember it from there, unless they also heard of it somewhere else. IIRC it was also briefly mentioned in the probability and statistics part of my "introduction to physics" course in my first year of university, but that's it. I wouldn't be surprised if more than 50% of physics graduates remember hardly anything about it other than its name.