(Hello, all! This is pretty much my first post on here, excluding "did the survey". Looking forward to some interesting discussions.)
It seems to me that the examples (i.e. "subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal") were presented here in a context conducive to thinking about them rationally. I rather doubt they were presented similarly to the test subjects. Here are two different ways of presenting this information:
METHOD 1: Disease A kills 1,286 people out of every 10,000. Disease B is 24.14% likely to be fatal. All other things being equal, which is more dangerous?
METHOD 2: Emma contracted bloxy pox at 8 years of age. 10,000 children contract bloxy pox every year. Emma was one of the 1,286 who did not survive this fatal disease. She died after a painful 3-month struggle. But bloxy pox is not the only fatal childhood disease: Crompularia is 24.14% likely to be fatal... etc., etc.... How would you feel if your child contracted bloxy pox? What would you do? Do you think we should put money toward curing childhood illnesses? Which disease do you think is more dangerous?
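(For what it's worth, a straight comparison of the two figures: 1,286 out of 10,000 is 12.86%, only about half of 24.14%. A minimal sketch of that conversion, using just the numbers from the example above:)

```python
# Straight comparison of the two fatality figures from the example above.
deaths_per_10000 = 1286                              # natural-frequency format: "1,286 out of every 10,000"
natural_frequency_rate = deaths_per_10000 / 10000    # 0.1286, i.e. 12.86%

percentage_rate = 0.2414                             # percentage format: "24.14% likely to be fatal"

print(f"Natural-frequency disease: {natural_frequency_rate:.2%} fatal")
print(f"Percentage-format disease: {percentage_rate:.2%} fatal")
# The percentage-format disease is roughly twice as deadly, yet subjects
# reportedly judged the natural-frequency one as more dangerous.
```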
It seems that because the article presents the information to US in a rational context (and we therefore think about it rationally), we tend to assume that the experiments were presented similarly, such as in Method 1 above. This leads to many comments on the sequence along the lines of "People Are Stupid!".
I would assume that, instead, the information was presented in a manner that put people into a more intuitive mindset. I haven't actually read the source material, though, so if you have, please let me know whether I am correct in this assumption.
I think the point these articles are trying to make is not that people CAN'T do probability calculations (which many commenters seemed to joke about), but rather that we, as humans, DON'T do a lot of probability calculations in our default/intuitive state. I feel this is an important distinction, and one which many commenters seemed to ignore.
Actually, our probability calculations might be substantially ignored when we make our decisions.
Participants (health professionals and consumers) understood natural frequencies better than percentages...In studies of alternative formats for presenting risk reductions of interventions, and compared with [Absolute Risk Reduction], [Relative Risk Reduction] had little or no difference in understanding but was perceived to be larger and more persuasive
People got similar understanding from those formats, but chose differently.
...Compared with [Number Needed to Treat], RRR was better understood...was perceived to be larger and was more persuasive
Here people were sensible, at least.
Compared with NNT, ARR was better understood...was perceived to be larger...There was little or no difference for persuasiveness.
People were equally persuaded by those formats, but differed in their understanding of what was going on.
Overall there were no differences between health professionals and consumers.
Unfortunate.
http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD006776.pub2/abstract
Spin it here!
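(For anyone who wants to see how the formats quoted above relate to each other: ARR, RRR, and NNT are all derived from the same two numbers, the event rate in the control group and the event rate in the treated group. A minimal sketch with made-up example rates, not figures from the review:)

```python
# How the three risk-reduction formats relate: same underlying data, different numbers.
# The event rates below are made up for illustration; they are not from the review.

control_event_rate = 0.20   # 20% of untreated people have the bad outcome
treated_event_rate = 0.15   # 15% of treated people have the bad outcome

arr = control_event_rate - treated_event_rate   # Absolute Risk Reduction: 0.05
rrr = arr / control_event_rate                  # Relative Risk Reduction: 0.25
nnt = 1 / arr                                   # Number Needed to Treat: 20

print(f"ARR: {arr:.0%}  (5 fewer bad outcomes per 100 people treated)")
print(f"RRR: {rrr:.0%} (risk cut by a quarter -- sounds much bigger)")
print(f"NNT: {nnt:.0f}   (treat 20 people to prevent one bad outcome)")
```

All three lines describe the same hypothetical trial, which is presumably why "a 25% relative reduction" is more persuasive than "5 percentage points" or "treat 20 people to help one".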
Today's post, The Affect Heuristic, was originally published on 27 November 2007. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Purpose and Pragmatism, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.