"Selling mosquito nets cheaply in malaria-afflited areas is more effective than giving them away, because people who pay for them will value them more and be more likely to use them."
Plausible, but false. I got it from this New Yorker article about randomized trials of social policy, which might have other good nuggets for you as well. (The economist doing the studies annoyed some people by showing evidence that microloans don't work, or something like that--I read it in the hard copy a while ago, and the online version is behind a subscriber wall.)
This is the best I've seen in this thread. More like this please!
NB: I like it not because of which turns out to be true - I doubt we're ready to take a confident position on that - but because I can easily rationalize either one as "obvious".
I remembered seeing a list like that in my Psychology 101 textbook, so I tracked it down. This is from Psychology: Eighth Edition in Modules by David G. Myers, p. 18; they're presented as a true/false quiz.
Answers: Nppbeqvat gb gur nhgube, gur bqq-ahzorerq fgngrzragf unir orra ershgrq, naq gur rira-ahzorerq barf unir orra pbasvezrq.
Sorry, but many of those statements are simply vague ('much the same' - does driving count? 'often'?) and/or overgeneralised. Or non-informative, e.g. the statement that <50% of abused children become abusive adults. It's much more interesting to compare that against the baseline: in a society where >50% of adults are abusive, you can expect most abused children to become abusive adults. The infant mirror self-recognition result is extremely variable; see this: http://en.wikipedia.org/wiki/Rouge_test
Speaking of which, something interesting: a vague statement is generally unsurprising and/or predictable, as is the negation of the vague statement, not because of some cognitive bias, but because of vagueness.
I think a good one from evopsych (I got it from Robert Wright's The Moral Animal and maybe also here) is:
"Poorer women try to have children until they get at least one boy, while wealthier women try to have at least one girl." (Truth: it's the opposite -- status helps the reproductive chances of men more than women.)
I like it because it's easier for someone just encountering it to rationalize the false result on the grounds that "Poorer families need hard workers, so they need boys."
Venting when angry leads to less happiness after the initial catharsis.
In romantic relationships, similar personalities attract, rather than opposites.
The blog Barking up the Wrong Tree also collects psychology experiments of this kind. The posts are often based on single studies, so I don't know how well-established they are, but it's often possible to reverse the result and invent a plausible explanation. Some examples:
Spending money on other people makes you happier than spending it on yourself
Heavy TV-watchers are less happy when given more channel options (which itself is an example of the general finding that having a large number of choices reduces happiness)
Chronically ill patients may be less happy if they hope for a cure
I don't think these are exactly what Eliezer is looking for - these are statements that go against our natural inclinations, whereas I think Eliezer is looking for things that are (as someone said) "obviously correct in both directions", i.e. stuff we could rationalize as true either way upon encountering it.
I know what you mean, and I worried about that when I posted those examples. The problem is that I can't tell if I'm suffering from the hindsight bias when I'm trying to evaluate "Could I believe both this statement and its inverse, regardless of which one was presented as the truth?" In these cases, I can come up with fake rationalisations for both (even though one is more counter-intuitive), which makes me think that they might be invertible. They would need to be tested on people in experiments like the ones in the article by Myers.
This article discusses inventions that we don't realize are very recent. So the time they were invented may be an invertible fact to many people. The article is on Cracked.com so the claims should be verified carefully (although sometimes I feel that they do more fact checking than much of the official media). Highlights include the doorknob (invented in 1878 - I would have guessed sometime in the last thousand years, but certainly not that recently). One on the list, time zones, struck me as very obviously recent, but that may be because I've read a bit about the history of railroads. I also don't think the example of "nationalism" is quite correct, or at least, they are drastically oversimplifying in that entry. But the overall thrust might do what you want.
The findings on the efficacy of medicine that Robin Hanson often mentions sound like a good example. Most people would guess that the conclusions of the RAND study are obviously correct if presented in inverted form.
But more importantly, if you have such difficulty finding good examples, could this perhaps be an indication that your claim is more shaky than you'd like it to be?
One of the worst tendencies I notice in my own argumentation when I get carried away is precisely the sort of error you might be making here. Namely, sometimes I state a general claim that wouldn't really hold under scrutiny, and then think long and hard until I find an example that fits especially well to support it. This happens even though, in the process, I think through a bunch of situations that represent at least partial counterexamples to my claim, but I never mention those.
In this case, the fact that you have such difficulty finding examples where hindsight devalues social science findings suggests that this doesn't happen as often as you'd like to imply.
Most people would guess that the conclusions of the RAND study are obviously correct if presented in inverted form.
But isn't the aim here to find claims that are obviously correct in both directions?
if you have such difficulty finding good examples, could this perhaps be an indication that your claim is more shaky than you'd like it to be?
Could be, though my first thought on the difficulty of finding suggestions was "I guess the brain doesn't index its knowledge on 'is_invertible_fact'." And of course, hindsight bias interferes with the search process - the facts one knows seem correct and well-supported.
I thought Eliezer was looking for statements which would make his readers experience hindsight bias upon reading them, which is more of a challenge than simply finding examples of hindsight bias.
Eliezer--
Can you give us any information about when or how the book is going to be published?
The book will be written before he feels his work is done on FAI.
You know it's true because intuitively you need a non-work outlet for creativity and flow, and it makes sense to write about rationality in an entertaining way and to write a character (Harry) which he can take as an inspiration.
Of course, you also know it's false because the man prioritizes and FAI matters far more.
... I find a lot of rationalizations are like that. One of the most useful quick-n-dirty rationality heuristics LessWrong has given me is to 'consider the opposite'
Here's one a friend presented to me as "scientist proves the bleeding obvious". Who would you guess has more trouble reading the emotions of others - lower-class or upper-class people? Spoiler in URL.
Eliezer writes,
Note also that I have a general policy of keeping anything related to religion out of the rationality book - that there be no mention of it whatsoever.
I just wanted to say that I think this is an excellent decision. I wish there were more material that challenges the practice of mental compartmentalization yet avoids triggering emotional bias.
How about just about any evolutionary claim? While evolution itself is as real a phenomenon as anyone has ever discovered, there seems to be a lot of dubious rationality, especially hindsight bias, about applying evolutionary arguments, especially in evolutionary psychology.
For example, this article about wasps is begging to be fact-inverted.
If you throw a baseball into the air, it will take longer to reach its maximum height than it will to fall the same distance back toward the ground.
Or will it? ;)
If it's thrown straight up, I see a simple argument that it will take longer to fall than to rise, but none showing the reverse. A driven golf ball would be a more ambiguous problem: the same argument as for the baseball applies, but there is the extra effect of the backspin operating in the opposite direction.
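For the straight-up case, here's a minimal numerical sketch (my own, using assumed values for launch speed and drag constant rather than real baseball measurements) that checks the direction of the asymmetry under quadratic air drag:

```python
# A rough numerical check (a sketch, not from the thread): ball thrown
# straight up with quadratic air drag. All parameter values below are
# illustrative assumptions, not measurements of a real baseball.

g = 9.81      # gravitational acceleration, m/s^2
k = 0.01      # drag constant (drag deceleration / speed^2), 1/m -- assumed
v0 = 30.0     # initial upward speed, m/s -- assumed
dt = 1e-5     # integration time step, s

# Ascent: gravity and drag both act downward, so deceleration is g + k*v^2.
t, v, y = 0.0, v0, 0.0
while v > 0.0:
    v -= (g + k * v * v) * dt
    y += v * dt
    t += dt
t_rise, y_max = t, y

# Descent from rest at the apex: drag now opposes the downward motion,
# so the net downward acceleration is g - k*v^2 (v = downward speed).
t, v, y = 0.0, 0.0, y_max
while y > 0.0:
    v += (g - k * v * v) * dt
    y -= v * dt
    t += dt
t_fall = t

print(f"rise time: {t_rise:.3f} s, fall time: {t_fall:.3f} s")
# With any nonzero drag the ball moves more slowly on the way down than it
# did at the same heights on the way up, so t_fall comes out larger than t_rise.
```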
Baduk is the Korean version of the board game Go. Expert players of it have below-average IQs on average.
"However, Baduk experts demonstrated slightly lower IQs than did controls (mean ± SD, 93.19 ± 10.42 for Baduk experts, 101.21 ± 13.11 for controls; t = -2.016, df = 32.910, p = 0.052) as estimated by the Korean version of the Wechsler Adult Intelligence Scale (K-WAIS) (Kim and Lee, 1995), and a significant difference appeared between the two groups for level of education"
Gwern said the same is true of chess but I don't have a source for that.
My first thought was, "Just use anything from the experimental literature on the Coase Theorem," but then I realized most people wouldn't be able to easily rationalize Coasean results. However, if you think otherwise, the divorce literature is interesting. Perhaps not clear-cut enough.
"Facts about happiness" really is a good idea, here are some suggestions off the top of my head:
"Money doesn't buy happiness" is true even across countries with huge differences in wealth and income. (False. Rationalization: Simple folk wisdom, right? Why wouldn't other people around the globe have come to the same conclusion?)
More direct participation in democracy has a negative effect on happiness; more representational institutions increase happiness. (False. Easily rationalized: "Representational democracy woo!" I figure most of your audience will be American. More seriously, "Less stress: you have to worry about technical policies less often and vote less often, but your voice is still heard." )
Women are, in general, happier than men. This effect appears to last over time and cross-country. (False. Could be rationalized as, "women work less, have better social lives, are less materialistic, or female modes of thinking manage stress better.")
Unfortunately, these are contentious examples, and you may want something with more rigorous empirics to make your point. But then, you shouldn't go near happiness research to begin with.
Could you please clarify what you mean by "inverted". Even a physical object could be "inverted" in more than one way. It could be turned upside down, or it could be reversed into a mirror image of itself, if concave it could be made convex, and so on.
By an "inversion" of a true statement, I take it that you mean a false statement which states what one might call the opposite of the true statement, contradicting the true statement. For example, if dogs live longer than cats, than an inversion of this claim would be the claim that cats live longer than dogs. One of these is true and the other false (I don't know which one) and and they state what one might call the opposite of each other, contradicting each other.
But in one comment someone has a very different interpretation:
"But isn't the aim here to find claims that are obviously correct in both directions?"
So, could you clarify?
Your interpretation of "inversion" is correct. The aim is to find statements such that if told P, you say "yes, that's what I would have expected", but that you have the same reaction if instead you're told not-P. See the linked post "Hindsight devalues science" for why such statements exist and why he's looking for examples of them.
http://lesswrong.com/r/discussion/lw/ens/why_humans_are_sometimes_less_rational_than/
(nominated by sixes_and_sevens)
Note also that I have a general policy of keeping anything related to religion out of the rationality book...
This is off-topic but very interesting. Anything related to religion is a pretty broad category, I wonder why.
What Vladimir said, plus it keeps atheists from smugly thinking that rationality is primarily about not being religious.
Because it keeps religious people from perceiving the book as attacking their beliefs, thus stopping this particular bad reason from making it less persuasive than it is.
Interesting, a Trojan horse for religious memes? Superficially it seems like it doesn't harm their meme, but its inevitable conclusion annuls it from behind. And it subliminally supports a much stronger meme, namely AI going FOOM. Sneaky!
Using properties of the bottom line to evaluate validity of arguments is generally an anti-epistemic practice, avoid it in your thinking.
Subjects who know the actual answer to a question assign much lower probabilities they "would have" guessed for that answer, compared to subjects who must guess without knowing the answer.
( :p )
Seriously though, maybe something from physics or math?
EDIT: Or something about sleep. Or some anti akrasia method. Or the schedule of some technological progression...
Having started off like this already, let's just make this the post where people can comment rambling lists of random subjects for inspiration?
EDIT2: For a specific math one that seems easy, maybe a logarithmic curve where the value further along has to be guessed? Might backfire if the reader is too good at math though.
(Also, I just realized where I am and who made the request. Eeep! don't downvote me into oblivion please...)
If you want to remember something for the long haul, you should study it in many different places, times, and environments. This goes pretty strongly against the normal advice about studying in one quiet room.
For reference, this article discusses that and also some related concepts. Another one which might work for EY is that periodic testing helps with information retention (apparently true, but might not seem to be if one equates "being tested" with "cramming instead of learning").
Question: should I be avoiding potentially politically charged suggestions?
(i.e., my first thought is to look up current stats on gun control vs violent crime and such, and if the numbers (whichever way they go) seem to be strong enough that the issue is effectively decided, just use that? But politics is the mind killer and all that, so I'm hesitant to suggest this one)
Not an answer, but I would think that one could find examples among ill considered government regulations which have the opposite of the intended effect. I'll let you know if any concrete examples occur to me.
The difficulty here in my mind is with satisfying the requirement "experimentally verified." To this end Armok GoB's suggestion of looking to physics or math sounds good.
In the spirit of unintended consequences: a day care in Israel charged a fine to parents who were late picking up their children, with the unintended result that more parents were late picking up their kids, because they treated the fine as a "fee."
ill considered government regulations
That can easily be as bad as religious examples. Many people are really invested in their favorite government policy.
My suggestion would be to sample history or, to some degree, oddities of foreign cultures.
That can easily be as bad as religious examples. Many people are really invested in their favorite government policy.
Or, for that matter, invested in the idea that government regulation will never outperform nonregulation.
It's about a happiness study, but it may be one of the better examples:
The gist is that people remember the end of an uncomfortable experience more than the time in between, and map that feeling of a "happy end" onto the whole experience in retrospect. In one study, two groups of subjects got a colonoscopy. Both groups were in considerable discomfort for equal amounts of time; at that point group 1's colonoscopy was terminated, while for group 2 the instrument wasn't removed immediately, causing additional discomfort that was considerably milder than during the treatment itself, but obviously still more than in the group where it was removed entirely. When both groups were asked to evaluate their experience afterwards, the group that suffered discomfort for longer (but whose discomfort was toned down somewhat at the end) didn't think the experience was as bad as the group that suffered less on all accounts.
Unfortunately it may be a somewhat known finding by now, considering that you can find it in a TED-talk: http://www.ted.com/talks/lang/eng/daniel_kahneman_the_riddle_of_experience_vs_memory.html
Another problem with taking the study as an example may be that it feels almost too much like a set trap. If we didn't know the results, upon hearing the study setup we would think: "Well duh, of course the patients with less overall discomfort will report that they had a better experience." But that seems to be such an obviously and intuitively trivial conclusion that I'd get suspicious immediately if I encountered that example.
EDIT: By the way, are you planning to include a rough outline of evolutionary theory or include evolution in some other fashion Eliezer?
Given how commonly discussed that experiment is, anyone who is reading Eliezer's book is likely to be vaguely aware of it.
I'm writing the section of the rationality book dealing with hindsight bias, and I'd like to write my own, less racially charged and less America-specific, version of the Hindsight Devalues Science example - in the original, facts like "Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)", which is actually an inverted version of the truth but still sounds plausible enough that people will try to explain it even though it's wrong.
I'm looking for facts that are experimentally verified and invertible, i.e., I can give five examples that are the opposite of the usual results without people catching on.
Divia (today's writing assistant) has suggested facts about marriage and facts about happiness as possible sources of examples, but neither of us can think of a good set of facts offhand and Googling didn't help me much. Five related facts would be nice, but failing that I'll just take five facts. My own brain just seems to be very bad at answering this kind of query for some reason; I literally can't think of five things I know.
(Note also that I have a general policy of keeping anything related to religion out of the rationality book - that there be no mention of it whatsoever.)