Ever since Tversky and Kahneman started to gather evidence purporting to show that humans suffer from a large number of cognitive biases, other psychologists and philosophers have criticized these findings. For instance, philosopher L. J. Cohen argued in the 80's that there was something conceptually incoherent about the notion that most adults are irrational (with respect to a certain problem). By some sort of Wittgensteinian logic, he thought that the majority's way of reasoning is by definition right. (Not a high point in the history of analytic philosophy, in my view.) See chapter 8 of this book (where Gigerenzer, below, is also discussed).

Another attempt to resurrect human rationality is due to Gerd Gigerenzer and other psychologists. They have a) shown that if you tweak the heuristics and biases experiments (i.e. the research program led by Tversky and Kahneman) only a little - for instance by expressing probabilities in terms of frequencies - people make far fewer mistakes, and b) argued, on the back of this, that the heuristics we use are in many situations good (and fast and frugal) rules of thumb (which explains why they are evolutionarily adaptive). Regarding this, I don't think that Tversky and Kahneman ever doubted that the heuristics we use are quite useful in many situations. Their point was rather that there are lots of naturally occurring set-ups which fool our fast and frugal heuristics. Gigerenzer's findings are not completely uninteresting - it seems to me he does nuance the thesis of massive irrationality a bit - but his claims to the effect that these heuristics are rational in a strong sense are wildly overblown in my opinion. The Gigerenzer vs. Tversky/Kahneman debates are well discussed in this article (although I think they're too kind to Gigerenzer).

A strong argument against attempts to save human rationality is the argument from individual differences, championed by Keith Stanovich. He argues that the fact that some intelligent subjects consistently avoid falling prey to the Wason selection task, the conjunction fallacy, and other fallacies indicates that there is something wrong with the claim that the answers psychologists have traditionally regarded as normatively correct are in fact misguided.

Hence I side with Tversky and Kahneman in this debate. Let me just mention one interesting and possibly successful method for disputing some supposed biases. This method is to argue that people have other kinds of evidence than the standard interpretation assumes, and that given this new interpretation of the evidence, the supposed bias in question is in fact not a bias. For instance, it has been suggested that the "false consensus effect" can be re-interpreted in this way:

The False Consensus Effect

Bias description: People tend to imagine that everyone responds the way they do. They tend to see their own behavior as typical. The tendency to exaggerate how common one’s opinions and behavior are is called the false consensus effect. For example, in one study, subjects were asked to walk around on campus for 30 minutes, wearing a sign board that said "Repent!". Those who agreed to wear the sign estimated that on average 63.5% of their fellow students would also agree, while those who disagreed estimated 23.3% on average.

Counterclaim (Dawes & Mulford, 1996): The correctness of reasoning is not estimated on the basis of whether or not one arrives at the correct result. Instead, we look at whether people reach reasonable conclusions given the data they have. Suppose we ask people to estimate whether an urn contains more blue balls or red balls, after allowing them to draw one ball. If one person first draws a red ball, and another person draws a blue ball, then we should expect them to give different estimates. In the absence of other data, you should treat your own preferences as evidence for the preferences of others. Although the actual mean for people willing to carry a sign saying "Repent!" probably lies somewhere in between the estimates given, these estimates are quite close to the one-third and two-thirds estimates that would arise from a Bayesian analysis with a uniform prior distribution of belief. A study by the authors suggested that people do actually give their own opinion roughly the right amount of weight.

(The quote is from an excellent Less Wrong article on this topic due to Kaj Sotala. See also this post by him, this by Andy McKenzie, this by Stuart Armstrong, and this by lukeprog on this topic. I'm sure there are more that I've missed.)
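To make the one-third and two-thirds figures concrete, here is a minimal sketch (added for illustration, not taken from Sotala's article or from Dawes & Mulford) of the Bayesian analysis with a uniform prior: your own choice is treated as a single draw from the student population, and Laplace's rule of succession gives the posterior mean.

```python
# A minimal sketch of the uniform-prior Bayesian analysis appealed to above.
# Your own choice is treated as a single Bernoulli draw from the student
# population, with a uniform Beta(1, 1) prior on the proportion p of students
# who would agree to wear the sign.

from fractions import Fraction

def posterior_mean(agrees: int, refuses: int) -> Fraction:
    # Posterior is Beta(1 + agrees, 1 + refuses); its mean is
    # (1 + agrees) / (2 + agrees + refuses)  (Laplace's rule of succession).
    return Fraction(1 + agrees, 2 + agrees + refuses)

# Your own choice is the only data point:
print(posterior_mean(agrees=1, refuses=0))  # 2/3 -- cf. the 63.5% estimate of those who agreed
print(posterior_mean(agrees=0, refuses=1))  # 1/3 -- cf. the 23.3% estimate of those who refused
```

On this reading, the 63.5% and 23.3% estimates in the study are not wildly miscalibrated; they sit fairly close to the 2/3 and 1/3 that a uniform-prior Bayesian would report after a single observation.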

It strikes me that the notion that people are "massively flawed" is something of an intellectual cornerstone of the Less Wrong community (e.g. note the names "Less Wrong" and "Overcoming Bias"). In the light of this it would be interesting to hear what people have to say about the rationality wars. Do you all agree that people are massively flawed?

Let me make two final notes to keep in mind when discussing these issues. Firstly, even though the heuristics and biases program is sometimes seen as pessimistic, one could turn this around: if its proponents are right, we should be able to improve massively (even though Kahneman himself seems to think that that's hard to do in practice). I take it that CFAR and lots of LessWrongers who attempt to "refine their rationality" assume that this is the case. On the other hand, if Gigerenzer or Cohen are right, and we already are very rational, then it would seem hard to do much better. So in a sense the latter are more pessimistic (and conservative) than the former.

Secondly, note that parts of the rationality wars seem to be merely verbal and revolve around how "rationality" is to be defined (tabooing this word is very often a good idea). The real question is not whether the fast and frugal heuristics are in some sense rational, but whether there are other mental algorithms which are more reliable and effective, and whether it is plausible to assume that we could learn to use them on a large scale instead.

43 comments

Certain models of the Pentium processor had errors in their FPU. Some floating point calculations would give the wrong answers. The reason was that in a lookup table inside the FPU, a few values were wrong.

Now, consider the following imaginary conversation:

Customer: "There's a bug in your latest Pentium." (Presents copious evidence ruling out all other possible causes of the errors.)

Intel: "Those aren't errors, the chip's working exactly as designed. Look, here's the complete schematic of the chip, here's the test results for the actual processor, you can see it's working exactly as manufactured."

Customer: "But the schematic is wrong. Look, these values that it lists for that lookup table are wrong, that's why the chip's giving wrong answers."

Intel: "Those values are exactly the ones the engineers put there. What does it mean to say that they're 'wrong'?"

Customer: "It means they're wrong, that's what it means. The chip was supposed to do floating point divisions according to this other spec here." (Gestures towards relevant standards document.) "It doesn't. Somewhere between there and the lookup table, someone must have made a mistake."

Intel: "The engineers designing it took the spec and made that lookup table. The table is exactly what it they made it to be. It makes no sense to call it 'wrong'."

Customer: "The processor says that 4195835/3145727 = 1.333820449136241002. The right answer is 1.333739068902037589. That's a huge error, compared with the precision it's supposed to give."

Intel: "It says 1.333820449136241002, so that's the answer it was designed to give. What does that other computation have to do with it? That's not the calculation it does. I still can't see the problem."

Customer: "It's supposed to be doing division. It's not doing division!"

Intel: "But there are lots of other examples it gets right. You're presenting it with the wrong problems."

Customer: "It's supposed to be right for all examples. It isn't."

Intel: "It does exactly what it does. If it doesn't do something else that you think it ought to be doing instead, that's your problem. And if you want division, it's still a pretty good approximation."

I think this parallels a lot of the discussion on "biases".
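As an aside, the numbers in the dialogue are the well-known FDIV test case. A couple of lines of Python (added here as an illustration, not part of the original comment) show how large the error is relative to the precision the chip was supposed to deliver:

```python
# The classic Pentium FDIV test case quoted in the dialogue above.
correct = 4195835 / 3145727          # what division is supposed to return
flawed = 1.333739068902037589        # the value the buggy FPU actually returned

print(f"correct value   ~ {correct:.18f}")                          # ~ 1.333820449136241002
print(f"relative error  ~ {abs(correct - flawed) / correct:.2e}")   # ~ 6.1e-05
```

That is roughly four to five correct significant digits from an operation that is supposed to be accurate to machine precision.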

A version of Intel's argument is used by Objectivists to prove that there is no perceptual error.

[-]lmm

But there was a specification - IEEE 754 - that the Pentium was supposed to be implementing, and wasn't. There's no similar objective standard for rationality.

But there was a specification - IEEE 754 - that the Pentium was supposed to be implementing, and wasn't. There's no similar objective standard for rationality.

There is.

[-]V_V

That's a poem, not a specification.

[-]Cyan

It's a poem and a specification.

[-]V_V

Not in any way that is meaningful from an engineering point of view.

[-]Cyan

I do not agree. (Point of view = Ph.D. biomedical engineering.)

[-]Cyan

I have a sad that you didn't challenge me on my previous reply to you; that means that you've written me off as an interlocutor, probably on the suspicion that I'm a hopeless fanboy.

...which, on reflection, would be no more than I deserve for going into pissing-match mode and not being straightforward about my point of view. Oh well.

[-]V_V

I felt that the discussion wasn't going to become productive, hence I disengaged.

Upvoted, but I would like to point out that it is not immediately obvious that the template can be modified to suit instrumental rationality as well as epistemological rationality; at a casual inspection, the litany appears to be about epistemology only.

The corresponding specification for instrumental rationality would be the VNM axioms, wouldn't it?

If working standing as opposed to sitting will increase my health,
I desire to have the habit of working standing.
If working standing as opposed to sitting will decrease my health,
I desire to have the habit of working sitting.
Let me not become attached to habits that do not serve my goals.

Note also that there are some delightful self-fulfilling prophecies that mix epistemic and instrumental rationality, with a hint of Löb's Theorem:

If believing that (taking this sugar pill will cure my headache) will mean (taking this sugar pill will cure my headache),
I desire to believe that (taking this sugar pill will cure my headache).
If believing that (taking this sugar pill will not cure my headache) will mean (taking this sugar pill will not cure my headache),
I desire to believe that (taking this sugar pill will cure my headache).
Let me not become attached to self-fulfilling beliefs that disempower me.

For a much more in-depth look, see this article by LWer BrienneStrohl:
Löb's Theorem Cured My Social Anxiety

Yes, that's roughly the reformulation I settled on. Except that I omitted 'have the habit' because it's magical-ish - desiring to have the habit of X is not that relevant to actually achieving the habit of X; rather, simply desiring to X strongly enough to actually X is what results in the building of a habit of X.

Do you all agree that people are massively flawed?

To say that something is "flawed" is to say that it doesn't measure up to some standard. That is, "flawed" is at least a two-place predicate: X is flawed according to standard Y.

Consider: Humans sometimes make errors in arithmetic. When adding up long columns of numbers, we sometimes forget to carry a digit, and thus arrive at the wrong answer. I do not think that we would want to say that just because most people can't add up a hundred numbers without error, the correct sum is undefined. Rather, humans are not perfect at adding.

A human mind can construct a standard — addition without error — that an unaided human cannot reliably meet.

Few people, I suspect, would take this to be some huge indictment of our worth.

And yet we can imagine entities that are better at adding than an unaided human. And we do imagine them, and we make them, and we are better off for doing so. A clerk equipped with a comptometer is more effective than one who must add via pencil and paper.

Documenting those areas in which human cognition or intuition doesn't reliably get the right answer — and particularly those where we do reliably make an intuitive leap to a wrong answer — is a step toward being more effective.

Good point. It is true that when we say that humans are flawed, we do this relative to a standard. I think one good way to think of this is in terms of our expectations. We can compare how flawed we thought we were prior to reading up on cognitive psychology to how flawed we now think we are (when doing this, it is of course important to try to avoid hindsight bias). I think that most people are surprised by cognitive psychology's findings (I certainly was, and didn't trust Tversky and Kahneman's results from the start, but was eventually convinced). The reason is that our folk or naive theory of the human mind says that we are much more rational than scientific psychology has shown. See:

http://en.wikipedia.org/wiki/Na%C3%AFve_realism_(psychology)

Basically, my view is that the "Panglossians" (see lukeprog's comments) refuse to give up on this pretheoretical image in the face of evidence. They are thus conservatives not only in the sense that they don't think that human cognition can be radically improved, but also in the sense that they think that our "common sense" image of ourselves is largely right. There have of course been many other examples of such conservatism in the history of science - people have refused to believe "strange" doctrines such as relativity theory, Darwinism, and what-not. Non-naturalistic analytic philosophy is to a very large extent conservative in this sense (something which is pointed out by naturalistic critics such as Gellner - who criticized ordinary language philosophy's defence of common sense in his Words and Things (1959) - Bishop and Trout (2004; attack on the conservatism of "standard analytic epistemology") - and Ladyman and Ross (2007; attack on the conservatism of "neo-scholastic metaphysics", which they claim is based on "A-level chemistry" rather than cutting-edge science)).

Of course your pre-theoretical intuitions do have some value and should be used as a guide in science, but too often, people attach too much weight to them and too little to empirical evidence. I take it that the Panglossian position is an example of this.

See also this, and Stanovich's distinction between "Meliorists" and "Panglossians."

[-][anonymous]

Rationality is not equally distributed between adults and children / infants. They lack experience, education and the biologically-based capacity for rationality (ie no sense of object permanence in infants, mixed sense of conservation of matter in children). This disparity is part of how we adults can convince young people to do as we prefer (eat, wipe, not stab, etc.). So in this instance a disparity of rationality is not entirely or even mostly a bad thing. Even if it were, until we have a great deal more genetic engineering of humans and a great deal less law to prevent it we have no choice (and some would say such a choice is worse).

Children are fabulously instructive in cognitive biases. It's one thing to read the list, it's another to see all of them at once, at full strength, untrammeled by collision with the real world ;-)

Rationality is not equally distributed between adults and children / infants. They lack experience, education and the biologically-based capacity for rationality (ie no sense of object permanence in infants, mixed sense of conservation of matter in children). This disparity is part of how we adults can convince young people to do as we prefer (eat, wipe, not stab, etc.). So in this instance a disparity of rationality is not entirely or even mostly a bad thing.

But isn't the desirability of adults being able to convince children to do as they prefer a consequence of their lesser rationality? After all, if children knew as well as adults, there would be no reason not to let them make their own decisions.

By some sort of Wittgensteinian logic, he thought that the majority's way of reasoning is by definition right. (Not a high point in the history of analytic philosophy, in my view.)

If my understanding of Wittgenstein is correct, this is a misrepresentation of Wittgenstein. So far as I've seen, Wittgenstein is consistent with LW, especially on language, and would very probably get the right answer on tabooing words. It would be a shame if LWers wrote off Wittgenstein prematurely. Less confidently, LW's analytic linguistic approach would not be possible without the forerunning work of Wittgenstein, and mocking him is basically exactly the wrong way round.

I'd recommend removing the Wittgenstein-slamming.

I find it quite peculiar that you recommend other users to remove parts of their comments, in particular since you obviously don't know what I'm referring to.

I'm referring to Wittgenstein's idea of rule-following. Say that someone tries to teach you what "+2" means and counts "2, 4, 6, 8". How are you to continue this series after you've come to, say, 100? There is nothing objective which forces you to count "100, 102, 104" - you could just as well count "100, 104, 108". So what is the right application of "+2"? According to at least one interpretation of Wittgenstein, it is whatever interpretation the majority finds most natural. It is in virtue of this that it is right to count "100, 102, 104" rather than "100, 104, 108".
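To illustrate the underdetermination point with a toy example (not anything from Wittgenstein or Kripke): the two rules below agree with every example the teacher actually gave, yet they diverge once you pass 100, so the finite teaching data alone cannot single out one of them as "the" rule being followed.

```python
# A toy illustration of rule underdetermination: both functions reproduce the
# teacher's examples, but disagree beyond 100.

def plus_two(n: int) -> int:
    return n + 2

def bent_plus_two(n: int) -> int:
    # A "bent" rule in the spirit of Kripke's quus: behaves like +2 up to 100.
    return n + 2 if n < 100 else n + 4

taught_examples = [2, 4, 6, 8]                 # the finite data the learner was given
assert all(plus_two(n) == bent_plus_two(n) for n in taught_examples)

print(plus_two(100), bent_plus_two(100))       # 102 vs 104: the examples cannot decide
```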

Now if you apply this logic to the conjunction fallacy, the majority, which answers that Linda is more likely to be a feminist bank-teller than a bank-teller, is right, whereas the minority, which has it the other way around, though right according to the conventional interpretation, is actually wrong.

Wittgenstein is notoriously slippery and vague though, so it's not clear that this is the right interpretation. My ideas on this draw heavily on "Concepts and Community", Ernest Gellner's review of Kripke's book on Wittgenstein (which is more accessible than Wittgenstein himself). It can be found in his Relativism and the Social Sciences.

I don't actually know what inspired L. J. Cohen, but his ideas are so close to Wittgenstein's and the ordinary language school's that it's hard to believe that he wasn't influenced by him. That was not the point, though, which rather was that Cohen's and Wittgenstein's arguments are alike in that they claim that on conceptual questions, the majority is by definition always right.

I'm not saying anyone should write off Wittgenstein - I'm saying this idea was wrong.

...in particular since you obviously don't know what I'm referring to

I'm referring to Wittgenstein's idea of rule-following.

I'm well aware of rule-following paradoxes, having discovered quite a few of them myself. Observing ambiguities or siding with majority definitions does not mean that one believes that the definitions determine reality. One could believe in majority definition and still think tabooing words is a good idea. That's why it's not obvious to me that Wittgenstein would have actually made the mistake of definitional determinism, i.e. defending having different anticipated experiences depending on the sounds we use to refer to things.

I shall explicitly point out that if you literally thought that I had never heard of (Wittgenstein's) rule-following paradoxes, then you massively overestimated my cluelessness and should update away from me being quite that clueless. (Similarly I am surprised by the extent of your background reading on this matter so have updated away from 'This person's bashing Wittgenstein based on a misrepresentation of Wittgenstein that they heard without independent analysis.' and updated slightly towards your original claim being basically correct.)

...Ernest Gellner's review of Kripke's book on Wittgenstein...

Not sure if this is relevant to what you're saying, but IIRC Kripkenstein (if this is Kripkenstein you're alluding to) is controversial as an interpretation of Wittgenstein.

That was not the point, though, which rather was that Cohen's and Wittgenstein's arguments are alike in that they claim that on conceptual questions, the majority is by definition always right.

This seems (not completely sure because I am not certain to what you're referring by 'conceptual questions') like a direct accusation of Wittgenstein being a relativist. From my position it seems like there's a fair chance that you're committing a fallacy of 'this person pointed out that language games are consensus-/usage-based, so they're a relativist' or something. That class of fallacies is a Distinct Thing that I've noticed when people talk about such things, and your comment sounded a lot like that way of thinking.

I am unconvinced that Wittgenstein would really defend e.g. a majority losing money on conjunction tests (i.e. actual consequential decisions rather than semantic disputes) as proof that losing money is right in a useful sense.

I find it quite peculiar that you recommend other users to remove parts of their comments [...] I'm not saying anyone should write off Wittgenstein - I'm saying this idea was wrong.

Sure. But in the context of LW culture which can be quite keen (not necessarily unfairly) to write off mainstream/conventional philosophy, stuff like what you wrote can come across like that whether you intend it or not. I think to a lot of LWers your mention would pretty much sound like writing Wittgenstein off, and given how highly regarded Wittgenstein is by mainstream philosophy, it makes mainstream philosophy seem much more like a joke. Wittgenstein's work seems to be particularly susceptible to such dismissals. I am not convinced that such an update (against Wittgenstein/mainstream philosophy) should be made in this case. Even if you eventually quote something to me proving that the mistake you highlighted really is something Wittgenstein would do, it would still not refute my main point, which is that flippantly writing Wittgenstein off in such a way promotes a misunderstanding, even if what you meant happens to be correct. If someone is frequently misrepresented (as Wittgenstein seems to be to me) when they are criticised, then you should take care to not appear to be reinforcing the faulty criticism.

I'm not sure whether 'I find it quite peculiar' was a euphemism for 'Fuck you; you can't tell me what to write'; did you mean it literally? If so, has it at least stopped seeming peculiar? :) It's not that I'm certain your comment will mindkill LWers; it's just that I don't think much is lost by editing out that word, and something is gained by avoiding mindkilling LWers about Wittgenstein/mainstream philosophy.

(LW prematurely dismissing certain at-first-glance-woolly or mainstream philosophy is a big issue that e.g. RobbBB takes very seriously as something to be addressed about LW's culture and vital if the community wants to be able to mature enough philosophically to function without Eliezer correcting our philosophical mistakes. I'm not sure how much I agree, but I feel like RobbBB would also get a bad feeling from your line about Wittgenstein. Just in case that means more to you than the suggestion of a nobody like myself.)

I think I'm getting you now - you're a radical advocate of ask/tell culture, right? I'm not - in my world you don't tell other people to remove part of their posts. But anyway, let's leave this and go to the content of your post, which is interesting.

Yes, Kripkenstein is controversial, which I referred to when writing that it's not clear it's the right interpretation.

Yes, I do think there are strong relativist strands in Wittgenstein. Again, it is hard to know what Wittgenstein actually meant, since he's so unclear, but a famous Wittgensteinian such as Peter Winch certainly drew relativistic conclusions from Wittgenstein's writings, something I delve into here:

Winch argues that cultures cannot be judged from the outside, by independent standards. Thus, the Zande’s belief in witches, while unjustified in our culture, is justified in their culture, and since there is no culture-transcending standard, we have no right to tell them what to believe. Gellner takes this to be a reductio ad absurdum of Wittgenstein’s position: since the Zande are obviously mistaken, any philosophy that says they are not must be false. And, since Gellner thinks that Winch has interpreted Wittgenstein correctly, this makes not only Winch’s but also Wittgenstein’s philosophy false.

(Actually, it strikes me now that Gellner's strategies wrt the two Wittgenstein interpretations are quite similar. In both cases he congratulates Winch/Kripke for having elucidated Wittgenstein's muddled ideas, by and large accepts their interpretation, and then argues that given this interpretation, Wittgenstein is obviously wrong.)

Regarding Wittgenstein and "mainstream philosophy". While Wittgenstein still is a star in some circles, most analytic philosophers reject his views today, rightly or wrongly. The linguistic approach to philosophy due to Wittgenstein and the Oxfordian ordinary language school died out in the 60's, and was replaced by a different kind of philosophy, which didn't think that philosophical problems were pseudo-problems that arose because we failed to understand how our language works. Instead they went back to the pre-Wittgensteinian view that they were real problems that should be attacked head on, rather than getting dissolved by the analysis of language.

This points to something more general, namely that analytic philosophy is far from monolithic. It includes Wittgensteinians, Quinean naturalists and "neo-scholastics" (in James Ladyman and Don Ross's apt phrase) and no doubt a score of other branches (it depends on how you individuate branches, obviously). I take it that most of LW's criticism of analytic philosophy is actually directed against "neo-scholasticism", which is accused of not being adequately informed by the sciences, of working with outdated methods, of being generally concerned with ephemeral problems, etc. In my view there is much to this criticism, but similar criticisms have been launched by naturalistic or positivistic philosophers within the analytic camp.

The huge differences between the different branches of analytic philosophy make the term "analytic philosophy" a bit misleading, in fact.

I should have explicated more clearly what I meant by "some sort of Wittgensteinian logic" in the OP, though - point taken.

I think there is a good chance that a professional philosopher like Laurence Jonathan Cohen knows what Wittgensteinian logic happens to be.

Do you think that L J Cohen is wrong about what logic happens to be?

Do you think that Stefan Schubert is wrong when he says that L J Cohen made his argument based on Wittgensteinian logic and that L J Cohen in fact did make his argument in another way?

Less confidently, LW's analytic linguistic approach would not be possible without the forerunning work of Wittgenstein, and mocking him is basically exactly the wrong way round.

Marx did have a large influence on the intellectual framework of economics. That doesn't mean that it's a bad idea to mock him. Just because someone did some valid work doesn't mean that all of his ideas are correct.

Secondly, given the LW position on a subject like the Many Worlds Hypothesis, do you really think it's reflective of what Wittgenstein thought?

It might be worth noting that Bayesian models of cognition have played a big role in the "rationality wars" lately. The idea is that if humans are basically rational, their behaviors will resemble the output of a Bayesian model. Since human behavior really does match the behavior of a Bayesian model in a lot of cases, people argue that humans really are rational. (There has been plenty of criticism of this approach, for instance that there are so many different Bayesian models in the world that one is sure to match the data, and thus the whole Bayesian approach to showing that humans are rational is unfalsifiable and prone to overfitting.)
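For readers unfamiliar with the approach, here is a minimal, generic sketch (not taken from any particular paper) of what "the output of a Bayesian model" means in this literature: given some data, compute a posterior over hypotheses, and then ask whether people's intuitive judgments track those posterior probabilities.

```python
# A minimal, generic sketch of a Bayesian model's "output": a posterior over
# hypotheses computed by Bayes' rule, which is then compared with human judgments.

def posterior(priors, likelihoods):
    # priors: {hypothesis: P(h)}, likelihoods: {hypothesis: P(data | h)}.
    # Returns {hypothesis: P(h | data)}.
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

# Toy example: is a coin fair or heads-biased, after seeing 4 heads in a row?
priors = {"fair": 0.9, "biased": 0.1}
likelihoods = {"fair": 0.5 ** 4, "biased": 0.8 ** 4}
print(posterior(priors, likelihoods))  # roughly {'fair': 0.58, 'biased': 0.42}

# The "humans are rational" argument is that intuitive judgments track numbers
# like these across many tasks; the critics' point is that with enough freedom
# in choosing priors and likelihoods, some Bayesian model always fits.
```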

If you are interested in Bayesian models of cognition I recommend the work of Josh Tenenbaum and Tom Griffiths, among others.

That's very helpful! I've heard a lot of scattered remarks about this perspective but never read up on it systematically. I will look into Tenenbaum and Griffiths. Any particular suggestions (papers, books)?

The unfalsifiability remark is interesting, btw.

Hmm. If you want to know how Bayesian models of cognition work, this paper might be a good place to start, but I haven't read it yet: "Bayesian Models of Cognition", by Griffiths, Kemp, and Tenenbaum.

I'm taking a philosophy class right now on Bayesian models of cognition, and we've read a few papers critiquing Bayesian approaches: "Bayesian Fundamentalism or Enlightenment?" by Jones and Love, and "Bayesian Just-So Stories in Psychology and Neuroscience" by Bowers and Davis. IIRC, it's the latter that discusses the unfalsifiability of the Bayesian approach.

[-]V_V

Being rational isn't a binary predicate, unless by "rational" you mean some abstract model of perfect rationality which isn't physically realizable, making the point moot.

Clearly, any average human is highly rational with respect to any non-human animal. Nevertheless, he or she makes mistakes, which can be noticed by other humans, or by the same human after some time.

[-][anonymous]

If it turned out that the lesson of all this cognitive science was that acting intuitively usually leads to the best result, that would be HUGE and anything but a null result. In today's world people act in ways that hurt them because they think that's what is supposed to happen (where else can this 'ought' come from than the inside of your body?) Like some people in NK. Intuitively it feels correct because nothing in this movement so far has gone deeply against my feelings and I don't believe it's going to happen in the future unless the message is heavily distorted. I dunno, some of this feels contradictory and what's the best way to act intuitively anyway...

love

father of the unmatchable

[This comment is no longer endorsed by its author]
[-][anonymous]

Sorry for the rambling, but this was actually inspired by the story of two hospitals in Slate Star Codex:

All of this reminds me of a video I saw this afternoon on the second day of The Hospital Orientation. Please excuse me if I change it around just a little to turn it from a quality improvement case study to a morality tale.

There were two hospitals, Hospital A and Hospital B. Both, like all hospitals, were fighting a constant battle against medical errors – surgeons removing the wrong leg, doctors giving the wrong dose of medication, sleepy interns reading x-rays backwards, that kind of thing. These are deadly – they kill up to a hundred thousand people a year – and terrifyingly common.

Hospital A took a very right-wing approach to the issue. They got all their doctors together and told them that any doctor who made a minor medical error would get written up and any doctor who made a major medical error would be fired. Rah personal responsibility!

Unfortunately, when they evaluated the results of their policy they found they had exactly as many medical errors as before, except now people were trying to cover them up and they weren’t being discovered until way too late.

Hospital B took a very progressive approach. They too got all their doctors together, but this time the hospital administrators announced: “You are not to blame for any medical errors. If medical errors occur, it means we, the administrators, have failed you by not creating a sufficiently good system. Please tell us if you commit any medical errors, and you won’t be punished, but we will scrutinize what we’re doing to see if we can make improvements.”

Then they made sweeping changes to what you might call the “society” of the hospital. They decreased doctor workload so physicians weren’t as harried. They shortened shifts to make sure everyone got at least eight hours of sleep a night. They switched from paper charts (where doctors write orders in notoriously hard-to-read handwriting) to electronic charts (where everything is typed up). They required everyone to draw up and use checklists. They even put propaganda posters over every sink reading “DID YOU WASH YOUR HANDS LONG ENOUGH??!” with a picture of a big eye on them. You can’t get more Orwellian than that.

And yet, mirabile dictu, this was the hospital that saw their medical error rates plummet.

The administrators of this second hospital didn’t ignore human nature. Instead, they exploited their knowledge of human nature to the fullest. They know it’s in human nature to do a bad job when you’re working on no sleep. They know it’s human nature to try to cut corners, but that people will run through checklists honestly and effectively. They even know that studies show that pictures of eyes make people behave more prosocially because they feel like they’re being watched.

You don’t have to tell me all the reasons this doesn’t directly apply to an entire country. I can think of most of them. But my point is that if I’m progressive – a label I am not entirely comfortable with but which people keep trying to pin on me – this is my progressivism. The idea of using knowledge of human nature to create a structure with a few clever little lever taps that encourage people to perform in effective and prosocial ways. It’s a lot less ambitious than “LET’S TOTALLY REMAKE EVERY ASPECT OF SOCIETY AS A UTOPIA”, but it’s a lot more practical.

Sunshine regiment is always going to beat the other kind of armies and the best way to make someone mad is to be really really happy.

[This comment is no longer endorsed by its author]

In the absence of other data, you should treat your own preferences as evidence for the preferences of others.

But in this case, unless you were raised by wolves, you do have more data. Their objection seems like weak tea here, though it has validity generally.

I often find myself disagreeing with the studies which conclude people have failures of rationality. Often they fail to take cost functions into account, or knowledge, or priors, or contexts.

For instance, one supplemental explanation for the False Consensus Effect (because just because it is one effect doesn't mean it has only one cause) that I have heard is that in most cases it is a "free" way of obtaining comfort.

If presented with an opportunity to believe that other people are like you, with no penalty for being wrong, one could expect people will err on the side of predicting behavior consistent with one's own behavior.

I obviously haven't done this experiment, but I suspect that if the subjects asked to wear the sign were offered a cash incentive based on their accuracy of prediction for others, both groups would make a more accurate prediction.

[See also - political predictions are more accurate when the masses are asked to make monetary bets on the winner of the election, rather than simply indicate who they would vote for]

Voting isn't a form of predicting the winner; it's not about being on the side of the winner.

I didn't mean to imply I thought it was, though I see how that wasn't clear.

I didn't intend that last bracketed part to be an example, but rather a related phenomenon - it is interesting to me how asking a random sample of people who they voted for is a worse predictor than asking a random sample of people who they would predict got the most votes, and that this accuracy further improves when people are asked to stake money on their predictions.

I simply was pointing out that certain biases might be significantly more visible when there is no real incentive to be right.

people who they voted for < who they predicted would win < bet on who would win, where '<' indicates predictive accuracy.

Because, the first is signaling about yourself and perhaps trying to sway others, the second is probably just swaying others, and the third is trying to make money.

It's a testament to a demented culture that people are lying about how they vote.

More common than lying about how they vote is falsely believing that they voted other than they did.

people who they voted for < who they predicted would win < bet on who would win, where '<' indicates predictive accuracy.

This is exactly what I was saying.

Yes, I meant it as a paraphrase.

If presented with an opportunity to believe that other people are like you, with no penalty for being wrong, one could expect people will err on the side of predicting behavior consistent with one's own behavior.

I obviously haven't done this experiment, but I suspect that if the subjects asked to wear the sign were offered a cash incentive based on their accuracy of prediction for others, both groups would make a more accurate prediction.

Possibly. But if you're prepared to bet that the bias would vanish in that context, that's a bet I'd take.

I'm not prepared to make that bet.

I don't suspect the bias would vanish, but rather be diminished.