In Think Like Reality, I put forth the astonishing and controversial proposition that when human intuitions disagree with a fact, we need to either disprove the "fact" in question, or try to reshape the intuition. (Well, it wouldn't have been so controversial, but like a fool I picked quantum mechanics to illustrate the point. Never use quantum mechanics as an example of anything.) Probability theory says that a model which is consistently surprised on the data is probably not a very good model.
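(To make that last claim concrete: "surprise" can be measured as the negative log probability a model assigned to what actually happened, so a consistently surprised model is one that keeps assigning low probability to the observed data, and Bayes penalizes it accordingly. A minimal sketch, with invented models and data purely for illustration:)

```python
import math

def surprisal(p):
    """Surprise at an outcome = negative log probability assigned to it, in bits."""
    return -math.log2(p)

# Hypothetical observed data: ten heads in a row.
data = ["H"] * 10

# Two hypothetical models: A thinks the coin is fair, B thinks it is heads-biased.
prob_a = {"H": 0.5, "T": 0.5}
prob_b = {"H": 0.9, "T": 0.1}

total_a = sum(surprisal(prob_a[x]) for x in data)  # 10 bits of total surprise
total_b = sum(surprisal(prob_b[x]) for x in data)  # about 1.52 bits

# Starting from even prior odds, the likelihood ratio favors
# the model that was less surprised by the data.
odds_for_b = 2 ** (total_a - total_b)  # = (0.9 / 0.5) ** 10, roughly 357 to 1
print(total_a, total_b, odds_for_b)
```

The model that racked up more total surprise ends up hundreds of times less probable after the evidence comes in, which is the formal version of "a consistently surprised model is probably not a very good model."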
Matt Shulman pointed out in personal conversation that, in practice, we may want to be wary of people who don't appear surprised by surprising-seeming data. Some people affect to be unsurprised because it is a fakeable signal of competence. Well, a lot of things that good rationalists will do - such as appearing skeptical and appearing to take other people's opinions into account - are also fakeable signals of competence. But, in practice, Matt's point is still well-taken.
People may also appear unsurprised (Matt points out) if their models are so vague that they don't understand the implications one way or the other. (Rob Spear: "It doesn't matter to the general public whether reality has 11, 42, or 97.5 dimensions... The primary good that most modern physics provides to the people is basically light entertainment.") Or they may appear unsurprised if they fail to emotionally connect to the implications - "Oh, sure, an asteroid is going to hit Earth... but personally I don't think humanity really deserves to survive anyway... are you taking Sally to her doctor's appointment tomorrow?"
Or Cialdini on the bystander effect:
"We can learn from the way the other witnesses are reacting whether the event is or is not an emergency. What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too. Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us. Therefore everyone is likely to see everyone else looking unruffled and failing to act."
So appearing unsurprised, or pretending to yourself that you weren't surprised, is both personally and socially detrimental. By saying that a consistently surprised model is a poor model, I didn't intend to make it more difficult for people to admit their surprise! Even rationalists are surprised sometimes - the important thing is to throw away the model, reshape your intuitions, and otherwise update yourself so that it doesn't happen again.
Think Like Reality wasn't arguing that we should never admit surprise, but that, having been surprised, we shouldn't get all indignant at reality for surprising us - because that just keeps us in the mistaken frame of mind that was surprised in the first place; instead, we should try to adjust our intuitions so that reality doesn't seem surprising the next time. That doesn't mean rationalizing the events in hindsight using your current model - hindsight bias is detrimental to this process because it leads you to underestimate how surprised you were, and hence adjust your model less than it needs to be adjusted.