I agree. I don't understand the praise the book has received. I found the reasoning in the book very sloppy: it is filled with huge gaps in the logic, and with experimental results whose more obvious alternate explanations were not even mentioned.
I don't expect the rigor of a research paper in a popular science book, but even popular science books have standards. I'm sure his papers fill in all the gaps in the book. But if there are multiple obvious explanations for an experimental result and you're going to tell your readers how to interpret it, you should at least say why the other obvious (sometimes more obvious) interpretations are less plausible, or why the preferred interpretation is so compelling -- even in a popular science book.
For some examples of what I mean, see this Amazon.com review.
Today Ed Yong has a post on Not Exactly Rocket Science about updating - specifically, the most extreme case of updating, where a person gets to choose between relying completely on their own judgement, or completely on the judgement of others. He describes two experiments by Daniel Gilbert of Harvard in which subjects are given information about experience X and asked to predict how they would feel (on a linear scale) on experiencing X; they then experience X and rate what they actually felt on that same scale.
In both cases, the correlation between post-experience judgements of different subjects is much higher than the correlation between the prediction and the post-experience judgement of each subject. This isn't surprising - the experiments are designed so that the experience provides much more information than the given pre-experience information does.
What might be surprising is that the subjects believe the opposite: that they can predict their response from information better than from the responses of others.
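To make concrete what comparison is being described above, here is a toy sketch with invented numbers (not the study's data, and not Gilbert's analysis): it computes the two correlations side by side, prediction-vs-own-outcome versus other-subjects'-outcomes-vs-own-outcome.

```python
import numpy as np

# Hypothetical toy data, for illustration only (not from the study).
# Each entry is one subject, on a linear rating scale.
predictions = np.array([7, 3, 8, 2, 6, 4])     # pre-experience predictions from given information
post_ratings = np.array([4, 5, 3, 6, 4, 5])    # the same subjects' own post-experience ratings
others_ratings = np.array([5, 4, 4, 5, 3, 6])  # other subjects' post-experience ratings of X

# Correlation between a subject's prediction and their own later rating
pred_corr = np.corrcoef(predictions, post_ratings)[0, 1]

# Correlation between different subjects' post-experience ratings
surrogate_corr = np.corrcoef(others_ratings, post_ratings)[0, 1]

print(f"prediction vs. outcome:      {pred_corr:+.2f}")
print(f"others' outcome vs. outcome: {surrogate_corr:+.2f}")
```

With numbers like these, surrogate_corr comes out much higher than pred_corr, which is the pattern the experiments are said to show: other people's actual experience predicts your response better than your own information-based forecast does.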
Whether these experiments are interesting depends on how the subjects were asked the question. If they were asked, before being given information or being told what that information would be, whether they could predict their response to an experience better by making their own judgement based on information, or from the responses of others, then the result is not interesting. The subjects in that case did not know that they would be given only a trivial amount of information relative to those who had the experience.
The result is only interesting if the subjects were given the information first, and then asked whether they could predict their response better from that information than from someone else's experience. Yong's post doesn't say which of these things happened, and doesn't cite the original article, so I can't look it up. Does anyone know?
I've heard studies like this cited as strong evidence that we should update more, but I've never seen that critical detail reported for any such study. Are there any studies which actually show what this study purports to show?
EDIT: Robin posted the citation. The original paper does not contain the crucial information. Details in my response to Robin.
EDIT: The original paper DOES contain the crucial info for the first experiment. I missed it the first time. It says: