
gwern comments on Open thread, Jan. 09 - Jan. 15, 2017 - Less Wrong Discussion

Post author: MrMind 09 January 2017 08:33AM


Comment author: gwern 09 January 2017 08:50:35PM 21 points

So apparently the fundamental attribution bias may not really exist: "The actor-observer asymmetry in attribution: a (surprising) meta-analysis", Malle 2006. Nor has Thinking, Fast and Slow held up well under replication or re-evaluation (maybe half of it survives): https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/

I am really discouraged about how the heuristics & biases literature has held up since ~2008. I wasn't naive enough back then to think that all the results were true; I knew about things like publication bias and a bit about power and p-hacking, but what has happened since has far exceeded my worst expectations. (I think Carl Shulman or someone warned me that the H&B literature wouldn't hold up, so props to whoever that was.) At this point, it seems like if it was written about in Cialdini's Influence, you can safely assume it's not real.

Comment author: niceguyanon 10 January 2017 04:44:30PM 7 points

At this point, it seems like if it was written about in Cialdini's Influence, you can safely assume it's not real.

How well have the ideas presented in Cialdini's book held up? The scarcity heuristic, the physical attractiveness stereotype, and reciprocity I thought were pretty solid and haven't come under scrutiny, at least not yet.

Comment author: lifelonglearner 12 January 2017 06:32:02AM 3 points

Is there a current list of biases that have held up?

I've been looking quite a bit specifically into the planning fallacy / miscalibration / overconfidence, which appears to be well-substantiated across a variety of studies (although I haven't seen any meta-analyses).

Comment author: John_Maxwell_IV 11 January 2017 03:20:07AM 3 points

At this point, it seems like if it was written about in Cialdini's Influence, you can safely assume it's not real.

Are you sure "does not replicate" is the same as "not real"? If we can't trust the studies that found these effects, why are you so confident in the replications?

Comment author: gwern 11 January 2017 04:47:33PM 7 points

Time-reversal heuristic: if the failed replication had come first, why would you privilege the original over that? If the replications cannot be trusted, despite the benefit of clear hypotheses to test and almost always higher power & incorporation of heterogeneity, a fortiori, the original cannot be trusted either...

Comment author: John_Maxwell_IV 15 January 2017 12:11:06AM 2 points

It would be surprising if the necessary level of power & incorporation of heterogeneity always happened to fall right in between that of the original study and the replication. I would expect that in many cases, the necessary level is above that of both studies, which means neither can be considered definitive.
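The role of power in this exchange can be illustrated with a quick Monte Carlo sketch. The sample sizes and effect size below are illustrative assumptions chosen for the sketch, not figures from any of the studies under discussion; the point is only that an underpowered original study both misses true effects most of the time and, when it does reach significance, overstates the effect (the "winner's curse"), while a larger replication has neither problem:

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.2   # assumed "small" true effect, in standard-deviation units
ALPHA_Z = 1.96      # two-sided 5% critical value (normal approximation)
SIMS = 20000        # number of simulated studies per condition

def run_studies(n):
    """Simulate SIMS one-sample studies of size n. Return the empirical
    power and the mean observed effect among the significant studies."""
    hits, sig_effects = 0, []
    for _ in range(SIMS):
        # The observed mean effect is distributed Normal(TRUE_EFFECT, 1/sqrt(n)).
        obs = random.gauss(TRUE_EFFECT, 1 / math.sqrt(n))
        # z-statistic for testing "effect = 0"
        if abs(obs) * math.sqrt(n) > ALPHA_Z:
            hits += 1
            sig_effects.append(abs(obs))
    return hits / SIMS, sum(sig_effects) / len(sig_effects)

power_small, inflated = run_studies(20)    # a small original study
power_large, _ = run_studies(200)          # a larger replication

print(f"n=20:  power ~ {power_small:.2f}, "
      f"mean significant effect ~ {inflated:.2f} (true: {TRUE_EFFECT})")
print(f"n=200: power ~ {power_large:.2f}")
```

Under these assumptions the n=20 study detects the effect only about 15% of the time, and the effects it does report average more than twice the true size, while the n=200 replication has roughly 80% power and little inflation. This is one reason a null result from a high-powered replication carries more evidential weight than the original positive finding, though, as noted above, if the true effect is small enough even the replication may be underpowered.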

Comment author: NatashaRostova 10 January 2017 03:56:48AM 3 points

I think there are some serious issues with the methodology and instruments used to measure heuristics & biases, issues which researchers didn't fully understand even ten years ago.

Some cognitive biases are robust and well established, like the endowment effect. Then there are the weirder ones, like ego depletion. I think a fundamental challenge with biases is that clever researchers first notice them by observing other humans, as well as observing the way that they themselves think, and then need to try to measure them formally. The endowment effect, or priming, maps pretty well to a lab. On the other hand, ego depletion is hard to measure in a lab (in any sufficiently extendable way).

I think a lot of people experience, or think they experience, something like ego depletion. Maybe it's insufficiently described, or a broad classification, or too hard to pin down. So the original researcher noticed it in their experience, and formed a contrived experiment to 'prove' it. Everyone agreed with it, not because the statistics were compelling or it was a great research design, but because they all experience, or think they experience, ego depletion.

Then someone tries to replicate it, and it doesn't replicate, because it's really hard to measure robustly. I think ego depletion doesn't work well in a lab, or without some sort of control or intervention, but those are hard things to set up for such a broad and expansive argument. And I guess you could build a survey, but that sucks too.

On the fundamental attribution error, I think that meta-analysis is great, in that it shows that these studies suck statistically. They only work if you come to them with the strong prior evidence that "Hey, this seems like something I do to other people, and in the fake examples of attribution error I can think of lots of scenarios where I have done that." Of course, our memory sucks, so that is a questionable prior, but how questionable is it? In the end I don't know if it's real, or only real for some people, or too generalized to be meaningful, or true in some situations but not others, or how other people's brains work. Probably the original thesis was too nice and tidy: here is a bias, here is the effect size. Maybe the reality is: here is a name for a ton of strange correlated tiny biases, which together we classify as 'fundamental attribution', but which is incredibly challenging to measure statistically over a sample population in a contrived setting, as the best information to support it seems inextricably tangled up in the recesses of our brains.

(also most heuristics and biases probably do suck, and lack of replication shows the authors were charlatans)

Comment author: niceguyanon 10 January 2017 03:42:27PM 1 point

The endowment effect, or priming, maps pretty well to a lab.

Are you saying that cognitive biases like the endowment effect and priming map better to lab settings, and are therefore less susceptible than ego depletion to contrived experiments designed to prove them?

I don't know whether these map well to a lab, but priming research is one of the major areas undergoing a replication crisis; I'm not sure about the endowment effect.

Comment author: ChristianKl 12 January 2017 04:47:46PM 1 point

I added a question on Skeptics.SE about Cialdini's reciprocity principle.

Comment author: Douglas_Knight 19 January 2017 08:32:52PM 0 points

how the heuristics & biases literature has held up

How do you define it? Anything that Kahneman mentioned in his popular book? That seems too broad for me. The work of Kahneman and Tversky has held up well, as, I think, has the work of their students, the people invited to contribute to the book Heuristics and Biases.