
NatashaRostova comments on Open thread, Jan. 09 - Jan. 15, 2017 - Less Wrong Discussion

3 Post author: MrMind 09 January 2017 08:33AM




Comment author: NatashaRostova 10 January 2017 03:56:48AM 3 points [-]

I think there are some serious issues with the methodology and instruments used to measure heuristics & biases, issues which researchers didn't fully understand even ten years ago.

Some cognitive biases are robust and well established, like the endowment effect. Then there are the weirder ones, like ego depletion. I think a fundamental challenge with biases is that clever researchers first notice them by observing other humans, as well as the way they themselves think, and then have to try to measure them formally. The endowment effect, or priming, maps pretty well to a lab. On the other hand, ego depletion is hard to measure in a lab (in any way that generalizes).

I think a lot of people experience, or think they experience, something like ego depletion. Maybe it's insufficiently described, or too broad a classification, or too hard to pin down. So the original researcher noticed it in their own experience, and designed a contrived experiment to 'prove' it. Everyone agreed with it, not because the statistics were compelling or it was a great research design, but because they all experience, or think they experience, ego depletion.

Then someone tries to replicate it, and it doesn't replicate, because it's really hard to measure robustly. I think ego depletion doesn't work well in a lab, or without some sort of control or intervention, but those are hard things to set up for such a broad and expansive claim. And I guess you could build a survey, but that sucks too.
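The statistical dynamic behind "published once, fails to replicate" can be sketched with a toy simulation (my own illustration, not from the thread): if a true effect is small and samples are small, studies are underpowered, so the few "original" studies that clear the significance bar do so partly by luck and report inflated effect sizes, while an unselected replication at the same sample size usually comes up empty. The effect size and sample size below are arbitrary assumptions chosen for illustration.

```python
import random
import statistics

random.seed(0)

def study(true_effect, n):
    """One two-group study: returns the estimated effect and a crude t-like statistic."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.stdev(control) ** 2 / n + statistics.stdev(treated) ** 2 / n) ** 0.5
    return diff, diff / se

TRUE_EFFECT = 0.2   # a small but real effect (hypothetical value)
N = 20              # small per-group sample, typical of early lab studies

# "Original" studies: only those reaching t > 2 get noticed and published.
published = [d for d, t in (study(TRUE_EFFECT, N) for _ in range(10_000)) if t > 2]

# Replications: same design, same n, but no selection for significance.
replications = [t > 2 for _, t in (study(TRUE_EFFECT, N) for _ in range(10_000))]

print(f"chance a replication 'succeeds' at n={N}: {sum(replications) / len(replications):.2f}")
print(f"mean published effect: {statistics.mean(published):.2f} (true effect: {TRUE_EFFECT})")
```

Run this and the replication rate comes out low while the average published effect is several times the true one: selection on significance inflates the original result, and the honest replication looks like a failure even though the effect is real.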

With the fundamental attribution error, I think that meta-analysis is great, in that it shows that these studies suck statistically. They only work if you come to them with the strong prior that "Hey, this seems like something I do to other people, and in the fake examples of attribution error I can think of lots of scenarios where I have done that." Of course, our memory sucks, so that is a questionable prior, but how questionable is it? In the end I don't know if it's real, or only real for some people, or too generalized to be meaningful, or true in some situations but not others, or how other people's brains work. Probably the original thesis was too nice and tidy: here is a bias, here is the effect size. Maybe the reality is: here is a name for a ton of strange correlated tiny biases, which together we classify as 'fundamental attribution', but which is incredibly challenging to measure statistically over a sample population in a contrived setting, as the best information to support it seems inextricably tangled up in the recesses of our brains.

(also most heuristics and biases probably do suck, and lack of replication shows the authors were charlatans)

Comment author: niceguyanon 10 January 2017 03:42:27PM 1 point [-]

The endowment effect, or priming, maps pretty well to a lab.

Are you saying that cognitive biases like the endowment effect and priming map better to lab settings, and are therefore less susceptible to contrived experiments designed to prove them, unlike ego depletion?

I don't know whether these map well to a lab or not, but priming research is one of the major areas undergoing a replication crisis; I'm not sure about the endowment effect.