SusanBrennan comments on Thoughts on moral intuitions - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Right - and the interesting thing is, I had no idea that I was doing it, and in fact was trying to do the opposite. I did my best to take extreme viewpoints like "eating meat is like committing genocide" and "everyone should be converted or they'll go to hell" and attempted to portray them as psychologically no different from any other belief. But although I think I did okay with that, an uncharitable and exaggerated strawman still managed to slip in earlier on.
For the most part, I think it's just about the general ingroup-outgroup tendency in humans, and the desire to look down on any outgroups. But as for that bias slipping into my writing, even when I was explicitly trying to avoid it - that seems to have more to do with the way that most of our thought and behavior is built on subconscious systems, with conscious thought only playing a small role. Or to use Jonathan Haidt's analogy, the conscious mind is the rider of an elephant:
That elephant is very eager to pick up on all sorts of connotations and biases from its social environment, and if we spend a lot of time in an environment where a specific group (conservatives, say) frequently gets bashed, then we'll start to imitate that behavior ourselves - automatically and almost as a reflex, and sometimes even when we think that we're doing the exact opposite.
It is a pity that this kind of bias hasn't really been discussed much on LW. Probably because the original sequences drew most heavily upon cognitive psychology and math, whereas this kind of bias has mostly been explored in social psychology and the humanities.
I remember coming across this paper during my PhD. It provides a game-theoretic analysis of in-group/out-group bias that is still fairly easy to follow. The paper is mainly about the implications for conflict resolution, as the authors are lecturers in business and law, so it should be of interest to those seeking to improve their rationality (particularly where keeping one's cool in arguments is involved) - which is why we are here, after all.
I've been thinking about doing my first mainspace post for LessWrong soon. Perhaps I could use it to address this. Unfortunately I've forgotten a very famous social psychology experiment wherein one group (group A) was allowed to dictate their preferred wage difference between their group and another group (group B). They chose the option which gave them the least in an absolute sense, because that option gave them more than group B by comparison. They were divided according to profession. It's a very famous experiment, so I'm sure someone here will know it.
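The preference described above - taking less in absolute terms in order to beat the out-group in relative terms - can be sketched with a toy allocation choice. The payoff numbers below are invented for illustration and are not from any actual study:

```python
# Hypothetical payoff options in the style of an allocation-matrix task.
# Each option is (points_for_ingroup, points_for_outgroup); values are made up.
options = [(19, 25), (13, 13), (7, 1)]

# Maximizing the in-group's absolute payoff would pick (19, 25)...
max_absolute = max(options, key=lambda o: o[0])

# ...but maximizing the difference in the in-group's favour picks (7, 1),
# even though the in-group ends up with fewer points overall.
max_relative = max(options, key=lambda o: o[0] - o[1])

print(max_absolute)  # (19, 25)
print(max_relative)  # (7, 1)
```

The point of the toy example is just that the two objectives diverge: subjects in the experiments described were observed to behave more like the second criterion than the first.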
In Irrationality, Sutherland cites Brown (1978, "Divided we fall: An analysis of relations between sections of a factory workforce") and states:
In a highly-cited review, Tajfel (1982) states:
A brief look at recent studies seems to suggest a more nuanced relation, but I'm not familiar with the literature. See, e.g., Card et al. (2010).
Bang on! Brown ("Divided we fall") is exactly what I was looking for - thank you. I regret having only one upvote to give you.