
Metus comments on Open thread, 7-14 July 2014 - Less Wrong Discussion

2 Post author: David_Gerard 07 July 2014 07:14AM



Comment author: Metus 08 July 2014 11:15:06AM 3 points

I know politics is the mindkiller and arguments are soldiers, yet the question still looms large: what makes some people more susceptible to arguing about politics and ideology? With some people I can hold differing points of view, say "well, seems like we disagree," and carry on the conversation. Conversations with other people invariably disintegrate into political debate with neither side yielding.

Why?

Comment author: Viliam_Bur 08 July 2014 02:03:00PM 4 points

Different people may have different reasons. I guess it's usually a form of bonding: if you believe the other person is likely to share your political opinions, then confirming it explicitly establishes common values and common enemies, which makes you emotionally closer.

And people who often start political debates with those who disagree... could just be uncalibrated. I mean, there is some kind of surprise or outrage when they find out that the other person doesn't agree with them. But maybe I'm just protecting my hypothesis against falsification here. Perhaps we could find such a person and ask them to estimate how likely it is that a random person within their social group shares their opinions.

Comment author: Metus 08 July 2014 02:25:31PM 0 points

The attempt at making the hypothesis falsifiable already warrants an upvote by itself.

So bonding over politics might be a game-theoretic strategy for finding allies, at the cost of obviously alienating some people. Very interesting hypothesis. How might this be made falsifiable? I'd reject the hypothesis if politicking decreased or stayed constant as the need for allies increased, assuming satisfactory measures for both politicking and the need for allies.

Comment author: Viliam_Bur 08 July 2014 04:45:05PM 0 points

Well, the adaptation may have been well-balanced in the ancestral environment, but imbalanced for today. (Which could explain why people are uncalibrated.) So... let's separate the "what" from the "why". Let's assume that people are running an algorithm that doesn't even have to make sense. We just have to throw in a lot of different inputs, examine the outputs, and form a hypothesis about the algorithm. The whole point would be the prediction that if we keep running experiments, the outputs will keep being generated by the same algorithm.

That's the "what" part. The "why" part would be a story about how such an algorithm would have produced good results in the ancestral environment.

Unfortunately, I can't quite imagine running that experiment. Would we... take random people from the street, ask them how many friends and enemies they have, then put them in a room together and see how much time passes until someone starts debating politics? Or construct an artificial environment with artificial "political sides", like a reality show?

Comment author: BaconServ 09 July 2014 06:43:11PM 0 points

Do you find yourself refusing to yield in the latter case but not the former? Or is this a purely external observation of mutually unrelenting parties?

If there is a bug in your own behavior (inconsistencies and double standards), then some introspection should yield candidate explanations.