Pfft comments on Open Thread, January 11-17, 2016 - Less Wrong

3 Post author: username2 12 January 2016 10:29AM


Comment author: Lumifer 14 January 2016 05:22:42PM *  5 points [-]

An excellent piece about communication styles, in particular about a common type of interaction on the 'net which is sometimes seen on LW as well. I'll quote some chunks, but the whole thing is good.

Here’s a series of events that happens many times daily on my favorite bastion of miscommunication, the bird website. Person tweets some fact. Other people reply with other facts. Person complains, “Ugh, randos in my mentions.” Harsh words may be exchanged, and everyone exits the encounter thinking the other person was monumentally rude for no reason. ...

For clarity’s sake, I’ll name “ugh, randos” Sue and an archetypal “rando” Charlie.[4] I will also assume both are, initially anyway, operating in good faith–while there are certainly Sues and Charlies who are just unpleasant assholes, I think they are comparatively uncommon, and in any event picking apart their motivations wouldn’t be particularly interesting.

From Sue’s perspective, strangers have come out of the woodwork to demonstrate superiority by making useless, trivial corrections. Some of them may be saying obvious things that Sue, being well-versed in the material she’s referencing, already knows, and thus are insulting her intelligence, possibly due to their latent bias. This is not necessarily an unreasonable assumption, given how social dynamics tend to work in mainstream culture. People correct others to gain status and assert dominance. An artifice passed off as “communication” is often wielded as a blunt object to establish power hierarchies and move up the ladder by signaling superiority. Sue responds in anger as part of this social game so as not to lose status in the eyes of her tribe.

From Charlie’s perspective, Sue has shared a piece of information. Perhaps he already knows it, perhaps he doesn’t. What is important is that Sue has given a gift to the commons, and he would like to respond with a gift of his own. Another aspect is that, as he sees it, Sue has signaled an interest in the topic, and he would like to establish rapport as a fellow person interested in the topic. In other words, he is not trying to play competitive social games, and he may not even be aware such a game is being played. When Sue responds unfavorably, he sees this as her spurning his gift as if it had no value. This is roughly as insulting to Charlie as his supposed attempt to gain status over Sue is to her. At this point, both people think the other one is the asshole. People rightly tend to be mean to those they are sure are assholes, so continued interaction between them will probably only serve to reinforce their beliefs the other is acting in bad faith.

And a special shout-out to mathematicians :-/ Here is a quote about how talking to a mathematician feels to someone... born on the other side of IQ tracks:

Nobody was mean to me, nobody consciously laughed at me. There’s just a way that mathematicians have been socialized (I guess?!) to interact with each other that I find oppressive. If you have never had someone mansplain or whitesplain things to you, it may be hard for you to understand what I’m going to describe.

Usually, friendly conversation involves building a shared perspective. Among other things, mansplaining and whitesplaining involve one person of privilege forcing a marginalized person into a disagreeable perspective against their will, and not allowing them a way out. If you are someone averse to negative labels, it can be silencing. My experience discussing math with mathematicians is that I get dragged into a perspective that includes a hierarchy of knowledge that says some information is trivial, some ideas are “stupid”; that declares what is basic knowledge, and presents open incredulity in the face of dissent. Maybe I would’ve successfully assimilated into this way of thinking if I had learned it at a time where I was at the same level as my peers, but as it was it was just an endless barrage of passive insults I was supposed to be in on.

Comment author: Viliam 15 January 2016 10:00:48AM *  14 points [-]

I agree with gjm that the remark about IQ is wrong. This is about cultures. Let's call them "nerd culture" and "social culture" (those are merely words that came immediately to my mind, I do not insist on using them).

Using the terms of Transactional Analysis, the typical communication modes in "nerd culture" are activity and withdrawal, and the typical communication modes in "social culture" are pastimes and games. This is what people are accustomed to doing and to expecting from other people in their social circle. It doesn't depend on IQ or gender or color of skin; I guess it depends on personality and on what people in our perceived "tribe" really are doing most of the time. -- If people around you exchange information most of the time, it is reasonable to expect that the next person also wants to exchange information with you. If people around you play status games most of the time, it is reasonable to expect that the next person also wants to play a status game with you. -- In a different culture, people are confused and project.

A person coming from "nerd culture" to "social culture" may be oblivious to the status games around them. From an observer's perspective, this person displays a serious lack of social skills.

A person coming from "social culture" to "nerd culture" may interpret everything as a part of some devious status game. From an observer's perspective, this person displays symptoms of paranoia.

The "nerd culture" person in a "social culture" will likely sooner or later get burned, which provides them evidence that their approach is wrong. Of course they may also process the evidence the wrong way, and decide e.g. that non-nerds are stupid or insane, and that it is better to avoid them.

Unfortunately, for a "social culture" person in a "nerd culture" it is too easy to interpret the evidence in a way that reinforces their beliefs. Every failure in communication may be interpreted as "someone made a successful status attack on me". The more they focus on trying to decipher the imaginary status games, the more they get out of sync with their information-oriented colleagues, which only provides more "evidence" that there is some kind of conspiracy against them. And even if you try to explain this to them, your explanation will be processed as "yet another status move". A person sufficiently stuck in the status-game interpretation of everything may be unable to process any feedback as something other than (or at least something more than merely) a status move.

Thus ends my whitesplaining mansplaining cissplaining status attack against all who challenge the existing order.

EDIT:

Reading the replies I realized there are never enough disclaimers when writing about a controversial topic. For the record, I don't believe that nerds never play status games. (Neither do I believe that non-nerds are completely detached from reality.) Most people are not purely "nerd culture" or purely "social culture". But the two cultures are differently calibrated.

For example, correcting someone has a subtext of a status move. But in the "nerd culture" people focus more on what is correct and what is incorrect, while in the "social culture" people focus more on how agreement or disagreement would affect status and alliances.

If one person says "2+2=3" and another person replies "that's wrong", in the "nerd culture" the most likely conclusion is that someone has spotted a mistake and automatically responded. Yes, there is always the possibility that the person wanted to attack the other person, and really enjoyed the opportunity. Maybe, maybe not.

In the "social culture" the most likely conclusion is the status attack, because people in the "social culture" can tolerate a lot of bullshit from their friends or people they don't want to offend, so it makes sense to look for an extra reason why in this specific case someone has decided to not tolerate the mistake.

As a personal anecdote, I have noticed that in real life, some people consider me extremely arrogant and some people consider me extremely humble. The former have repeatedly seen me correcting someone else's mistake; and the latter have repeatedly seen someone else correcting my mistake, and me admitting the mistake. The idea that both attitudes could exist in the same person (and that the person could consider them to be two aspects of the same thing) is mind-blowing to someone coming from the "social culture", because there these two roles are strictly separated; they are the opposite of each other.

When you hear someone speaking about how the reality is socially constructed, in a sense they are not lying. They are describing the "social culture" they live in; where everyone keeps as many maps as necessary to fit peacefully in every social group they want to belong to. For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend. So if you insist that there is a territory that might disagree with their map, of course they perceive it as a hostility.

Yes, even the nerds can be hostile sometimes. But a person from the "social culture" will be offended all the time, even by a behavior that in the "nerd culture" is considered perfectly friendly. -- As an analogy, imagine a person coming from a foreign culture that also speaks English, but in their culture, ending a sentence with a dot is a sign of disrespect towards the recipient. (Everyone in their culture knows this rule, and it is kinda taboo to talk about it openly.) If you don't know this rule, you will keep offending this person in every single letter you send them, regardless of how friendly you will try to be.

Comment author: Pfft 17 February 2016 05:24:10AM *  6 points [-]

For a LessWronger, the territory is the thing that can disagree with our map when we do an experiment. But for someone living in a "social culture", the disagreement with maps typically comes from enemies and assholes! Friends don't make their friends update their maps; they always keep an extra map for each friend.

I figured this was an absurd caricature, but then this thing floated by on tumblr:

So when arguing against objectivity, they said, don’t make the post-modern mistake of saying there is no truth, but rather that there are infinite truths, diverse truths. The answer to the white, patriarchal, heteronormative, massively racist and ableist objectivity is DIVERSITY of subjectivities. And this, my friends, is called feminist epistemology: the idea that rather than searching for a unified truth to fuck all other truths we can understand and come to know the world through diverse views, each of which offers their own valid subjective view, each valid, each truthful. How? by interrupting the discourses of objectivity/normativity with discourses of diversity.

Objective facts: white, patriarchal, heteronormative, massively racist and ableist?

Comment author: Viliam 17 February 2016 11:58:49AM 2 points [-]

Sigh.

Logic itself has a very gendered and white supremacist history.

These people are clearly unable to distinguish between "the territory" and "the person who talks about the territory".

I had to breathe calmly for a few moments. Okay, I'm not touching this shit on the object level again.

On a meta level, I wonder how much of the missing rationality skills these people never had vs how much they had but lost later when they became politically mindkilled.

Comment author: Sarunas 17 February 2016 05:27:02PM *  3 points [-]

I remember reading the SEP article on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where epistemology is tailored to suit political ideas (and they themselves most likely wouldn't disagree), much more than vice versa.

When I (or, I suppose, most LWers) think about how knowledge about the world is obtained, the central example is empirical testing of hypotheses, i.e. a situation where I have more than one map of a territory and I have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.

On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory which basically models the world as being full of different people who are adversarial to each other and try to promote different maps. It seems to me that it has an assumption that you cannot easily compare accuracies of maps, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be "Whose map should I choose?", i.e. choice is not between maps, but between mapmakers. Well, there are situations where I would do something that fits this description very well, e.g. if I was trying to decide whether to buy a product which I was not able to put my hands on and all information I had was two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter's judgement.

It seems to me that the first archetypal example is much more generalizable than the second one, and the strange claims cited in Pfft's comment are what one gets when one stretches the second example to extreme lengths.

There also exists Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry, and since biases that favour desirable interpretations are something, it is valid to add them (since this is not Bayesian inference, this is different from the problem of choosing priors). Since the whole process is deemed adversarial (scientists in this model look like prosecutors or defense attorneys), different people inject different biases and then argue that others should stop injecting theirs.

(Disclaimer: I read the SEP article some time ago and wrote about these ideas from memory; it wouldn't be a big surprise if I misrepresented them in some way. In addition, there are other obvious sources of potential misrepresentation.)

Comment author: Viliam 18 February 2016 10:25:44AM *  0 points [-]

Seems like the essential difference is whether you believe that as the maps improve, they will converge.

A "LW-charitable" reading of the feminist version would be that although the maps should converge in theory, they will not converge in practice because humans are imperfect -- the mapmaker is not able to reduce the biases in their map below a certain level. In other words, there is some level of irrationality that humans are unable to overcome today, and the specific direction of this irrationality depends on their "tribe". So different tribes will forever have different maps, regardless of how much they try.

Then again, to avoid "motte and bailey", even if there is the level of irrationality that humans are unable to overcome today even if they try, the question is whether the differences between maps are at this level, or whether people use this as a fully general excuse to put anything they like on their maps.

Yet another question would be who exactly are the "tribes" (the clusters of people that create maps with similar biases). Feminism (at least the version I see online) seems to define the clusters by gender, sexual orientation, race, etc. But maybe the important axes are different; maybe e.g. having high IQ, or studying STEM, or being a conservative, or something completely different and unexpected actually has greater influence on map-making. Which is difficult to talk about, because there is always the fully general excuse that if someone doesn't have the map they should have, well, they have "internalized" something (a map of the group they don't belong to was forced on them, but naturally they should have a different map).

Comment author: OrphanWilde 17 February 2016 02:39:22PM 0 points [-]

On a meta level, I wonder how much of the missing rationality skills these people never had vs how much they had but lost later when they became politically mindkilled.

Can rationality be lost? Or do people just stop performing the rituals?

Comment author: Old_Gold 17 February 2016 06:33:37PM 4 points [-]

Can rationality be lost?

Sure, when formerly rational people declare some topic off limits to rationality because they don't like the conclusions that are coming out. Of course, since all truths are entangled, that means you have to invent other lies to protect the ones you've already told. Ultimately you have to lie about the process of arriving at truth itself, which is how we get to things like feminist anti-epistemology.

Comment author: Lumifer 18 February 2016 07:40:07PM 1 point [-]

Can rationality be lost?

I don't see why not. It is, basically, a set of perspectives, mental habits, and certain heuristics. People lose skills, forget knowledge, just change -- why would rationality be exempt?

Comment author: OrphanWilde 18 February 2016 08:37:09PM 0 points [-]

Habits and heuristics are what I'd call "rituals."

Are perspectives something you can lose? I ask genuinely. It's not something I can relate to.

Comment author: Lumifer 18 February 2016 08:47:10PM *  1 point [-]

Habits and heuristics are what I'd call "rituals."

I don't know about that. A heuristic is definitely not a ritual -- it's not a behaviour pattern but just an imperfect tool for solving problems. And habits... I would probably consider rituals to be more rigid and more distanced from the actual purpose compared to mere habits.

Are perspectives something you can lose?

Sure. You can think of them as habitual points of view. Or as default approaches to issues.

Comment author: Viliam 18 February 2016 08:57:43AM *  0 points [-]

Heh, I immediately went: "What is rationality if not following (a specific kind of) rituals?" But I guess the key is the word "specific" here. Rationality could be defined as following a set of rules that happen to create maps better corresponding to the territory, and knowing why those rules achieve that, i.e. applying the rules reflectively to themselves. The reflective part is what would prevent a person from arbitrarily replacing one of the rules by e.g. "what my group/leader says is always right, even if the remaining rules say otherwise".

I imagine that most people have at least some minimal level of reflection on their rules. For example, if they look at the blue sky, they conclude that the sky is blue; and if someone else said that the sky is green, they would tell them "look there, you idiot". That is, not only do they follow the rule, but they are aware that they have a rule, and can communicate it. But the rule is communicated only when someone obviously breaks it; that is, the reflection is only done in a crisis. Which means they don't develop the full reflective model, and that leaves open the option of inserting new rules, such as "however, that reasoning doesn't apply to God, because God is invisible", which take priority over reflection. I guess these rules have a strong "first mover advantage", so timing is critical.

So yeah, I guess most people are not, uhm, reflectively rational. And unreflective rationality (I guess on LW we wouldn't call it "rationality", but outside of LW that is the standard meaning of the word) is susceptible to inserting new rules under emotional pressure.

Comment author: ChristianKl 18 February 2016 09:19:35PM -1 points [-]

These people are clearly unable to distinguish between "the territory" and "the person who talks about the territory".

What about that sentence makes you think that the person isn't able to make that distinction?

If you look at YCombinator, the semantics are a bit different but the message isn't that different. YCombinator also talks about how diversity is important. The epistemic method they teach founders is not to think abstractly about a topic and engage with it analytically, but to speak to people in order to understand their own unique experiences and views of the world.

David Chapman's article digging into the phenomenon is also quite good.

Comment author: Viliam 19 February 2016 08:59:25AM 0 points [-]

It's interesting how the link you posted talks about the importance of using the right metaphors, while at the same time you object to my conclusion that people saying "logic itself has white supremacist history" can't distinguish between the topic and the people who talk about the topic.

To explain my position, I believe that anyone who says either "logic is sexist and racist" or "I am going to rape this equation" should visit a therapist.

Comment author: ChristianKl 19 February 2016 09:49:04AM *  0 points [-]

I believe that anyone who says either "logic is sexist and racist" or "I am going to rape this equation"

Nobody linked here says either of those things. In particular, the original blog post says about logic:

This is not to say it is not useful; it is. But it does not exist in a vacuum and should not be sanctified.

The argument isn't that logic is inherently sexist and racist and therefore bad, but that it's frequently used in places where there are other viable alternatives, and that using it in those places can be driven by sexism or racism.

Comment author: Old_Gold 20 February 2016 04:58:02AM 5 points [-]

The argument isn't that logic is inherently sexist and racist and therefore bad but that it's frequently used in places where there are other viable alternatives.

Such as?

Comment author: ChristianKl 20 February 2016 09:47:38AM 1 point [-]

Interviewing lots of people to understand their viewpoints, and having conversations with them not to show them where they are wrong but to be non-judgemental. That's basically what YC teaches.

Reasoning by analogy is useful in some cases.

There's a huge class of expert decisions that are made via intuition.

Using a technique like Gendlin's Focusing would be a way to reach solutions that isn't based on logic.