Decius comments on Don't Get Offended - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Does any of that help locate truth in the search space, other than maneuvering into a position of social power?
No, but social power + respect can be useful for achieving your goals (especially if one of your goals is social power and respect, which seems to be true for a lot of people).
Yes, but on lesswrong, at least, we've been exposed to enough social psychology to understand why that's a dangerous intrinsic goal to have. It's certainly seductive, but aren't there better things to do with increased agency than to seek to dominate other potential agents?
Last I checked (which was admittedly a while ago), there are a decent number of Less Wrong users who act obnoxiously high status in real life (including some who are quite prominent). I'd love to have the egalitarian norms you describe, but I think first we'd have to convince them to stop.
This may not be trivial. I've noticed that my high-status behaviors often seem pretty instinctual, and I've also noticed that I have a fair amount of mental resistance to giving up status even if I'd like to in theory (e.g., apologizing).
The way you worded this makes it sound as if there are a few people ruining it for everyone. If this is actually the case, then the solution is, when these people begin acting obnoxiously high status, say "You're being obnoxious. Stop." Bystander effect, etc. If you try this and it doesn't work, let me know so I can update.
Without identifying the people involved, can you describe in more concrete terms the behaviours you are talking about?
Name three?
You are asking John to do something that is clearly unwise for him to do, in a form typically used with the connotation that if the person does not comply, it is because they cannot. This is disingenuous.
Good point, though he might reply to him in a private message.
If John wants a problem to stop, it would be nice to first identify more clearly the source of the problem. Otherwise he's just doing the LW analogue of vaguebooking.
It's not obvious to me that "the source of the problem" and "the people most saliently exhibiting the symptoms" are the same thing. It's also not obvious to me that "the source of the problem" necessarily refers to any particular set of individuals.
The utility function is not up for grabs.
In the abstract, sure.
But we exist within a particular social context here - specifically, people supposedly come to this website, and participate in this forum, to attempt to be less wrong.
Instead, it appears that many people are engaging in (as someone else put it) obnoxious status displays, playing "look how edgy and selfish and status-motivated I am", rather than actually attempting to aid each other in being less wrong.
And that's fine if that's an indicated maximum of your utility function, but I would think that other people would act to collectively punish that behavior rather than reward it, lest we turn into the kind of obnoxious circle-jerk/dickwaving contest that most of the internet tends to devolve into.
That is why status is a dangerous goal to pursue - because it tends to produce an affective death spiral until all other goals are subordinated to gaining status.
Agreed -- Less Wrong is a particularly bad place to pursue the goal of social power.
I also agree, especially if one is trying to look high-status to the average person in the general population. Science and rationality are still looked at as nerdy, unfortunately.
Oddly, I tend to feel like having high status among nerdy types is the only time it actually "counts." I get a rush when something I say here or within other nerd and geek communities is well received, or if I'm treated as an authority on X, etc...whereas, say, people calling me "sir" or otherwise treating me as higher-status at work makes me extremely uncomfortable. So do compliments from normals in general.
[Edit: "Status granted by a tribe I don't identify with feels like a status hit instead" might be a good way to put it.]
Well, why would I care what status people who don't regularly non-trivially interact with me assign to me?
Same here, but I think the main reason for that is that it makes me feel ‘old’. (Teenagers and people in their early twenties aren't usually treated that way (no matter how cool they are in the eyes of their peers), and I don't exactly revel in being reminded that I'm no longer one.) ETA: I do like the fact that I'm now economically independent, though.
(Edited to add scare quotes around “old”, lest thirtysomethings resent me, as they usually do when I say I feel old.)
It is totally okay to want social power and respect. You want social power and respect. If you believe that you don't want social power and respect, then you will be motivated to lie to yourself about the actual causes of your actions.
"Better" according to whom? The only one who can set a different standard for yourself is yourself, yet if you already have that "dangerous intrinsic goal", then, well, you already have that goal (yay tautology). You can weigh it against other goals and duly modify it, but presumably if other goals outweighed your need to dominate, that would already have happened. Since it has not (for those who have that goal), that is reason to surmise that from the point of view of those agents there isn't in fact anything better to do, even if they'd like to think there is.
Not if it is, in fact, your intrinsic goal!
Of course there are occasions where having goals makes it less likely to actualize them, and so incentives exist roughly isomorphic to the ones which collapse CDT to TDT. The advice in How To Win Friends and Influence People is of this type - it advises you that in order to achieve social dominance and manipulate others you should become genuinely interested in them. But this is orthogonal to mindkilling.
Then maybe this is a deontological question rather than an ontological one. I would very much appreciate any help understanding why people seek to dominate other potential agents as an intrinsic goal.
If I was particularly interested in the question why people have the terminal values they have, I'd look into evolutionary psychology (start from Thou Art Godshatter) -- but if one doesn't clearly keep in mind the evolutionary-cognitive boundary, or the is-ought distinction (committing the naturalistic fallacy), then one will risk being mind-killed by evo-psy (in a way similar to this -- witness the dicks on the Internet who use evo-psy as a weapon against feminism), and if one does keep these distinctions in mind, then that question may become much less interesting.
What do you mean "use as a weapon against" and why is it obviously a bad thing? Would you say it's a fair complaint against EY that he uses Bayesianism as a "weapon against" religion?
I believe what army means is that some people mistakenly use evo-psy to make claims along the lines of "we have evolved to have [some characteristic], therefore it is morally right for us to have [aforementioned characteristic]".
Right. Many armchair evolutionary psychologists don't understand the nature of the evolutionary-cognitive boundary.
What I've seen tends to be more like, "we have evolved to have [some characteristic], asserting a deontological duty not to have [aforementioned characteristic] is not a good idea".
I'd add that many people appear to exercise motivated cognition in their use of ev-psych explanations; they want to justify a particular conclusion, so they write the bottom line and craft an argument from evolutionary psychology to work their way down to it. Although it would be hard for me to recall a precise example off the top of my head, I've certainly seen cases where people used evolutionary just-so stories to justify a sexual status quo, where I could easily see ways that the argument could have led to a completely different conclusion.
It's not evolutionary psychology so much, but I've seen quite a volume of evolutionary just-so stories in the field of diet and nutrition: everyone from raw vegans to proponents of diets based on meat and animal fats seems eager to justify their findings by reference to the EEA. Generally, the more vegetarian a diet, the more its proponents will focus on our living hominid relatives; the more carnivorous, the more the focus is on the recent evolutionary environment.
Remember, it all adds up to normality. Thus we should not be surprised when the conclusions of evo-psych agree with the traditional ideas.
When people claim that, their final argument tends to be a lot less convincing and involve a lot more mental gymnastics than the original.
I didn't mean that using a theory as a weapon against (i.e., in order to argue against) a different theory is always obviously a bad thing; in particular, I don't think that using Bayesianism to argue against religion is bad (so long as you don't outright insult religious people or similar). But in this particular case, evo-psy is a descriptive theory, feminism is a normative theory, and you cannot derive “ought” from “is” without some meta-ethics, so if someone's using evo-psy to argue against feminism there's likely something wrong. (The other replies you've got put it better than I could.)
Feminists frequently make "is" assertions, and justify their "ought" assertions on the basis of said "is" assertions.
In any case, you seem to be arguing that feminism will now be joining religion in the trying to survive by claiming to be non-refutable club.
They do, but their “is” assertions are stuff like “women have historically (i.e. in the last several millennia) been, and to a certain extent still are, oppressed by men”, which aren't actually contradicted by evolutionary psychology, which says stuff like “humans are X because, in the last several hundred millennia, X-er apes have had more offspring on average”. (And the “ought” assertions they justify based on “is” assertions are stuff like “we're further south than where we want to be, so we ought to move northwards”; IOW, they're justifying instrumental values, not terminal values.)
That wasn't my intention, but at the moment I can't think of a good way to edit my comment to make it clearer.
meta: I find it interesting that your post got voted down.
I didn't downvote but I was ambivalent. The main point was good but that was offset by the unnecessary inflammatory crap that was tacked on.
What was inflammatory? Also: I find it wryly interesting that a post with a good point and informative links would be judged inflammatory in an article about not getting offended.
I insulted anti-feminist amateur evolutionary psychologists.
;-)
(Actually, I hadn't noticed that, but that's a great reason (excuse?) to not edit my comment.)
When I find other people's motivations mysterious, I find it helps to see if I have anything like that motivation (for dominance, it might be a desire to be in charge of anything at all) and imagine it as much more important in my life.
Unless there's a friendly AI which has been built in secret somewhere, we're still all human, with all the weaknesses and foibles of human nature. Though we might try to mitigate those weaknesses, one of the biggest weaknesses in human nature is the belief that we have already mitigated those weaknesses, leading us to stop trying.
Status interactions are a big part of the human psyche. We signal in many ways - posture, facial expression, selection of clothing, word choice - and we respond to such signals automatically. If a man steps up to one and asks for directions to the local primary school, one would look at his signals before replying. Is he carrying a container of petrol and a box of matches, does he have a crazed look in his eye? Perhaps better to direct him to the local police station. Is he in a nice suit, smartly dressed, with well-shined shoes, accompanied by a small child in a brand-new school uniform? He probably has legitimate business at the school. And in between the two, there's a whole range of potential sets of signals; and where there are signals, there are those who subvert the signals. Social hackers, I guess one could call them. And where such people exist - well, is it a good thing to pay attention to the signals or not? How much importance should one place on these signals, when the signals themselves could be subverted? How should one signal oneself - for any behaviour is a signal of some sort?
Except that what's being discussed here is the exploitation of those weaknesses, not their mitigation. And seeking to exploit those weaknesses as an end in and of itself leads to a particular kind of affective death spiral that rationalists claim to want to avoid, so I'm trying to raise a "what's up with that?" signal before a particular set of adverse cultural values lock in.
Ah, I see; so while I'm saying that I expect that some exploitation will happen with high probability in any sufficiently large social group, you are trying to point out the negative side of said exploitation and thus cut it off, or at least reduce it, at an early stage.
We are humans, and even our truth-seeking activities are influenced by social aspects.
Imagine a situation where someone says that you are wrong, without explaining why. You would like to know why they think so. (They may be right or wrong, but if you don't know their arguments, you are less likely to find out.) If they consider you too low status, they may refuse to waste their time explaining something to you. If they consider you high status, they will take the time to explain, because they will see a chance to gain a useful ally or at least neutralize a potential enemy.
Generally, your social power may determine your access to information sources.
That's true, but at the same time it should be mentioned that we do live in the era of the Internet (ridiculously accessible information, no matter how low status and not worth their time anyone thinks you are).
With each passing day, we're moving closer and closer to a world where trying to build accurate models of the world is a different activity than socializing. For example, it seems plausible to say that emotions are The Enemy in epistemic discussion, but one of the main things to be engaged in and optimized for in a social setting.
I knew and tried to mention that social power has instrumental value; are you saying that signalling offense can lead to someone explaining the reasons why you are wrong often enough to be worth introducing the noise to the technical discussion?
Or, more often, who else thinks so and how much power they have...
Maneuvering into a position of social power is an intrinsic, biologically-mediated goal for many humans (cf. the "Machiavellian Intelligence Hypothesis"). Thus, treating 'status' as an intrinsic rather than instrumental goal is a very common trap for humans to fall into, especially particularly clever humans (since 'cleverness' probably evolved primarily to serve precisely these purposes, and so the stimulus will activate those modules preferentially).
If locating truth in the search space is a preferable goal to gaining social status, then it might be worthwhile to taboo the word 'status' for a while, especially on lesswrong - it seems to be collecting a lot of unfortunate cached subtext.
Also: be very careful asking questions like that, because they tend to signal low status if you ask them wrong. ;)
Sometimes I find that signaling low status is useful. Sometimes I don't intrinsically care about status and signaling low status is more instrumentally valuable. Sometimes I am low status and signal honestly.
And sometimes status is efficiently distributed and what is instrumentally useful to a tribe is also high status.
It's not clear that everyone can learn not to be offended, and being offended imposes costs on the group in terms of the things they're going to consider and share.
But assuming that everyone could learn not to be offended:
Everyone can learn not to be offended by the few offensive people, or a few offensive people can learn to be less offensive. The group where the latter holds has made much more efficient use of its time and can work with a wider range of other groups.
So, while you need people who can discount a certain level of offence in order to share differing ideas, I don't know whether "don't be offended" is the most efficient group norm to put in place for dealing with cases of disrespect and/or wilful offence.
A more useful policy would be to exclude people who give willful offense or are willfully offended and apply effort equally to preventing being accidentally offensive and to disregarding accidental offensive events.