Prismattic comments on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (1529)
I, for one, find obscurantist posts hinting that there are unspoken-because-unpalatable-to-the-mainstream truths to be far more irritating than posts explicitly saying things that I personally find distasteful. The former leaves the dissident view just amorphous enough to be impossible to subject to scrutiny. Given that, even in cases where the mainstream view is wrong, the implied dissident view may also be wrong in some important regard, the obscurantism is highly suboptimal.
I haven't been downvoting for this phenomenon so far, but I'm going to start doing so if it keeps happening.
What is obscurantist exactly? What I said is perfectly clear, if you look at the context of the two preceding posts.
No particular claim about male-female relations was intended (although if you want to know I endorse Roissy's view of male-female relations, if not his value-set); I was objecting to the idea that "mindkilling" should be redefined as "saying things likely to offend mainstream sensibilities". Mindkilling refers to the effect of political content on human reasoning powers in general, and the suggested redefinition struck me as Orwellian.
It is not your post that I think is obscurantist. I was commenting on the undesirability of posts that presuppose option 2 has been selected and proceed to imply that the mainstream view is false without actually making explicit what alternative is being proposed.
I think the alpha-beta classification is excessively reductive. I would say that I am fairly physically intimidating to a majority of other males, but this doesn't translate into automatic adoration by nearby females.
To whoever is upvoting this, it seems like you must be taking one of the following positions:
Could you guys clarify?
I upvoted Prismattic, and I'm taking this position: 4. It may or may not be safe to post certain views on Less Wrong, but whatever they are, I precommit that I will not be part of a blowup over them. If your views are justified, I will update on them, and if they are not, I will calmly state my objections, but I will not punish you for dissent. If other people punish you unfairly for dissent, I will punish them. I would rather you post your dissenting views than hide them, and I will support you for doing so.
If enough of Less Wrong takes this position, eventually position 1. will be correct. I hope to bring about this state of affairs.
I always appreciate when someone else comes along and explains my position better than I did, so thanks.
Why not publish the "unsafe" arguments under a pseudonym (or an alternate pseudonym if your main identity is already a pseudonym)?
To do so consistently and stay safe, you'd need to take the unusual or otherwise identifiable parts of your set of concepts, favorite examples, verbal quirks, patterns of reasoning, and so on, and split everything into two: one part for use under your true identity, and one part for pseudonymous use. Even then, each of your novel ideas could taint each of your other novel ideas. There would also still be the harm to LessWrong's reputation as a whole. And what would it accomplish? It's notoriously hard to get people to change their minds on these topics, even here, and if you do there's no clear causal path from that to better long-term future outcomes. I'd rather just collectively give up.
I do wonder why Luke puts so much effort into writing about romantic relationships, given all the other things on his to-do list. Perhaps he wants to demonstrate that rationality has big, concrete, immediate benefits, as a way to help expand our community?
I think that's unlikely, unless someone who wants to see it happen makes a big push for it (e.g., get Eliezer to declare it a rule, or write a really convincing top-level post arguing for it and build the necessary consensus). My suggestion was made under the assumption of the current status quo.
I second this question.
Isn't is possible that Prismattic's comment could be receiving so many upvotes because other people also find comments of the sort described irritating and are embracing the opportunity to signal that irritation? Like Prismattic, I don't generally downvote comments on this basis alone. But I'm definitely tired of seeing the types of comments described, especially in those instances when, at least to my eyes, the commenters seem to be affecting a certain world-weary sorrow and wisdom while hinting at the profound truths that could be freely discussed but for -alas!- the terrible tyranny of modern social norms. But because the commenters are hiding the exact substance of their own views, there's no basis on which to judge whether these views are, as Prismattic suggests, actually more correct than the mainstream view, or perhaps equally or even more wrong in some different direction.
If what's suggested is "You guys would punish me for stating my arguments, therefore I win the debate", I agree that's unreasonable. If what's suggested is "You guys would punish me for stating my arguments, therefore no real debate has taken place", I think that's far more reasonable.
I am also interested in a clarification.
Trying to put words to my own intuitions on the matter, I would stipulate a modified 3:
It may be unsafe (in terms of image/status/etc - I would certainly expect and hope not physically) to express certain views, particularly those sufficiently far from both societal mainstream and LW mainstream, and particularly those that touch too heavily on mind-killing topics.
It is reasonably within norms to acknowledge this, particularly with an eye to reducing its effect.
What is decidedly a violation of norms, I think, is to do so in a self-serving manner.
"Norms forbid honest discussion of my pet issue X, therefore X" is obviously flawed.
"Norms forbid discussion of my pet issue X, and I have strong evidence for X but can't share it because of those norms, so just trust me that X" amounts to the same thing, in terms of what kinds of discussions are possible. It is also, to some degree, inconsistent - it is unlikely that we forbid evidence for a proposition while allowing discussion otherwise implying/assuming it.
Perhaps my view is one of 1-3, but I'm finding it difficult to categorize it:
It is ill-advised to discuss certain topics on LessWrong; if they are discussed anyway, the following choices are in the decreasing order of preference: a) not join the discussion; b) state your view clearly and be prepared to defend it; c) hint at your view but refuse to explain it or cite evidence for it, claiming that'll violate a social norm.
b) is much better than c), but a) is much better than b).
It's the same attitude that I think already exists on LW for politics (strongly influenced by the mind-killer post).
So you prefer the situation in which a dubious mainstream view remains entirely unchallenged to a situation where a doubter, instead of remaining silent, states that it is likely wrong but that spelling out an explicit argument why it is so would violate social norms? As far as I see, the information made available in the second case is a proper superset of the information available in the former. So how can this constitute "obscurantism" in any reasonable sense of the term?
I'd prefer social norms be violated. Asserting that a proposition is wrong without explaining why one has reached that conclusion or presenting an alternative is not a behavior that is generally viewed as beneficial in any other context on Lesswrong.
ETA: I also see the widespread use on Lesswrong of "politically correct" as an attribution that prima facie proves something is wrong to be problematic. Society functions on polite fictions, but that does not mean that everything that is polite is inherently false.
Do you upvote people that do?
I have mostly grown tired of making comments where I mention a contrarian position. I get asked to explain myself; it sometimes leads to an argument, and I put a lot of work into comments that often end up at negative karma. I suspect those threads add to LW, but the feedback I'm getting is that they don't.
I'll understand if you refuse, but... would you mind terribly saving me the work of searching for an example of what you're talking about? Cause, see, if I'm right about what you're referring to (something I'm not sure of, hence the question) I generally do upvote things like that.
Also I've only been here, like, two months, so if you have some kind of reputation I'm not aware of it.
The most recent example would be my comment that everyone becoming bisexual might lead to a net social loss, although the karma scores have gone up since that discussion happened (and so maybe I just need to wait before updating on the karma of contrarian comments).
I spent way too long looking through other comments I've made, and only really came across this example. I suspect this was misapplying discontent caused by other arguments. I had already noticed a while back that when I made a sloppy comment it would often get downvoted, although I would be able to make up the karma by explaining myself downthread. The only other significant example I can think of was in a thread about infanticide where I accidentally implied that I could be for the criminalization of abortion, and that comment got kicked down to -3 karma, with +1 karma from my following comments. (It's hard to decide how that whole thread contributes to this question, because the person who said "well, I can't say this many places, but I'm in favor of infanticide" got upvoted to 41 karma. That suggests to me their position isn't contrarian locally, but I suspect it is contrarian globally.)
In general, it's been observed that a comment on a controversial topic will be downvoted heavily in a quick flurry but then usually recovers; high-quality such comments tend to end up significantly positive.
And now I have seen it observed by someone who isn't me. Good to hear external confirmation! :)
Careful - if you've stated it out loud, the observation noted above might be your own.
By the way, +1 for noting the tension between "Is that your true rejection" and "Policy debates should not appear one-sided".
This does not answer my question. You claim that a situation in which information X and Y is made available constitutes "obscurantism" relative to the situation where only information X is provided. Now you say that you would prefer that not just X and Y, but also information Z be provided. That's fair enough, but it doesn't explain why (X and Y) is worse than just (X), if (X and Y and Z) is better than just (X and Y). What is this definition of "obscurantism," according to which the level of obscurantism can rise with the amount of information about one's beliefs that one makes available?
I still consider myself relatively new here, only been around for a year -- but in that year I haven't seen any actual fact presented on LessWrong that's inflamed spirits one tenth as much as the obscure half-hints by trolls like sam and his "I can't say things, because you politically correct morons will downvote me into oblivion, but be sure that my arguments would be crushing, if I was allowed to make them, which I'm not, therefore I'm not making them" style of debate.
The "obscurantism" that Prismattic is talking about isn't yet as bad as that, but it has that same flavour, to a lesser degree. This sort of thing is... annoying -- hinting at evidence, but refusing to provide it -- and blaming this obscurity on the hypothetical actions of people who haven't actually taken them yet.
If the issue is e.g. whether science seems to indicate that the statistical distributions of physical and intellectual characteristics aren't identical across racially-defined subgroups of the human race, or across genders, or across whatever, then it can be discussed politely, if the participants actually seek a polite discussion, instead of just finding the most insulting way possible to talk about them. And if the participants are willing to use words like "average" and "median" and "distribution" and things like that, instead of using phrases that are associated with the worst metaphorical Neanderthals that exist in the modern world.
What I think inflames things far far worse is when people imply that you are incapable of discussing topics, but nonetheless hint at them. If the topic can't be discussed, then don't discuss it or hint at it at all. If it can be discussed, then discuss it plainly, clearly, politely; not trollishly or deliberately offensively or carelessly offensively. Take a single minute to see if you can impart the same (or more) information in a less offensive manner, e.g. "Is there a causal connection between the absence of a Y chromosome and average levels of mathematical aptitude?" may need a couple seconds more to write, but it'll probably lead to a better discussion than "Why and how do women suck at math?"
You are presenting the situation as if such hints were coming out of the blue in discussions of unrelated topics. In reality, however, I have seen (or given) such hints only in situations where a problematic topic has already been opened and discussed by others. In such situations, the commenter giving the hint is faced with a very unpleasant choice, where each option has very serious downsides. It seems to me that the optimal choice in some situations is to announce clearly that the topic is in fact deeply problematic, and there is no way to have a no-holds-barred rational discussion about it that wouldn't offend some sensibilities. (And thus even if it doesn't break down the discourse here, it would make the forum look bad to the outside world.)
At the very least, this can have the beneficial effect of lowering people's confidence in the biased conclusions of the existing discussion, thus making their beliefs more accurate, even in a purely reactive way. However, you seem to deny that this choice could ever be optimal. Yet I really don't see how you can write off the possibility that both alternatives -- either staying silent or expressing controversial opinions about highly charged issues openly -- can sometimes lead to worse results by some reasonable measure.
You also seem to think that merely phrasing your opinion in polite, detached, and intellectual-sounding terms is enough to avoid the dangers of bad signaling inherent to certain topics. I think this is mistaken. It might lead to the topic in question being discussed rationally on LW -- and in fact, this will likely happen on LW unless the topic is gender-related, provided it manages to elicit interest -- but it definitely won't escape censure by the outside world.
This.
I really really don't want such discussions to be very prominent, because they attract the wrong contrarian cluster. But I don't want LW losing ground rationality-wise with debates that are based on some silly premises, especially ones that are continually reinforced by new arrivals and happy death spirals!
Attracting the wrong people, and alienating some of the "right" people is a bigger concern to me than the reputation of the site as a whole (though that counts too). Another concern is that hot-button issues might eat up the conversations and get too important (they are not issues I care that much about debating here).
The current compromise of avoiding some hot-button issue, and having some controversial things buried in comment threads or couched in indirect academese seems reasonable enough to me.
I agree with this. But I wish to emphasise:
Some of us look at the state of LW and fear that punishment of this appropriate behaviour is slowly escalating, while evaporative cooling is eliminating the rewards.
I concur with this diagnosis -- and I would add that the process has already led to some huge happy death spirals of a sort that would not have been allowed to develop, say, a year and a half ago when I first started commenting here. In some cases, the situation has become so bad that attacking these death spirals head-on is no longer feasible without looking like a quarrelsome and disruptive troll.
Which is I think the current situation when it comes to criticism of say democracy.
Actually, general criticism of democracy isn't such a big problem. It can make you look wacky and eccentric, but it's unlikely to get you categorized among the truly evil people who must be consistently fought and ostracized by all decent persons. There are even some respectable academic and scholarly ways to trash democracy, most notably public choice theory.
Criticisms of democracy are really dangerous only when they touch (directly or by clear implication) on some of the central great taboos. Of course, respectable scholars who take aim at democracy would never dare touch any of these with a ten foot pole, which necessarily takes most teeth out of their criticism.
I think criticism of democracy goes over less well if you have something specific that you want to replace it with.
That is true, but you get into truly dangerous territory once you drop the implicit assumption that your criticism applies to democracy in all places and times, and start analyzing what exactly correlates with it functioning better or worse.
Sam doesn't do that. Sam trolls by stating his opinions fully. He then refuses to provide evidence.
Race differences have already been explicitly discussed with little problem, if not prominently so, do a search. Gender, sexuality and sexual norms are the great unPC problem of LessWrong.
Dishonest generalization, find two posters in addition to Sam who do this. I will wait.
Now contrast this to the average (even average anon double log in account) pro-hereditarian LW-er who brings up such points. There are far more Quirrells than Sams here, and Sams get heavily downvoted except on the rare occasions they make more reasonable posts (though the particular poster has probably burned out some people's patience and will get downvoted no matter what he says because he has consistently demonstrated an unwillingness to adapt to our norms).
This is quickly devolving into the worst kind of politicking one finds on otherwise intelligent forums.
But it is other people who keep dragging them up and discussing them. Politely stating that you disagree and they are wrong, then getting heavily upvoted (which indicates a significant if far from majority fraction of LWers agree with the comment), is surely better than not interrupting what you see as a happy death spiral?
Have we been visiting the same forum? I have often up-voted your responses to Sam0345's posts, indeed you nearly always successfully rebuke him. But I think your extensive interactions with him may be leading you to mistake an individual for a group.
I've decided to bow out of this thread -- as I've not significantly studied either PUA, nor cared to read about previous PUA-related threads in LessWrong, I can barely understand what you're talking about. Perhaps you've noticed a real problem that I haven't, exactly because you're focusing on different type of threads than I do.
The thing I had in mind was things like e.g. the guy who repeatedly and deliberately kept using the diminutive word "girls" to refer to female rationalists but "men" to refer to the male counterparts. This by itself -- when I perceived he intended to belittle women in this fashion, or at least didn't give a damn about not insulting them -- prevented any meaningful discussion of the actual argument he was engaged in (whether a male-only meetup would be useful or detrimental for the purposes of LessWrong).
OB and early LW consistently blew up whenever PUA and related issues were discussed.
He really shouldn't have done that.
I appreciate your point here, but you could have chosen a better example. Those two questions have the same capacity for offensiveness. They have the same content and are compatible with the same presuppositions and connotations. They just use different language.
Now perhaps there are people who, upon seeing "women suck at math", read "boo women!", and upon seeing words like "causal" and "Y chromosome", think about causes and effects. So if you're talking to one of those people, you'll want to use the fancier language. But not everyone is like that.
I care about this because I want to be able to talk about why so few of my mathematician colleagues are female, and why they feel so weird about it, and what can be done about it, without gratuitously offending people.
I am really curious how you can demonstrate equivalence between a question that follows the pattern "Why is (X) the case?" and a question that follows the pattern "Is (Y) the case?" -- even if (Y) is arguably equivalent to (X), only phrased in more polite language.
As far as I see, the first one asks for the explanation of something that is presumed to be an established fact, while the second one expresses uncertainty about whether (arguably) the same fact is true. How on Earth can these two be said to have "the same content" and be "compatible with the same presuppositions"?
However, you are quite right that these two questions have the same potential for offensiveness, in that outside a few quirky places like LW, neither the polite phrasing nor the expression of uncertainty will get you off the hook, contrary to what Aris Katsaris seems to believe.
Ah, I see, you're right; the content of the two questions are different. I noticed there was a substantial difference in language, and assumed that was the point of the example.
Surely that's a hyperbole. Now, I know lots of people would be offended by both questions, but I doubt most people would be equally offended by both, and plenty of people would be offended by one but not the other. As a woman who doesn't suck at math, I am down to discuss the first question, but the second one makes me want to slap you.
(Of course, by declaring myself a woman who doesn't suck at math, I have already proven my own nonexistence, so my opinion can, no doubt, safely be ignored. ;) )
Is it ok to threaten (or declare the desire to do) physical violence upon someone if you don't get your way simply because you are a woman? Careful which stereotypes you support. You don't usually get "heh. Female violence is harmless and cute!" without a whole lot of paternalism bundled in.
Slaps, generally, are relatively harmless. Unfulfilled desires to slap, even more so.
On the other hand, hasn't there been some discussion of the idea that you have to believe something, however briefly, to understand it?
Even though expressing a desire to slap has no macro bodily effect [1], it still has an emotional effect which is going to affect how a conversation goes, however slightly. [2]
[1] Tentative phrasing used to respect the idea that everything is physical, including thoughts and emotions, but that some things affect people physically more than others.
[2] I believe that "just ignore it" leaves out that ignoring things is work.
If I said something to offend you over the internet, and you said it made you feel like hitting me, I would think it was no big thing, especially if you went on to explicitly clarify that you would never actually hit me. I would not perceive it as a serious threat in any case.
If you said something like that in real life, in full public view with many onlookers, I might depending on your body language be slightly more concerned, but I would probably just raise an eyebrow and imply that you were being a creep. If I said the same to you, I wouldn't look as ridiculous, since most likely you're bigger and stronger than me, but I doubt it would win anyone over either.
If you actually physically attacked me, I would do my best to see that criminal charges were brought, and I would not physically attack anyone myself if I were unwilling to defend my actions in court. That last scenario is so far from what actually happened here that it really seems like a red herring, though.
Really? My instincts anticipate a significant negative response if I said I wanted to hit someone around here. On the order of a substantial faux pas not a personal security risk. But to be honest I haven't exactly calibrated that intuition all that much. Because I just don't go around saying I want to hit people.
That's uncalled-for. I am not asking either question. It's okay if you're offended by one but not the other.
Again, I care about this because I want to be able to talk about why so few of my colleagues are female, and why they feel so weird about it, and what can be done about it — without gratuitously offending people.
Those questions are not remotely equivalent. I suppose as a second-order implication, if you assume that the average man is not very good at math, you could also assume that the average woman is really not very good at math, but obviously both the male and female distributions have people above their respective means. In any case, "Why and how do women suck at math" sounds to me like "Why do all women suck at math," not like "Why does the average woman suck at math," even if the latter question were based on an accurate presupposition.
The distinction between gratuitously offending people and inadvertently offending people does not seem to be widely noticed, whether on Less Wrong or other places. Less Wrong has established implicit rules for what may be said, so there is a class of things that can be said on Less Wrong without getting into trouble that cannot be said elsewhere, but that class is so narrow, twisty, complex, obscure, and subject to change that I do not find it interesting, though Vladimir does seem to find it interesting.
To participate in consensus building on Occupy Wall Street, you need an Ivy League education in political correctness. Less Wrong is not nearly as bad as that, but Less Wrongers who tread near forbidden topics as Vladimir does are developing more expertise in what constitutes permissible thought, and the permissible means of expressing it, than they are developing expertise in the forbidden topics themselves.
Same capacity for offensiveness, perhaps -- in that some overly defensive people will surely choose to feel attacked ("be offended") just as much by either question. But same average offensiveness? I seriously doubt it.
Signalling is important. "Offensiveness" functions by signalling that you are an enemy. If you signal strongly enough that your question is about a desire to understand the neurobiological causes of a statistical phenomenon, not about an attempt to attack groups of people, fewer people will feel attacked.
Now some people will surely argue that people just "ought to grow tougher skins" instead. But that's an "ought"-argument, and I'm referring to an "is"-question: which choice of words and sentences leads to a better discussion.
What I am referring to as obscurantism are (usually implied) claims that "I possess information that refutes a mainstream view, but I'm not going to share it, because most people can't handle the truth in a nonmindkilling fashion."
cf. Wikipedia
That's not necessarily the claim (explicit or implied). It can also be that even if the information were to be handled in a non-mind-killing fashion, the resulting conclusions would be beyond the pale of what is acceptable under the current social norms.
As for the definition of obscurantism you gave, this is definitely not obscurantism under (1), since it withholds less information from the public than if one remains completely silent. As for (2), it doesn't involve abstruseness, deliberate or not, since the claim is in fact very simple (as e.g. spelled out above). The most you can say is that it involves deliberate vagueness, but even there, the purpose of the vagueness is not to mislead, confuse, or perform some rhetorical legerdemain, but merely to hand out a limited but perfectly clear piece of information.
It'd be interesting to see some sort of dumping ground of allegedly useful but socially unacceptable ideas, which may or may not be true, and then have a group of people discuss and test them. Doesn't seem completely outside the territory of LessWrong, but if you think these subjects are that hazardous, and that LessWrong is too useful to be risked, then a different site that did something along those lines is something I'd like to see.
An invitation-based mailing list of a group of high-karma non-ideological LWers seems the better route.
A site devoted to discussing impolite but probable ideas will, well... disappoint very quickly. Have you ever seen the comment section of a major news site?
I support this proposal and would like to join the mailing list if one becomes available. But why do you think a mailing list would fare better than a website? Because of restricted access?
I guess it has more of a "secret society" vibe to it. Oooh, ooh, can we call it the Political Conspiracy?
Is 1100 enough karma? I've tried to stay out of ideological debates, but I don't know precisely what the criteria would be. (And who would decide, anyway?)
The comment sections on iSteve and Roissy are not great places either.
In the period roughly from 2006 until 2009, there was a flourishing scene of a number of loosely connected contrarian blogs with excellent comment sections. This includes the early years of Roissy's blog. (Curiously, the golden age of Overcoming Bias also occurred within this time period, although I don't count it as a part of this scene.)
All of these blogs, however, have shut down or gone completely downhill since then (or, at best, become nearly abandoned), and I can't think of anything remotely comparable nowadays. I can also only speculate on what lucky confluence led to their brief flourishing and whether all such places on the internet are doomed to a fairly quick decay and disintegration. I can certainly think of some plausible reasons why this might be so.
Indeed, that's my point.
A non-archived mailing list, I think, to greatly reduce the potential cost of adding new members.
Trouble is, everything transported over the internet is archived one way or another. That is actually the main reason why I've been reluctant to push forward with this initiative lately.
Observe, however, the comment sections of certain horribly non-PC blogs. By and large, they are very smart and remarkably well informed. Censorship is never necessary, whereas in more politically correct environments, censorship is essential, because when non-PC views are spoken, commenters take it upon themselves to silence the heretic by any means necessary, disrupting communication.
If the blog owner posts fairly heretical views, and himself refrains from censoring or intemperately and rudely attacking views in the comments that are even more heretical than his own, then no one in the comments intemperately or rudely attacks any views that anyone expresses in the comments or on the blog.
The blog owner can say that left wing views are held by fools and scoundrels, but because left wing views are high prestige, a left commenter will not be called a fool and a scoundrel. If the blog owner refrains from saying that views more right wing than his own are held by fools and scoundrels, then commenters with views more right wing than his own will not be called fools and scoundrels in the comments.
Because right wing views are low prestige, it requires only the slightest encouragement from the blog owner to produce a dog fight in the comments, should someone further right than the blog owner comment, but not so easy to produce a dog fight when someone lefter than the blog owner comments.
This was previously discussed here. Right now, it's sounding like whatever (if anything) comes out of this will fail by being overly inclusive. My guess is that if this sort of thing ends up working well, it will be because some small group of people who happen to have good taste end up making decisions on a "trust me" basis, rather than because LessWrong as a community successfully applies some attempt at a transparently fair algorithm.
It sounds, then, as though you should be talking to the people punishing norm violations, not to the people responding rationally to such punishment.
Downvoted for invoking the name of the magic in vain, risking summoning its counterpart twin demon to devour us when you had no just cause! None I say!
What should I have said instead? "Incentive-followingly"? Maybe the fashion pendulum has swung too far toward not using the word.
"Calmly", "by punishing the punishment", "to the substance of the matter regardless of punishment".
People punishing norm violations aren't the villains of their own narratives, they think they're responding rationally.
My purpose in using the word was not to contrast good us to bad them, but rather to emphasize that the action Prismattic disagrees with (that of withholding one's opinion) is a move forced by an incentive that needn't itself have been set (and shouldn't have been set if Prismattic is right that opinion withholding is bad), and so it's more reasonable for Prismattic to complain to the incentive setters than to the incentive followers. Does that make sense?
The "people aren't villains of their own narratives" line always struck me as a little glib. Villains believe they're not villains, but does that mean they falsely believe they're some particular thing that truly is not a villain, or does it merely mean they correctly believe they're some particular thing that they falsely believe is not a villain (fail to label as villainous)? In my intuition these are two different things and the saying uses the plausibility of the disjunction of the two things to suggest only the first thing. Clearly villains usually gain some sort of satisfaction from their role in the world, perhaps even moral satisfaction, but that's not the same thing as there having been a good-faith effort to be a hero. I don't know, I may just be confused here.
Anyway, what matters is who's a villain in God's narrative (in the atheist sense of God). :)
I disagree with this, at least it's not at all obvious.
It means that at least on LW, they would also describe their behavior as rational (in certain contexts where reason is seen as an enemy, not everyone would be claiming the title "rational").
Clever.
Which does not necessarily mean we should change the way we treat them. They can tell themselves whatever story they like. And by punishing them appropriately they will either change their behavior or, perhaps most importantly, those witnessing the punishment will avoid the behavior that visibly invokes community disapproval.
"Straightforwardly," perhaps, or "shortsightedly" if you want to speak ill of them.
Who is punishing? (In the context of LessWrong)
Lowered karma. Rebuke. Deletion of posts. We might have some form of banning. Might want to check the wiki.
Also the punishment (mainly in the form of lowered status and tarnished reputation) that would be foisted upon LW as an institution by the broader society if it were to become a welcoming environment for various kinds of views that aren't very respectable.
Also: being habitually mentioned in the same breath with outrageous positions that one has taken in the past. Having words applied to one that, when repeatedly applied to people, cause those people to be shunned.
What I'm really displeased about is that we are so casually dismissed as troublemakers, as arguing in bad faith, or tarred with negative characteristics.
Look at our profiles. Look at our comments. You will find many very active and well-received posters whom you would otherwise consider an asset to the community. And then consider how massively upvoted some comments expressing such sentiments are! There are many more who never voice it but share chunks of this proposed map of reality.
Yes, many on LessWrong are knee-jerk contrarians, but please consider just how large a fraction of reasonable, polite, intelligent, sceptical LW contributors have basically thrown certain popular, overly optimistic ideas out of their model of the world, because the ideas in question just don't pay rent and are useful for signalling only. I dare say many found the departure from some of those ideas more painful and difficult than admitting to themselves that the religion of their childhood was false.
I know I did.
Update accordingly.
Aren't we already?
There are different degrees of severity. Being perceived as weird in a nerdy way is low-status, but it's nothing compared to being perceived as harboring fundamentally evil views. Most notably, the former sort of low status isn't infectious; you can associate with weird nerdy people without any consequence for the other aspects of your life. Not so when it comes to associating with the latter sort of people.
My question wasn't what tools of punishment are available, but whether there is actually a substantial amount of such punishment occurring merely for taking non-mainstream views.
I do not. If an idea is thought false, its critics say so. Otherwise, its critics suppress it socially. If some idea is socially suppressed, I infer its critics fear it is true. There is a famous essay on this I couldn't find, but here is a discussion of it.
What I think we’re in danger of forgetting is that, anywhere but Less Wrong, “That’s offensive!” is actually a really persuasive argument. People who blithely ignore even the strongest of evidence will often shut up and look stupid if you successfully play the offense card. PC arguments may be so commonly heard, not because they are the “best” (most valid) arguments that could be made in support of a given assertion, but because they totally work.
If someone says, with no factual basis at all, that members of Group X murder children, piles and piles of evidence may not be enough to make the claim go away, but if you can convince people that to say so is offensive and Anti-X, you're home free. So why bother presenting the evidence?
"You're wrong" implies "you're a liar," or a more direct response could be "that's a lie." If the goal is to make someone look stupid, this can work better. Admittedly that's not always a major goal, cases won't overlap, etc.
But I think we do see people make fact-citing arguments that are delivered in the tone of "that's offensive", so the methods aren't mutually exclusive. For example, any argument beginning "There is no scientific evidence that..." in an appropriately shrill tone sends the message that offense is taken and sidesteps the logical evidence to highlight the strongest available evidence, the absence of scientific evidence.
Even if the offense argument is explicit, factual arguments could at least be added to it.
Do you mean Paul Graham's What you can't say?
Yes. To gwern (verb) it, to reconstruct it from quotes according to the Pareto principle:
Add "politically correct" to the set of possible x and y and we are in agreement. This was the point of my original comment on the matter.
Saying things violate Paul Graham's principle isn't used here to dismiss ideas, only to, as you said, put the burden of proof on them as being prima facie false. I don't think that "heretical" was quite the same way, nor are "racist" and "fascist", etc.
I would never say "prima facie proves" so maybe we are using some words to express very different concepts.
This may be evidence that the critics fear that, but it isn't always the case. Sometimes they just think that there can be damage if people are misled by the falsehoods, for example.
Sure, it's not always the case. But if I just think that there can be damage if people are misled by a falsehood, I will probably claim it's false, and argue for that point.
This isn't really true. To give the most prominent example, Holocaust denial is heavily suppressed in Western societies, in many even with criminal penalties, although its falsity is not in any doubt whatsoever outside of the small fringe scene of people who espouse it. (And indeed, it really doesn't stand up even to the most basic scrutiny.) For most beliefs that the respectable opinion regards as deserving of suppression, respectable people are similarly convinced of their falsity with equal confidence, regardless of how much truth there might actually be in them.
Now, sometimes it does happen that certain claims are clearly true but at the same time so inflammatory and ideologically unacceptable that respectable people simply cannot bring themselves to admit it, even when the alternative requires a staggering level of doublethink and rationalization. In these situations, contrarians who provoke them by waving the obvious and incontrovertible evidence in front of their eyes will induce a special kind of rage. But these are fairly exceptional situations.
How do people respond to the claims? I acknowledge that any response other than just "that's false" de-emphasizes the falsity of it, but if the response is "That's a lie and illegal," that's a different sort of thing to say than "That's classist," or the like for other claims. If people respond with "The powerful Jews will lock you up for saying such a thing, by the way I think it's 15% likely true," then that's an interesting case too, one that isn't a counterexample.
In one sense legal coercion is at the far end of a single scale from mild disapproval to ostracization to illegalization, but in another sense it is qualitatively different. A country within which saying something is illegal might have most endorse the illegal idea, or most oppose it by simply calling it "false", or most oppose it by emphasizing its illegality and only somewhat mentioning its falsity, etc., or no majority of any type. What's important here is the social climate around the statements, for which the laws on the books are important evidence but alone don't make an example or counterexample of a country.
Yes, this is the precise complaint! To frame an argument as politically incorrect is to imply that all arguments against it are based on squeamishness. It's a transparent attempt to exploit the mechanism you describe, one so beloved of tabloid hacks that practically any right of centre* talking point can be described as politically incorrect ("you can't say [thing I'm saying right now on prime-time television] any more" and so on).
Why declarations of political incorrectness are taken any more seriously than claims to be totally mad/random or the life of the party I shall never know.
*Am I being, ah, what's the equivalent here - unserious perhaps? populist? - if I suggest that this trick is mostly limited to the right? That political correctness just means any non-socialist leftwing opinion, with the added implication that the opinion is both hegemonic and baseless. When left wing commentators trip over themselves to avoid criticising America or soldiers, or rush to condemn protests at the first sign of a black mask, nobody talks about political correctness. Despite all the talk about how OWS has made it acceptable to moral issues in ways that were previously beyond the pale, nobody calls it an anti-PC movement.
Perhaps we should have a separate term to describe this phenomenon, if we are going to keep going on about political correctness, and pretending we aren't talking about politics? Since otherwise we reach a point where commentators are unable to call people fascists, for being so PC is decidedly politically incorrect.
First, politically correct arguments are obviously a subset of arguments for conclusions that are the same as those reached by politically correct arguments.
Second, that conflates levels.
People don't randomly decide which arguments to give justifying their statements and actions, they tend to give the strongest ones they have available. Arguments that are politically correct are non-truth-citing arguments. The argument that an argument is politically correct is a non-truth-citing argument. Non-truth-citing arguments are generally weaker than truth-citing arguments.
See here. If someone presents a NTC argument, I infer they don't have a TCA unless there are extenuating circumstances such that I think that they would have presented a NTCA even when they had a TCA.
Likewise when someone presents a TCA, one can infer, all else being equal, that they don't have a much more compelling one available. Even weak TCAs ought to lower one's degree of belief that something is true when they are presented by someone who probably would have used a better argument had it been available, even though the argument is a valid and novel one and one had expected the arguments for the position to be better.
Imagine you are watching two people. The first makes a claim about a subject with which you aren't familiar. At that point, you assign it a certain credibility. The second objects with a NTCA. At that point, you should think the claim more likely than before because the best objection the second person could make was weak and your original estimate had expected them to do better. If the first person objects to the objection with a NTCA, then you should think the claim less likely than at the second point, because the best counterobjection the first person could make was weak and your estimate at the second point in time had expected them to do better.
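The two-person exchange above can be sketched as a toy Bayesian calculation. All the numbers below are made-up assumptions for illustration (the comment gives no figures); the only point is the direction of each update:

```python
# Toy model of the exchange: seeing only a non-truth-citing argument
# (NTCA) from someone who would likely have used a truth-citing one
# (TCA) if available shifts credence. Probabilities are illustrative.

def update(prior, p_obs_given_true, p_obs_given_false):
    """Posterior P(claim true | observation) via Bayes' rule."""
    joint_true = prior * p_obs_given_true
    joint_false = (1 - prior) * p_obs_given_false
    return joint_true / (joint_true + joint_false)

prior = 0.5  # initial credence in the first person's claim

# Assumption: if the claim were false, the objector would probably have
# a TCA available, so observing only an NTCA objection is more likely
# when the claim is true.
after_weak_objection = update(prior, 0.8, 0.4)  # rises to ~0.667

# Symmetrically: the claimant mustering only an NTCA counterobjection
# is more likely when the claim is false, so credence falls again.
after_weak_reply = update(after_weak_objection, 0.4, 0.8)

print(after_weak_objection, after_weak_reply)
```

With these particular (symmetric) numbers the second weak argument exactly cancels the first, but any choice where an NTCA is likelier from someone lacking a TCA reproduces the qualitative pattern described above.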
That "To frame an argument as politically incorrect" is an argument roughly as bad as a politically correct argument does not salvage politically correct arguments.
So, I think I have a reasonable sense of what people mean when they say an argument, or an assertion, is politlcally incorrect. Reading this, though, I begin to suspect that I have no idea what you mean when you say an argument is politically correct.
Ordinarily, I don't hear that term used to describe arguments at all, I hear it used to describe people who object to politically incorrect arguments... or who object to arguments on the grounds that they are politically incorrect.
Among other things, I can't tell if you intend for "politically correct" and "politically incorrect" to be jointly exhaustive terms, or whether there's a middle ground between them. If the latter, I think I agree with most of what you say here, though I'm not sure how many real-world arguments it applies to.
I mean an argument with a few characteristics:
For example, they don't take the form "It's not true that all violent rapes in the city were perpetrated by immigrants." They take the form "It's insensitive to say that all violent rapes in the city were perpetrated by immigrants."
For example, "We've done experiments, and the results suggest no difference in intelligence between Koreans and Chinese; controlling for other factors, there are probably no measurable differences between the groups" is not a PC argument, because it appeals to truth. "The assumption that Koreans are smarter than Chinese is racist; if you properly controlled for environmental differences, there would be no measured difference between the groups" has a very similar conclusion, and is a PC argument. It's not the argument's conclusion that makes it PC or not.
For example, arguing that something is wrong because "A Muslim said it" is obviously neither truth citing nor PC. PC arguments are those that are rationalizations for a particular set of conclusions.
Truth-citing and non-truth-citing are just poles of a range. Arguments such as evolutionary debunking arguments attempt to show a loose relationship between a proposition and the truth - loose, neither tight nor non-existent.
Unlike PC arguments, PI arguments are just those with conclusions or implicit assumptions targeted by PC arguments. Mercy said "To frame an argument as politically incorrect is to imply that all arguments against it are based on squeamishness. It's a transparent attempt to exploit the mechanism you describe..." This is largely true. The framing corresponds to a certain degree with reality in each case.
Positions for which the best argument is "My opponent's argument is PC" are weak. This weakness is because the accusation that the argument is a rationalization for a predetermined conclusion, i.e. that it is a PC argument, does not attack the conclusion directly. The accusation is a form of evolutionary debunking argument, and weakens the evidence brought for the conclusion without destroying the evidence and without attacking the conclusion. The accusation is weak in a way similar to all PC arguments.
Mercy went wrong in thinking that because calling out arguments as being PC and thus not tightly bound to truth of their conclusions does not address the conclusions either, arguments' actual status as PC arguments is unimportant.
The reason to especially doubt arguments usually supported by the argument "This argument is rejected because it is a politically incorrect argument," is that valid arguments with true premises and conclusions can usually do better. There is an excuse to say "This argument is rejected because it is a politically incorrect argument," so long as one has prioritized better arguments, or if it is to explain rather than argue for something, e.g. to explain why someone was fired but not why the statement that person was fired for is true.
(nods) OK, I see what you're getting at, at least generally. Thanks for the clarification.
One thing...
This would make significantly more sense to me if it said "incorrect." Was that a typo, or am I confused?
What does this mean?
I believe a "discuss" (or synonym thereof) was omitted between "a" and "moral."
Your question rests on an assumption that obscurantism must decrease information, but I see that assumption as incorrect. In fact, under this assumption I should never regard anything said to me as obscurantist, as it should never decrease the amount of information available to me.
Wikipedia defines "obscurantism" as "the practice of deliberately preventing the facts or the full details of some matter from becoming known", and it seems to fit the bill. Of course, it may be a useful or beneficial species of obscurantism, though I agree with Prismattic that it is not.
The situation as you describe it seems pre-biased by postulating that the mainstream view is dubious. This may be obvious to you, but to me, the person who's faced with the "hints" as described, it is not - if it were, I shouldn't need the hints to begin with. I think it's incorrect to condition on the dubiousness of the mainstream view. If I am to decide how best to take into account hints of that nature, the possibility that the mainstream view is correct after all, and the hint entirely specious, should not be disregarded. In fact, in real-life situations where such hints are offered, this may be the more frequent scenario.
The hint that says "this view is incorrect, but I will not explain why, for doing that will violate a social norm" is annoying and distracting; it engages my attention, bringing no real evidence for its claims. Because it posits a mystery, I'm likely to err on the side of giving it more attention than it deserves. The benefit is that it may cause me to investigate the view more thoroughly than I would otherwise have, and realize it is incorrect. If I precommit to ignoring such signals, I will miss some chances of that, and I will also avoid giving my attention, and more closely investigating, all those views that are correct after all, and where the signal was specious. The bargain may well be worth it.
What makes obscurantism a relevant category is that certain ways of withholding information and intentional abstruseness can be very effective for misleading people and producing convictions without evidence. In LW parlance, it is a particular kind of Dark Arts. Now, of course, it makes no sense to debate definitions when there is a true disagreement about them, but I think it shouldn't be controversial to insist that the normal meaning of "obscurantism" involves this Dark Arts element. In other words, it involves withholding information with the intent to mislead and produce mistaken or unsubstantiated beliefs, and it cannot be applied to every act of withholding information intentionally.
I do think the Wikipedia definition you quoted is unreasonably overbroad, considering the standard usage of the word. It would cover all sorts of completely honest, reasonable, and non-misleading acts of communication where one chooses to limit the amount of information given -- for example, saying that you got a new job but not disclosing the salary, or writing blog comments under a pseudonym.
It is not true that it brings no significant evidence, if the source of the hint is someone about whom you have other information -- and information about the intellectual abilities, knowledge, and likely biases of frequent commenters is easy to get in a forum like this one (if you don't in fact have it already). And you can always simply ignore such hints if you believe you have insufficient information, or you don't feel like looking for it, the way you presumably ignore any other comments that are not of interest to you.
Also, I note that your complaint here doesn't state that these hints are misleading and apt to trigger biases leading to incorrect beliefs, so you must indeed be working with the broadest possible (and I would say overbroad) definition of "obscurantism."
It may indeed -- but why precommit unconditionally, without considering the source of these signals?
Not technically true. It is possible to make a perfectly rational mind produce worse predictions about the world by providing it with selected information. This relies on it having insufficient information about your obscuring tendencies or motives. The new probabilities that the rational agent has will necessarily be a subjectively objective improvement but can still produce worse predictions of the relevant aspects of the world in an objective sense.
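The point that selected information can worsen a rational agent's objective predictions can be sketched with a small simulation. Everything here is an illustrative toy (numbers and prior are my own assumptions): each individual update is correct Bayes, yet the filtered evidence stream drives the agent away from the truth.

```python
import random

# Toy illustration: a fair coin, but a reporter relays only the heads.
# An agent unaware of the filter models reports as unfiltered flips.

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1000)]  # fair coin
heads = sum(flips)

# Beta(1, 1) prior on the coin's heads-probability, updated on the
# reported outcomes only; the tails were silently dropped, so every
# (individually correct) update pushes in the same direction.
a, b = 1 + heads, 1
posterior_mean = a / (a + b)

print(round(posterior_mean, 3))  # close to 1.0, far from the true 0.5
```

The agent's posterior is a subjective improvement given what it saw, yet its prediction of the coin's behavior is objectively worse than its prior, which is exactly the failure mode described above.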
You're right, of course. I edited away that part, which is not relevant for the main point anyway.
Yes, why should the heretic have the right to remain silent! If he speaks truth the good doctors of the holy mother church will surely update their theological arguments accordingly and if not, well why is he risking his immortal soul by relying only on his feeble and fallible mind?
We're not discussing anyone's right to remain silent. The objection is to a heretic's tendency to announce himself as a heretic without mentioning any doctrinal specifics, then run away giggling.
After diligently reading through most of Vladimir_M's comment history, I have no option but to express my fervent agreement. I've always had a great dislike of vague hints, but the style in which he does those is just fucking unbearable.