[LINK] Why I'm not on the Rationalist Masterlist
A long blog post explains why the author, a feminist, is not comfortable with the rationalist community despite thinking it is "super cool and interesting". It's directed specifically at Yvain, but it's probably general enough to be of some interest here.
http://apophemi.wordpress.com/2014/01/04/why-im-not-on-the-rationalist-masterlist/
I'm not sure if I can summarize this fairly but the main thrust seems to be that we are overly willing to entertain offensive/taboo/hurtful ideas and this drives off many types of people. Here's a quote:
In other words, prizing discourse without limitations (I tried to find a convenient analogy for said limitations and failed. Fenders? Safety belts?) will result in an environment in which people are more comfortable speaking the more social privilege they hold.
The author perceives a link between LW type open discourse and danger to minority groups. I'm not sure whether that's true or not. Take race. Many LWers are willing to entertain ideas about the existence and possible importance of average group differences in psychological traits. So, maybe LWers are racists. But they're racists who continually obsess over optimizing their philanthropic contributions to African charities. So, maybe not racists in a dangerous way?
An overly rosy view, perhaps, and I don't want to deny the reality of the blogger's experience. Clearly, the person is intelligent and attracted to some aspects of LW discourse while turned off by other aspects.
I AM AN IDIOT, THEY ARE REAL AND TELLING THE TRUTH.
Original comment, for your viewing pleasure: (I hate it when people delete comments so you can't understand what was going on.)
Well, be embarrassed, past me. Unwilling to accept the lack of evidence, I looked around for some more, and they are either a real person or a truly spectacular hoax that spanned years building up a fake history.
So ... there you go, folks.
Yvain's excellent response.
I thought the "never read the comments section" rule could be safely ignored on that post, since comments were turned off. After following two of the pingbacks, I wanted to throttle two separate people and demand of them the ten minutes of my life that they wasted.
Lesson learned: Never read the comments section. No exceptions.
... says a post in a comments section
Where can we talk about it? He has comments turned off.
It might be sensible to make a link post for it in discussion, but it seems reasonable to discuss it here for continuity's sake.
Actually, it may be best to not discuss it in short-form comments; I haven't read the 772 comments on this post (I've been on vacation), and I don't expect to start now.
Who here thinks that the author of the blog post is female? I did.
Surprise(?)! The blog post doesn't seem to contain any information that would allow you to deduce the gender of the author. I briefly searched through the blog post and the comment found on Yvain's site, but I became none the wiser (I stopped searching at that point to respect the author's privacy). I wonder why I thought that the author of the blog post is female...
FTM transgender, I think. It's a bit unclear...
I haven't read the blog post yet, but I expect that being a feminist blogger (which was noted in the OP here) is a moderate-good predictor of being female (or at least not a hetero male).
I found gender conspicuously absent. Indeed, actual information about anything was conspicuously absent. I was strongly reminded of a curious feature of a flamewar that raged over SF-related blogs in 2009, which came to be called RaceFail.
I only came across that discussion a year after it had ended, through a chance mention somewhere, and was curious enough to look and see what it had all been about. You might think that easy: hyperlinks surely let one follow the discussion back all the way to the original postings that started it? Not at all. The curious pattern was this, and I observed it on all sides of the argument. People who were commenting on a blog post they agreed with would link directly to the specific post, and quote directly from it. People who were commenting on a blog post they disagreed with would not do that. They would link, if at all, only to the top level of the blog, and not quote but only paraphrase its content, or merely allude to it in terms that would convey little unless one had already read it -- and of course, upwards of a year after the event, there would be little possibility of tracking down which of dozens of possible postings they were talking about.
The blog post discussed here is all like that. Clearly, the author disagrees with someone and something, but never says who, what, where, or when. Everything is generality and allusion. To understand the allusions is the entry requirement, as it was for those RaceFail posts. The purpose of such writing is to be understood only by one's own side, to be a nod and a wink to say, "we know what I'm talking about, don't we?", and to leave no definite point for the enemy to attack. The difficulty that one has created for anyone outside the circle to engage with the matter can then be taken as further proof of their evilness.
That does seem vaguely appropriate, given their pseudonym is taken from this rhetorical device.
Apophemi turns that on its head. The rhetorical figure involves mentioning a thing in the act of avowing not to speak of it. Apophemi refrains from naming their matter, while speaking of it at great length.
I don't think it's possible to get a good overview of RaceFail. Aside from the linking issue (which I hadn't noticed), some of the material being attacked has been taken offline, and of course there was plenty happening in private contacts which was never online.
If I look through this thread I find that there are plenty of people who had no trouble engaging the article and pointing out things of disagreement.
It's no easy text and you probably need some understanding of the underlying ideas, but it doesn't seem to me to be impossible to engage.
How much of that is because people just imagined their own ideas in the not-very-specific article, and responded to that?
If I just told you: "Someone was criticizing LW" and stopped here, it's not like your mind couldn't complete the pattern with some easily available scenarios.
The blog post contains very little specific information about anything. Without the "TL;DR" at the end, I couldn't even deduce what the post was actually about (beyond: somewhere in the rationalist community someone said something offensive... and this is why I'm not on the Rationalist Masterlist).
Not that it particularly matters, but I assumed male (for reasons that aren't entirely clear to me) until I got to the line about being misgendered, at which point I shrugged my metaphorical shoulders and mentally tagged it as undetermined.
I have read fairly many blog entries similar to this one, and to my recollection all were written by women.
Because it's a valid Bayesian inference based on the content of the post.
It's really not. They refer to being misgendered, which should have been strong evidence your assumption was mistaken. And indeed, if you had clicked through to their "about" page you would have found they prefer to be referred to with male pronouns.
I don't really care - I'm fairly certain this is the work of a troll - but hey, you claimed it was an example of valid Bayesian inference, so naturally I'm going to leap on it.
Given the issue of being misgendered, the person seems to be a transperson who either was female in the past and is now male, or who was male and is now female. Do you think the post indicates which of those is the case?
I think the post makes clear that the person is not a cis male, but it's difficult to say anything more specific.
Damn. I've been referring to the author as female because other people were.
Given this sentence -- "...one person who has repeatedly misgendered me" -- from the second paragraph, it might be that the sex/gender of the blog author is... complicated.
Aha! I think that sentence is why I assumed the author was female - I remembered that there was a reference to them being upset by something relating to their gender, so I pattern-matched that to "female feminist".
That's a heuristic to keep an eye on.
Social justice rhetoric tends to lose me when it shifts from "I should be heard in the conversation because I can contribute to it" to "I should be heard in the conversation because I cannot contribute to it."
How did that blog post come to your attention? It appears to have just been created, and that is the only post on it.
I was following a thread from August on Yvain's site. The author of the blog post we are discussing added a comment there on January 4 and Yvain replied. I should have included this link in my original write-up. And since I've started criticizing myself, I should have left out my half-baked musings on racism and spent more effort on summarizing the post I was linking to. For example, it might have been a good idea to quote the following:
This shows that the author is able to taboo words in order to improve readers' understanding. A communication skill justifiably prized on LessWrong.
I find the addendum striking: it is mainly a list of examples of objecting to tabooing words, but it includes a footnote tabooing "politically correct" (though I find that particular tabooing in bad faith, unlike the example of "privilege" in the main text).
Note that this community reacts badly to some topics as well.
The failing of being mind-killed by certain triggers is pretty common, and I have to give this author credit for recognizing it. I'd prefer an attempt to analyze and work with the emotional impact, to find ways to continue to discuss things where rationality is possible, rather than a set of examples that trigger this person specifically.
I do wonder if we should have a way to add trigger-warnings and filters to posts and comments. It can't be made perfect, and there are some interesting and smart people who will still not be able to participate, but it could help for those who just can't be open and tolerant of some topics. And it could perhaps allow us to explore some of WHY some topics are mind-killing to some people, and find ways to work around it rather than just avoiding the topic.
In the sense of being unfavourable to some topics, or being irrationally unfavourable to those topics?
In the same sense as the linked post: recognizing that there may be value in the discussion, but not trusting ourselves and each other to be rational enough to actually get the value in open discussion.
On one level, that's a rational reaction to our limitations in rationality. But those limits are irrational.
The author apparently has the privilege of living in a bubble where everyone she knows fundamentally approves of all her opinions, but occasionally one person out of 20 shows up at a gathering who disagrees, and she just may throw a fit if that person dares voice their opinions.
Me - atheist, egoist, libertarian - I'm lucky if one person out of 20 won't think I'm the devil if I'm open about my opinions. I weep for the discomfort she feels when my existence impinges on her awareness.
I note that a Christian or Muslim describing how they are hurt by those who dare openly(!) question their sacred values wouldn't receive such polite consideration, and certainly not by this blogger.
Are you ever in physical danger because of your opinions?
I don't believe the blogger was in any danger because of her opinions at a dinner party either.
My guess is that she travels in a terribly civilized circle where watching a boxing match would induce fainting spells. I travel in fairly pansified circles myself, and that's the way I like it. I like civilization.
As for actual violent crime, all the crime statistics I've seen show that men are at least as likely to be victimized as women.
Even in terms of partner violence, all of it that I'm aware of in my circles involves females acting out against their partners in rather dangerous ways. We've been laughing for years about how a female friend gave her boyfriend a shove down a staircase right in front of me in college. He managed to catch himself on the sloping ceiling above and avoid crashing to his death. The look he gave her in return was priceless.
Because you see, it's funny when women try to hurt men. When it's the other way around, it's a crime against humanity. And we all have to be thrown into a tizzy at the thought of violence used against a woman. The mere thought of the possibility of it entitles the blogger to have all opinions that give her a twinge of worry shut down. No matter that the statistics show that the evil enpenised person she shuts down faces the same or more risk of actual violence.
We have no idea how much violence the blogger has actually experienced, but it might have something to do with why they're so concerned about it. I'm more than a bit surprised that they find SJ (?) circles so emotionally safe, but maybe they haven't run into the nastier emotional attacks in a way that affects them personally.
I agree that violence by women against men is all too frequently treated as funny -- in popular art as well as privately. Is there anyone here who follows popular art enough to have an opinion about whether this has changed and in what direction?
I think violence against men by women not being taken seriously is partly sexism against women-- an idea that women aren't strong enough to do real damage. The other half of the problem (this is probably obvious to you) is a highly mistaken belief about how tough men ought to be.
Sometimes violence by men against men is portrayed as funny, too.
Violence by men against women portrayed as funny isn't as common but there are still some classic examples.
Violence by women against women is another trope entirely.
Monty Python and the Holy Grail: 1975
Airplane: 1980
It seems to me that a certain sort of violence by women against men was a common trope some decades ago-- perhaps other people can tell me whether it's still popular.
He says something obnoxious. She hits him, and not with a slap-- with a solid punch coming up from the ground. Big laugh from the audience. Rather implausibly, he isn't injured and he doesn't retaliate.
Monty Python was the example of men vs men. The examples of women against men were Airplane (1980) and Repo! The Genetic Opera (2008).
Those were examples of men against women being funny.
Oops...but now I don't know why Nancy was giving dates.
Maybe to show that things have changed somewhat? Repo! The Genetic Opera is something of an unusual movie, but it's more recent than Airplane! is.
As you might expect, there's a trope for this. (caution: TVTropes link)
Judging from the examples, the answer is "yes", although I don't know comedy well enough to say whether these are truly representative.
This trope might be closer.
I knew there was something I was forgetting.
Though on second examination, that one looks to be more about the sight gag than the violence dynamic. Armor-Piercing Slap (warning: TV Tropes) can include violence, but all it requires is humiliation, contra NancyLebovitz's description.
Not really a valid question; I feel similarly, but you quickly learn to suppress it when the situation becomes questionable. Anyone who reacts strongly to my more mainstream opinions is almost certainly going to be a lost cause when it comes to my extremist opinions. I can't say I've been in physical danger because I've never pushed it to that point. However, I can think of instances where physical danger was on the table of options (the KKK in Minnesota is a good example).
Since it has suddenly become relevant, here are two results from this year's survey (data still being collected):
When asked to rate feminism on a scale of 1 (very unfavorable) to 5 (very favorable), the most common answer was 5 and the least common answer was 1. The mean answer was 3.82, and the median answer was 4.
When asked to rate the social justice movement on a scale of 1 (very unfavorable) to 5 (very favorable), the most common answer was 5 and the least common answer was 1. The mean answer was 3.61, and the median answer was 4.
In Crowder-Meyer (2007), women asked to rate their favorability of feminism on a 1 to 100 scale averaged 52.5, which on my 1 to 5 scale corresponds to a 3.1. So the average Less Wronger is about 33% more favorably disposed towards the feminist movement than the average woman (who herself is slightly more favorably disposed than the average man).
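The 1-100 to 1-5 conversion above is presumably a straight linear rescaling; here is a minimal sketch of that arithmetic (the `rescale` helper is mine, not from the survey analysis):

```python
def rescale(x, lo_in, hi_in, lo_out, hi_out):
    """Linearly map x from the interval [lo_in, hi_in] onto [lo_out, hi_out]."""
    return lo_out + (x - lo_in) * (hi_out - lo_out) / (hi_in - lo_in)

# 52.5 on a 1-100 scale corresponds to roughly 3.1 on a 1-5 scale
print(round(rescale(52.5, 1, 100, 1, 5), 1))  # 3.1
```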
I can't find a similar comparison question for social justice favorability, but I expect such a comparison would turn out the same way.
If this surprises you, update your model.
I'm not sure about that. To my System 1, “50/100” means ‘mediocre’, whereas “3 stars (out of 5)” means ‘decent’.
Would love to see these numbers broken down by gender.
For the sake of simplicity, I used sex rather than gender and ignored nonbinaries. The average man on the site has a feminism approval score of 3.75; the average woman on the site has a score of 4.40. These are significantly different at p < .001.
The average man on the site has a social justice approval score of 3.55; the average woman on the site has a score of 4.21. These are, again, significantly different at p < .001.
Wow, this is exactly opposite of what I expected. Thank you!
You expected men to be more feminist than women? Why?
Because the Internet is weird? I've seen conversations in which the only feminists were men and the only MRAs were women.
(Myself, I expected the difference to have the same sign but be an order of magnitude smaller.)
BTW, FWIW in the survey on your blog men thought that being a woman is 3% worse than being a man and women thought that being a man is 3% better than being a woman, though the exact numbers varied noticeably depending on which question exactly they were answering.
Do you mean that this specific demographic difference is "weird" on the internet relative to real life?
Perhaps what he expected was for men to call themselves more feminist than women, for some sort of signalling reasons (of course anon survey responses aren't much use for signalling, but maybe the idea is that people get into the habit of describing themselves in particular ways and then continue to do so for consistency even in contexts where there's no signalling benefit).
They are if you signal for the group and expect other people to do the same.
Perhaps this is obvious already, but the positions people explicitly endorse on surveys are not necessarily those they implicitly endorse in blog comments.
Anyone want to set up an implicit association test for LW?
Also, people are free to interpret blog comments as it suits their goals.
Offtopic, but ETA on the survey results being published?
Probably before the end of this month.
How big is the probability?
Maybe that's exactly what makes LW a good target. There are too many targets on the internet, and one has to pick their battles. The best place is the one where you already have support. If someone wrote a similar article about a website with no feminists, no one on the website would care. Thus, wasted time.
In the same way, it is more strategic to aim this kind of criticism towards you personally than it would be e.g. towards me. Not because you are a worse person (from a feminist point of view). But because such criticism will worry you, while I would just laugh.
There is something extremely irritating about a person who almost agrees with you, and yet refuses to accept everything you say. Sometimes you get angry about them more than about your enemies, whose existence you already learned to accept. At least, the enemies are compatible with the "us versus them" dichotomy, while the almost-allies make it feel like the "us" side is falling apart.
EDIT: Seems like you already know this.
Possible defence: criticizing specifically people and organizations of similar views can be more cost-beneficial. If you write a giant article addressed to people of a similar-but-not-identical position, and that article tweaks their views and massively increases their instrumental rationality, it's much better than if you write dozens of articles addressed at your political enemies, most of whom will never read them or will never be convinced. For example, you have communist friends who are mostly correct on social issues, but are completely wrong on dialectical materialism, their organizations are polluted with death spirals, their discussions are counter-productive because of wrong usage of words, etc. It would still be more productive to direct them to LessWrong, to explain reductionism, to teach them how to use words and reasoning, how to avoid cognitive and organizational failure modes, than to try to bring neo-nazis, Christian fundamentalists, New Agers or even generic consumerist Philistines up-to-date from scratch.
This of course assumes that writing a giant critical article is actually a productive way to change someone's mind. Obviously, hateful feminist anti-LW/anti-nerd rants are doing a bad job of convincing us of their points, because most of those writers have never read Dale Carnegie, let alone Cialdini. But they write angry rants anyway, because that's how they get, as Russians say it, "the feeling of fulfilled duty", a warm fuzzy.
"A heretic is someone who shares almost all of your beliefs. Kill him." - Some card game
Upvoted for that.
In my experience, groups that want something to attack will attack groups that are generally aligned with them, rather than groups that are further away -- possibly due to the perceived threat of losing members to the similar group.
I've seen so many Communists get called Nazis by other Communist groups -- and those groups never go after people who actually call themselves Nazis.
~~Update: Likely that feminist-inclined LWers are less likely to comment/vote and more likely to take surveys.~~
Meta-update: This hypothesis ruled highly-improbable based on more data from Yvain.
Among lurkers, the average feminism score was 3.84. Among people who had posted something - whether a post on Main, a post in Discussion, or a comment, the average feminism score was 3.8. A t-test failed to reveal any significant difference between the two (p = .49). So there is no difference between lurkers and posters in feminism score.
Among people who have never posted a top-level article in Main, the average feminism score is 3.84. Among people who have posted top-level articles in Main, the average feminism score is 3.47. A t-test found a significant difference (p < .01). So top-level posters were slightly less feminist than the Less Wrong average. However, the average feminism of top-level posters (3.47) is still significantly higher than the average feminism among women (3.1).
I update in the direction that the model of people I form based on LW comments is pretty inaccurate.
My conclusion is that most posters in LW have conventionally liberal views (at least on social issues) but many of them refrain from participating in the periodic discussions that erupt touching on these issues. Some possible reasons for this: i) they hold these opinions in a non-passionate way that does not incline them to argue for them; ii) they are more interested in other stuff LW has to offer like logic or futurism and see politics as a distraction; iii) they mistakenly believe their opinions are unpopular and they will suffer a karma hit.
iv) they absorbed these views from their surrounding culture and don't actually have good arguments for them.
I agree that this is a very plausible possibility as well. However, IADBOC for two reasons.
First, a large part of views like "feminism" and "social justice" are plausibly terminal values. These terminal values are probably absorbed from the surrounding culture, but it is not clear how they could be argued for against someone who held opposite values. In addition, for the descriptive components of these views, "most people hold them absorbed from general culture and can't argue for them" is not correlated with "unjustified, untrue beliefs". The same description would apply to most ordinary scientific beliefs held by non-experts.
But is, as Yvain has explained on his blog, more likely to be associated with true or at least reasonable beliefs. Reasonable beliefs are more likely to become commonly accepted beliefs, and most people who hold commonly accepted beliefs absorbed them from general culture and have never seen a need to make sound arguments for them.
Observe that this argument applies even more strongly to beliefs that have lasted a long time. In particular it applies much more strongly to religion.
I don't think that that is an important distinction. Most of the effect I was talking about is that it is easier for something reasonable (something with a relatively large probability of being true) to make the jump from controversial belief to generally accepted belief. Once something is generally accepted and people stop arguing about it, there is no strong mechanism rejecting false beliefs.
To the contrary, new beliefs can seem more reasonable by being associated with previously accepted beliefs, so beliefs in clusters of strongly held beliefs such as religions and certain ideologies are less likely to be true than the first belief in the cluster to become generally accepted.
Disagree here. Unless your terminal values include things like "everyone believing X regardless of its truth value" or "making everyone as equal as possible even at the cost of making everyone worse off", the SJ policy proposals don't actually promote the terminal values they claim to support. One could equally well claim that opposition to cryonics is based on terminal values.
Or for that matter religious views by non-theologian theists.
Your model of Feminism/SJ differs from mine. Most of the cluster of my-model-of-SJ-space consists of the terminal value "people should not face barriers to doing what they want to do on account of factors orthogonal to that goal" (which I endorse).
My model of SJ also includes (as a smaller component) the terminal value "no one should believe there are correlations between race/sex/gender and any other attribute or characteristic", which I don't endorse.
What kind of factors count as "orthogonal to that goal"? If my goal is to become a physicist, say, does the fact that I'm not very intelligent count as an "orthogonal factor"? If the answer is no, then this is one form of my claim of them trying to make everyone as equal as possible even at the cost of making everyone worse off.
If the answer is yes, the question arises what their objection is to some disciplines having demographics that differ from the general population. Given that they tend to take this as ipso facto evidence of racism/sexism/etc., this shows that denial of correlations between race/sex and other attributes is in fact much more central to their belief system than you seem to think.
BTW, the other form of my claim can be seen in the following situation: You need to choose between three candidates A, B and C for a position. You know that A is qualified and that one of B or C is also qualified (possibly slightly more qualified than A), but the other is extremely unqualified (as it happens, B is the qualified one, but you don't know that). However, for reasons beyond either A or B's control, it is very hard to check which of B or C is the qualified one. Does hiring A, even though this is clearly unfair to B, count as "creating a barrier orthogonal to the goal"?
In my case it's something similar to (ii)... I often feel that arguing in favor of my views will not be a useful contribution to the discussions that periodically erupt on these issues, so I don't. (Sometimes I do.)
Possible, but I suspect the "Why our kind can't cooperate" both has a stronger effect and is more likely.
Indeed. I weep to imagine what the author of the linked article would think of us if she decided to check out the discussion her piece had inspired.
To paraphrase: Our community is exclusionary in the sense that its standards for what constitutes an information hazard (and thus a Forbidden Topic) are as stingy as possible, which means that it can't be guaranteed safe for people more vulnerable to psychological damage by ideas than the typical LessWrong crowd.
It's possible that this problem could be resolved with a more comprehensive "trigger warning" tagging system and a filtering system akin to tumblr savior. Then there could be a user preference with a list of checkboxes, e.g.
etc.
This could also double as protection for people who want to participate in LessWrong but have, for example, Posttraumatic Stress Disorder which could be triggered by some topics.
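A minimal sketch of how such a tag-based filter might work, in the spirit of tumblr savior (the post structure and `visible_posts` function are hypothetical illustrations, not an existing LessWrong feature):

```python
def visible_posts(posts, blocked_tags):
    """Return only the posts whose tags don't intersect the user's blocked set."""
    blocked = set(blocked_tags)
    return [p for p in posts if not blocked & set(p.get("tags", []))]

posts = [
    {"title": "Decision theory open thread", "tags": []},
    {"title": "A contentious topic", "tags": ["politics"]},
]

# A user who has checked the "politics" box sees only the first post
print([p["title"] for p in visible_posts(posts, ["politics"])])
```

The filtering happens per-user at display time, so tagged posts remain visible to everyone who hasn't opted out of that tag.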
I think it's worth noting that we are (yet again) having a self-criticism session because a leftist (someone so far to the left that they consider liberal egalitarian Yvain to be beyond the pale of tolerability) complained that people who disagree with them are occasionally tolerated on LW.
Come on. Politics is rarely discussed here to begin with and something like 65%* of LWers are liberals/socialists. If the occasional non-leftist thought that slips through the cracks of karma-hiding and (more importantly) self-censorship is enough to drive you away, you probably have very little to offer.
*I originally said 80%, but I checked the survey and it's closer to 65%. I think my point still stands. Only 3% of LWers surveyed described themselves as conservatives.
Interesting. I wonder why LW has so few conservatives. Surely, just like there isn't masculine rationality and feminine rationality, there shouldn't be conservative rationality and liberal rationality. It also makes me wonder how valid the objections are in the linked post if the political views of LW skew vastly away from conservative topics.
Full disclosure: I'm a black male who grew up in the inner city and I don't find anything particularly offensive about topics on LW. There goes my opposing anecdote to the one(s) presented in the linked blog.
See the penultimate paragraph of this comment, take a look at this, and try to guess whether US::conservatives have higher or lower Openness in average than US::liberals.
There is a big difference between what sex you are and what beliefs you profess: The first should not be determined by how rational you are, while the second very much should. There should be nothing surprising about the fact that more intelligent and more rational people would have different beliefs about reality than less intelligent and less rational people.
Or to put it another way: If you believe that all political affiliations should be represented equally in the sceptic/rationalist community, you are implicitly assuming that political beliefs are merely statements of personal preference instead of seeing them as claims about reality. While personal preference plays a role, I would hope that there's more to it than that.
But it might affect how rational you are.
It's possible.
Why are you bringing it up, though? As an aspiring rationalist, I believe it should be possible in principle to discuss whether one sex is more rational than the other, on average. However, it makes me feel uncomfortable that a considerable number of people here feel the need to inject the topic into a conversation where it's not really relevant. If I were a woman, I can imagine I would feel more hesitant to participate on Less Wrong as a result of this, and that would be a pity.
It affects your argument that there is something wrong with having a skewed gender balance here.
Compare with Cosma Shalizi on the heritability of IQ (emphasis mine):
At this point I would have to conclude that the guy is either very deliberately blind or is lying through his teeth.
He, of course, knows very well what the consequences for his career and social life would be were he to admit the unspeakable.
What you & Anatoly_Vorobey have quoted is talking about heritable IQ differences between individuals ("who do not have significant developmental disorders"). Is it possible you're conflating that with talking about heritable IQ differences between races or sexes?
That you use the word "unspeakable" suggests you are, as does the fact that your two cases of scientists suffering career consequences (Gottfredson & Cattell) are cases where they suggested genetic racial differences as well as genetic individual differences. (In fact, if I remember rightly, both went further and inferred likely policy implications of genetic racial differences.)
That's a good point, I think the two issues got a bit conflated in the discussion here.
However, I can't help but see it as reinforcing my scepticism. My impression is that the partial heritability of IQ in individuals is well established. At most you can talk about doubting the evidence or not believing it or something like that. Shalizi says he "has no evidence", which is not credible at all.
Yes, I think it supports your dim view of what Shalizi wrote. I also think it detracts from your implication that he's simply evading saying the "unspeakable", since heritable IQ differences between individuals are a much less contentious topic than heritable racial (or sexual) IQ differences.
You're wrong.
First, about the consequences: the theatrics of the "unspeakable" are getting a little tiresome. Shalizi is a statistics professor at Carnegie Mellon. The statement Mainstream Science on Intelligence was signed by 52 professors and included very clear statements about interracial IQ differences, lack of culture bias, and explicit heritability estimates. I would ask you to name the supposedly inescapable and grave "consequences for career and social life" these 52 professors brought down on their heads.
Second, about the subject matter: this quote comes at the end of a long post in which Shalizi challenges the accepted estimates of IQ heritability, and criticizes at length the frequent but confused interpretation of heritability as lack of malleability. In his next post on the subject, he criticizes the notion of a single g factor as standing on shaky ground, having been inferred by intelligence researchers on the basis of factor analysis that is known to statisticians to be inadequate for such a conclusion. Basically, Shalizi criticizes the statistical foundations employed by IQ researchers as unsound, and he carries out this critique on a much deeper technical level than what normally makes it into summaries, popular books and blog posts. On the face of it, this isn't a completely ridiculous idea: we know that much of psychology and medicine routinely misuses statistics in ways that make experts wince, although we might also expect IQ researchers to have their statistical shit together much more decisively than your average soft-psychology paper.
There have been replies to Shalizi's critique on the same technical level, and further debates. Frankly, most of this goes over my head. I know just about enough basic statistics to understand most of Shalizi's critique but not assess it intelligently on my own, and certainly not to follow the ensuing debate. I doubt, however, that your dismissal of Shalizi's honesty is based on a solid understanding of the arguments in this debate about statistical foundations of IQ research.
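For readers curious about the statistical point being attributed to Shalizi here, a toy simulation can illustrate it: a large, dominant first factor can emerge from a test battery even when the underlying abilities are several independent ones, so a big first factor by itself is weak evidence for a single g. This is only a minimal sketch, with an invented data-generating model (six independent latent abilities, twelve tests that each sample all of them with random positive weights), not a reproduction of anyone's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 5000, 6, 12

# Six independent latent abilities per person -- no single "g" in this model.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test draws on all abilities with random positive weights,
# plus some measurement noise.
weights = rng.uniform(0.2, 1.0, size=(n_abilities, n_tests))
scores = abilities @ weights + rng.normal(scale=0.5, size=(n_people, n_tests))

# Correlation matrix of test scores: all-positive ("positive manifold").
corr = np.corrcoef(scores, rowvar=False)

# Fraction of variance captured by the first principal component.
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
share = eigvals[0] / eigvals.sum()
print(f"first factor explains {share:.0%} of the variance")
```

Despite there being six independent abilities by construction, the first component typically dominates, because every test loads positively on everything. Which is why the debate turns on deeper statistical arguments than "the first factor is large."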
That flat and unconditional statement seems to be mismatched with your sentence a bit later:
Given that you say you lack the capability to "assess it intelligently on my own", and given that I don't see the basis on which you decided I am statistically incompetent, I am rather curious why you decided that I am wrong. Especially given that I was talking about my personal conclusions and not stating a falsifiable fact about reality.
P.S. Oh, and the bit about consequences for career? Try Blits, Jan H. The silenced partner: Linda Gottfredson and the University of Delaware
You're wrong because your conclusion that Shalizi was either blind or lying rested on two premises: one, that heritability in racial IQ differences has been proven, and two, that for Shalizi to admit this fact would be uttering the "unspeakable" and would carry severe social and career-wise consequences. I wrote a detailed explanation about the way Shalizi challenges the first premise on statistical grounds, in the field where he's an expert (and in a way that's neither blind nor dishonest, albeit it could be wrong). I gave an example that illustrates that the second premise is wildly exaggerated, especially when applied to an academic such as Shalizi. That's why you are wrong.
Your response was to twist my words into a claim that you are "statistically incompetent", where in fact I emphasized that Shalizi's critique was on a deep technical level, and that I myself lacked knowledge to assess it. That is cheap emotional manipulation. You also cited a paper about Gottfredson that wasn't relevant to what I said. Given this unpromising situation, I'm sure you'll understand if I neglect to address further responses of that kind.
That's a locked-up paper printed in a journal operated by a political advocacy group.
Linda Gottfredson doesn't seem to have been "silenced", though. (But I have a libertarian, rather than a left/right partisan, view on that concept. Someone who takes grants from wealthy ideological supporters instead of from government institutions is not thereby silenced; on the contrary, that would seem pretty darn liberating.)
As reasonable as that person sounds, I feel the need to point out that IQ differences between races have little or nothing to do with IQ differences between sexes (and even less with rationality, but I guess we gravitated away from that). Even if there is a "stupid gene", to phrase it very dumbly, there is still no reason to believe that someone with two X chromosomes would inherit this gene while someone with the same parents but with a Y chromosome would not.
If you (or anyone) want to argue that women naturally have lower IQ than men, I would go with an argument based on hormones instead. Sounds much more plausible to me.
Where do you think the differences in hormone levels come from?
Food, genes, certain types of activity such as sports and competitiveness in general, the environment you grow up in, being in a position of authority, to name some factors that influence hormone production.
It's certainly not just the gender divide. If you think that testosterone makes men smarter than women on average, you would also have to accept the conclusion that women with more testosterone than men will be smarter than men on average. All other things being equal, of course.
Testosterone levels in men and women are in completely different ballparks, and there is no overlap in healthy individuals of the different sexes beyond puberty. This would make me think the difference is mainly genetic.
I'm not arguing for anything beyond this point, so we don't have to go there.
Yes, Shalizi was talking about something completely different, but his attitude was similar to yours. He was saying: "sure, I could imagine that it might be so (that there might be a heritable difference), but why are you so invested in believing in that? Why do you fight for it so much?". I meant for my quotation to bolster your case.
Ahhhh, you're right, I completely misunderstood your intent. In that case we are in agreement.
You are absolutely correct on the facts, and in a saner world I could leave it at that, but you seem to have missed an unspoken part of the argument:
The common factor isn't genetics per se but rather an appeal to inherent nature. Whether that nature is the genetic legacy of selection for vastly different ancestral environments or due to the epigenetics of sexual dimorphism is very important in a scientific sense but not in the metaphysical sense of presenting a challenge to the ideals of "equality" or the "psychic unity of mankind."
When Dr Shalizi writes the rhetorical question "why it is so important to you that IQ be heritable and unchangeable?" in the context of "'human equality' and 'genetic identity'" his tone is not that of scientific skepticism of an unproven claim but rather an apologetic defense of an embattled creed. Really, why is it so important to you what the truth is? After all, we don't have any evidence to suggest that the doctrines are wrong, so why not just repeat the cant like everyone else? Who else but a heretic would feel need to ask uncomfortable questions?
For the most part, scientists writing against the hereditarian position don't bother debating the facts anymore; now that actual genetic evidence is starting to come out, they know it'll just make them look foolish in a few years, and the psychometric evidence has survived four decades of concentrated attack already. It's all about implications and responsibility now, or in other words, the lie is too big to fail. It's hardly important to them whether the truth at hand is a genetic or a hormonal inequality; they just want it to go away.
I read Shalizi differently, as asking something like, "Really, is it because you care about the truth qua truth that you find this particular alleged truth so important?" Far from apologetic, he is — cautiously, because there is a counterfactual gun to his head — going on the offensive, hinting that the people insistently disagreeing with him are motivated by more than unalloyed curiosity. It is not, of course, dispassionate scientific scepticism, but nor is it a defensive crouch.
My interpretation could be wrong. Shalizi isn't spelling things out in explicit, objective detail there. But my interpretation rings truer to my gut, and fits better with the fact that his peroration rounds off ten thousand words of blunt and occasionally snarky statistical critique.
I think you misinterpret Dr Shalizi, and do him a disservice. I think his answer is perfectly reasonable from a Bayesian point of view. Basically, I see three common reasons to spend time researching differences between races:
A) People who are genuinely interested in the answer, for pragmatic or intellectual reasons
B) People who are racists and want to hear a particular answer that fits their preconceived views
C) People who are trying to be controversial/contrarian/want to provoke people
Certainly there are people who are genuinely curious about the answer, purely for intellectual reasons (A). I am somewhat interested myself. However, the fact of the matter is that many others are interested purely for racist reasons (B). Many racists aren't open about their racism, and so mask it as honest scientific inquiry, making B indistinguishable from A. Showing interest in the subject is therefore Bayesian evidence for B as much as it is for A. Worse, everyone intuitively realizes this, which causes most As to shut up for fear of being identified as Bs, while Bs continue what they are doing. This serves to compound the effect. Meanwhile, Cs arise expressly because it is a hot-button topic. As a result it is entirely rational to conclude that someone who is constantly yelling about race and inserting the subject into other conversations is more likely to be a racist on average than others. And of course, it's incredibly frustrating if you are an A and just want an honest conversation about the subject, which is now impossible (thanks, politics!).
I think Shalizi deals with this messed up situation admirably: Making clear what he believes while doing everything to avoid sounding controversial or giving fuel to racists. Of course this doesn't work very well because people who call others racist fall into two categories themselves:
D) People who are genuinely worried about the dangerous effects of racist claims.
E) People who realise they can win any argument by default by calling the other a racist.
And people who fall under category E do not, of course, care about the truth of the matter in the slightest.
Kind of tempted to write a top-level post about this, now. Hmm...
I think the fact that there is a debate, and that the "good guys" use name-calling instead of scientific arguments, also increases the number of people in group A.
It's a bit like telling people not to think of an elephant, and then justifying it by saying that elephant-haters are the most obsessed with elephants, therefore thinking of an elephant is evidence of being an evil person. Well, as soon as you told everyone not to think of an elephant, this stopped being true.
In the same sense that showing interest in medicine is Bayesian evidence for me wanting to poison my neighbors.
It's an interesting topic, the more so because it is taboo, and not exactly tangential to the subject, I think.
Would you predict that the average IQ among LW census responders who self label as conservatives is lower? If so, how strong would you predict the effect to be?
Why not? Men and women are different in many ways. Why did you decide that a disposition to rationality can't possibly depend on your sex (and so your hormones, etc.)?
It's in reply to Quinton saying that there should be no masculine and feminine types of rationality. In other words, whether you are a man or a woman should not determine what the correct/rational answer is to a particular question (barring obvious exceptions). This is in stark contrast to asking whether or not political affiliation should be determined by how rational you are, which is another question entirely.
In other words: Just because correct answers to factual questions should not be determined by gender does not mean that political affiliation should not be determined by correct answers to factual questions.
I think political differences come down to values more so than beliefs about facts. Rationalism doesn't dictate terminal values.
Sometimes it is difficult to tell which differences are genuinely differences in values, and which are essentially the same value filtered through different models.
For example two people can have a value of "it would be bad to destroy humanity", but one of them has a model that humanity will likely destroy itself with ongoing capitalism, while the other has a model that humanity would be likely destroyed by some totalitarian movement like communism.
But instead of openly discussing their models and finding the difference, the former will accuse the latter of not caring about human suffering, and the latter will accuse the former of not caring about human suffering. Or they will focus on different applause lights, just to emphasise how different they are.
I probably underestimate the difference of values. Some people are psychopaths; and they might not be the only different group of people. But it seems to me that a lot of political mindkilling is connected with overestimating the difference, instead of admitting that our values in connection with a different model of the world would lead to different decisions. (Because our values are good, the different decisions are evil, and good cannot be evil, right?)
Just imagine that you had certain proof (by observing parallel universes, or by simulations done by a superhuman AI) that, e.g., tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of a communist party. If you had a good reason to believe these models, what would your values make you do?
(And more generally: If you meet a person with strange political opinions, try to imagine a least convenient world, where your values would lead to the same opinions. Even if that would be a wrong model of our world, it still may be the model the other person believes to be correct.)
I agree, though I'll add that what facts people find plausible are shaped by their values.
Perfect-information scenarios are useful in clarifying some cases, I suppose (and let's go with the non-humanity-destroying option every time), but I don't find them to map too closely to actual situations.
I'm not sure I can aptly articulate my intuition here. By differences in values, I don't really think people will differ so much as to have much difference in terminal values, should they each make a list of everything they would want in a perfect world (barring outliers). But the relative weights that people place on them, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each is interested in using reason.
But I'll concede that some ideologies are much more comfortable with more utilitarian analysis versus more rigid imperatives that are more likely to yield consistent results.
I'm always a little suspicious of this line of thinking. Partly because the terminal/instrumental value division isn't very clean in humans -- since more deeply ingrained values are harder to break regardless of their centrality, and we don't have very good introspective access to value relationships, it's remarkably difficult to unambiguously nail down any terminal values in real people. Never mind figuring out where they differ. But more importantly, it's just too convenient: if you and your political enemies have different fundamental values, you've just managed to absolve yourself of any responsibility for argument. That's not connotationally the same as saying the people you disagree with are all evil mutants or hapless dupes, but it's functionally pretty damn close.
That doesn't prove it wrong, of course, but I do think it's grounds for caution.
No, I think people can be persuaded on terminal values, although to an extent that modifies my response above; rationality will tell you that certain values are more likely to conflict, and noticing internal contradictions--pitting two values against each other--is one way to convince someone to alter--or just adjust the relative worth of--their terminal values. Due to the complexity of social reality I don't think you are going to find too many people with beliefs that are perfectly consistent; that is, any mainstream political affiliation is unlikely to be a shining paragon of coherence and logical progression built upon core principles relative to its competitors. But demonstrate with examples if I'm wrong.
If you can persuade someone to alter (not merely ignore) a value they believe to have been terminal, that's good evidence that it wasn't a terminal value.
This is only true if you think humans actually hold coherent values that are internally designated as "terminal" or "instrumental". Humans only ever even designate statements as terminal values once you introduce them to the concept.
How about different factions (landowners, truck drivers, soldiers, immigrants, etc.) all advocating their own interests? Doesn't that count as "different values"?
Or, more simply, I value myself and my family, you value yourself and your family, so we have different values. Ideologies are just a more general and complicated form.
Well, it depends what you mean by values. I was mainly discussing Randy_M's comment that rationalism doesn't dictate terminal values; while different perspectives probably mean the evolution of different value systems even given identical hardwiring, that doesn't necessarily reflect different terminal values. Those don't reflect preferences but rather the algorithm by which preferences evolve; and self-interest is one module of that, not seven billion.
"The first should not have anything to do with how rational you are, while the second very much should." What does should mean there, and from where do you derive it?
LW is a US-centric site. When I saw the option, I assumed it meant the US interpretation of the "conservative" label, which (from Europe) seems impossible to distinguish from batshit crazy.
I like to see myself as somewhat conservative, but I even more like to see myself as not batshit crazy.
The definition given in the survey was “Conservative, for example the US Republican Party and UK Tories: traditional values, low taxes, low redistribution of wealth”.
As a US conservative, I can assure you the feeling is mutual, BTW.
Not sure what you mean by that. You feel European conservatism is crazy? You feel the interpretation of US conservatism is crazy? You feel US conservatives are functionally identical to crazy, if not actually so?
I meant that all the mainstream European parties seem crazy.
People in the rationality community tend to believe that there's a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society is missing out on this. This is difficult to reconcile with arguments for tradition and being cautious about rapid change, which is the heart of (old school) conservatism.
I think futurism is anti-conservative.
My steelman of the conservative position is 'empirical legislation' : do not make new laws until you have decent evidence they achieve the stated policy goals. "Ah, but while you are gathering your proof, the bad thing X is still happening!" "Too bad."
FAI is a conservative position.
To respond to the grandparent, I think in the US conservatism has ceded all intellectual ground, and is therefore not a sexy position to adopt. (If this is true, I think one should view this as a bad thing regardless of one's political affiliation, because a 'loyal opposition' is needed to sharpen teeth.)
At a guess, I'd say this is linked to religion. Once you split out the libertarian faction (as the surveys historically have), it's quite rare for people on the conservative side of the fence (at least in the US) to be irreligious, and LW is nothing if not outspokenly secular.
Yes, but people on the far right are disproportionately active in political discussions here, probably because it is one of the very few internet venues where they can air their views to a diverse and intelligent readership without being immediately shouted down as evil. If you actually measured political comments, I suspect you'd find that the explicitly liberal/social ones represent much less than 65%.
60%. But yes, it was funny to find out who the evil person was.
Actually, no, it was quite sad. I mean, when reading Yvain's articles, I often feel a deep envy of the peaceful way he can write. I am more likely to jump in and say something aggressive. I would be really proud of myself if I could someday learn to write the way Yvain does. ... Which still would make me just another bad guy. Holy Xenu, what's the point of even trying?