Based on the results from the recent LW census, I quickly threw together a test that measures how much of a rationalist you are.

I'm mainly posting it here because I'm curious how well my factor model extrapolates. I want to have this data available when I do a more in-depth analysis of the results from the census.

I scored 14/24.

gjm

There are definitely answers that your model wants rationalists to give but that I think are incompatible with LW-style rationalism. For instance:

  • "People's anecdotes about seeing ghosts aren't real evidence for ghosts" (your model wants "agree strongly"): of course people's anecdotes about seeing ghosts are evidence for ghosts; they are more probable if ghosts are real than if they aren't. They're just really weak evidence for ghosts and there are plenty of other reasons to think there aren't ghosts.
  • "We need more evidence that we would benefit before we charge ahead with futuristic technology that might irreversibly backfire" (your model wants "disagree" or "disagree strongly"): there's this thing called the AI alignment problem that a few rationalists are slightly concerned about, you might have heard of it.

And several others where I wouldn't go so far as to say "incompatible" but where I confidently expect most LWers' positions not to match your model's predictions. For instance:

  • "It is morally important to avoid making people suffer emotionally": your model wants not-agreement, but I think most LWers would agree with this.
  • "Workplaces should be dull to reflect the oppressiveness of work": your model wants not-disagreement, but I think most LWers would disagree (though probably most would think "hmm, interesting idea" first).
  • "Religious people are very stupid"; your model wants agreement, but I think most LWers are aware that there are plenty of not-very-stupid religious people (indeed, plenty of very-not-stupid religious people) and I suspect "disagree strongly" might be the most common response from LWers.

I don't claim that the above lists are complete. I got 11/24 and I am pretty sure I am nearer the median rationalist than that might suggest.

I agree with these points but as I mentioned in the test:

Warning: this is not necessarily an accurate or useful test; it's a test that arose through irresponsible statistics rather than careful thought.

The reason I made this survey is to get more direct data on how well the model extrapolates (and maybe also to improve the model so it extrapolates better).


"Workplaces should be dull to reflect the oppressiveness of work"? Where did that come from? (The "correct" answer is to not disagree.)

"Women don't work in construction because it is unglamorous." I remember when this could be said unironically with a straight face. That was about fifty years ago. Being the only woman in an all-male working-class environment might be more salient these days.

"Religious people are very stupid." Is this a test for straw Vulcan rationality? Actually, you do say it measures "how much of a stereotypical rationalist you are", but on the other hand, you say these are "LessWrong-related questions". What are you really trying to measure?

I originally asked people qualitatively what they think the roles of different jobs in society are. Based on that, I then made a survey with about 100 questions and found about 5 major factors. I then qualitatively asked people about these factors, which led to me finding additional items that I incorporated into further surveys. Eventually I had a pool of around 1000 items covering beliefs in various domains, albeit with the same 5-factor structure as originally.

I suggested that 20 of the items from different factors should be included in the LW census, which allowed me to estimate where LW was in terms of those factors. These 24 new items were then selected from the items in the pool that are the most extreme correlates of the delta indicated by the original 20.

Obviously, since this procedure is quite distinct from actual rationalism (though also related, since it does incorporate LW's answers to the 20), it's quite likely that this is a baseless extrapolation that doesn't actually generalize well. In fact, this is specifically one of the things I want to test for, since it seems wise not to overgeneralize LW ideology from a sample of only 20 beliefs to a sample of more than 1000 beliefs. By taking the 24 most extreme correlates of LW's mean out of the 1000 items, I am stress-testing the model and seeing just how extremely wrong it can get.
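Schematically, the selection procedure looks roughly like this (a simplified sketch with placeholder data and variable names, not the actual analysis code):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder data standing in for the real surveys (shapes only):
rng = np.random.default_rng(0)
n_respondents, n_items = 500, 1000
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)  # Likert 1-5
census_idx = rng.choice(n_items, size=20, replace=False)  # the 20 items put on the census
lw_census_means = responses[:, census_idx].mean(axis=0) + rng.normal(0, 0.5, size=20)

# Fit the 5-factor model on the (centered) general-population responses.
pop_means = responses.mean(axis=0)
centered = responses - pop_means
fa = FactorAnalysis(n_components=5, random_state=0).fit(centered)
loadings = fa.components_                                   # shape (5, n_items)

# Estimate LW's offset in factor space from its deviation on the 20 census items.
delta_20 = lw_census_means - pop_means[census_idx]
lw_factor_offset, *_ = np.linalg.lstsq(loadings[:, census_idx].T, delta_20, rcond=None)

# Predict per-item deltas for the whole pool and take the 24 most extreme ones.
predicted_deltas = loadings.T @ lw_factor_offset            # shape (n_items,)
test_items = np.argsort(-np.abs(predicted_deltas))[:24]
```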

[anonymous]

21/24. Surprising because I have been downvoted and punished for having a divergent opinion.

Of the ones I "missed", I think 2 were because rationalists are being insufficiently rational (the "correct" answer is incorrect in terms of what the accepted factual evidence from the most credible sources says).

Mine was 12/24.

niplav

Also 12—what's going on?

gjm

What's going on is that tailcalled's factor model doesn't in fact do a good job of identifying rationalists by their sociopolitical opinions. Or something like that.

[EDITED to add:] Here's one particular variety of "something like that" that I think may be going on: an opinion may be highly characteristic of a group even if it is very uncommon within the group. For instance, suppose you're classifying folks in the US on a left/right axis. If someone agrees with "We should abolish the police and close all the prisons" then you know with great confidence which team they're on, but I'm pretty sure the great majority of leftish people in the US disagree with it. If someone agrees with "We should bring back slavery because black people aren't fit to run their own lives" then you know with great confidence which team they're on, but I'm pretty sure the great majority of rightish people in the US disagree with it.
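To make that asymmetry concrete, here's a toy calculation (all numbers invented for illustration):

```python
# Invented numbers: 5% of leftish people and 0.05% of rightish people endorse
# "abolish the police", with a 50/50 prior over the two groups.
p_opinion_given_left = 0.05
p_opinion_given_right = 0.0005
p_left = p_right = 0.5

p_left_given_opinion = (p_opinion_given_left * p_left) / (
    p_opinion_given_left * p_left + p_opinion_given_right * p_right
)
print(round(p_left_given_opinion, 3))  # ~0.99: holding the opinion almost certainly marks
                                       # a leftist, even though 95% of leftists reject it
```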

Tailcalled's model isn't exactly doing this sort of thing to rationalists -- if someone says "stories about ghosts are zero evidence of ghosts" then they have just proved they aren't a rationalist, not done something extreme but highly characteristic of (LW-style) rationalists -- but it's arguably doing something of the sort to a broader fuzzier class of people that are maybe as near as the model can get to "rationalists". Roughly the people some would characterize as "Silicon Valley techbros".

My model takes the prevalence of the opinion into account; that's the reason that sometimes you have to e.g. agree strongly and other times you merely have to not-disagree. There are unpopular opinions that the factor model does place correctly: e.g. I can't remember whether I have a question about abolishing the police, but supporting human extinction clearly went under the leftism factor even though leftists also disagreed with it (because leftists were less likely to disagree, and disagreed less strongly in a quantitative sense).
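As a toy illustration of how prevalence can shift the required answer (a made-up midpoint rule, not the exact scoring the test uses):

```python
# Made-up rule for illustration: the cutoff is the midpoint between the population
# mean and the model's predicted LW mean on a 1-5 Likert scale (5 = agree strongly).
def item_cutoff(pop_mean: float, lw_pred_mean: float) -> float:
    return (pop_mean + lw_pred_mean) / 2

# Item the general population already mildly agrees with: only "agree strongly" clears it.
print(item_cutoff(3.6, 4.8))  # 4.2 -> only a response of 5 is above the cutoff

# Item the population mildly disagrees with, where LW merely disagrees less:
# any non-disagreeing response clears it.
print(item_cutoff(2.6, 3.2))  # 2.9 -> responses of 3, 4, or 5 are above the cutoff
```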

I think the broader/fuzzier class point applies more directly though; from a causal perspective you'd expect rationalists to have some ideology that exists in the general population (e.g. techbros) plus our own idiosyncratically developed ideology. But a factor model only captures low-rank information, so it's not going to accurately model idiosyncratic factors that only exist for a small portion of the population.

In theory, according to the model, rationalists should score slightly above 12 on average, and because we expect a wide spread of opinions, the model also says we should expect a lot of rationalists to score exactly 12. So there's nothing funky about scoring 12.

What does the model predict non-rationalists would score?

I got a 14 as well. An odd theme in there.

There should be a question at the end: "After seeing your results, how many of the previous responses did you feel a strong desire to write a comment analyzing/refuting?" And that's the actual rationalist score...

But I'm interested in the possibility of a phenomenon here where the median LWer is more likely to score highly on this test despite being less representative of LW culture, while core, more representative LWers are unlikely to score highly.

Presumably there's some kind of power law with LW use (10000s of users who use LW for <1 hour a month, only 100s of users who use LW for 100+ hours a month). 

I predict that the 10000s of less active community members are probably more likely to give "typical" rationalist answers to these questions: "Yeah, (religious) people stupid, ghosts not real, technology good". The 100s of power users, who are actually more representative of a distinctly LW culture, are less likely to give these answers.

I got 9/24, by the way.


+1 for the 14/24 club.