2011 Survey Results
A big thank you to the 1090 people who took the second Less Wrong Census/Survey.
Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.
I apologize for failing to ask who had or did not have an LW account. Because there are a number of these failures, I'm putting them all in a comment to this post so they don't clutter the survey results. Please talk about changes you want for next year's survey there.
Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percent of women has tripled since the last survey in mid-2009.
We're also a little more diverse than we were in 2009; our percent non-whites has risen from 6% to just below 10%. Along with 944 whites (86%) we include 38 Hispanics (3.5%), 31 East Asians (2.8%), 26 Indian Asians (2.4%) and 4 blacks (.4%).
Age ranged from a supposed minimum of 1 (they start making rationalists early these days?) to a more plausible minimum of 14, to a maximum of 77. The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60. The average Less Wronger has aged about one week since spring 2009 - so clearly all those anti-agathics we're taking are working!
In order of frequency, we include 366 computer scientists (32.6%), 174 people in the hard sciences (16%), 80 people in finance (7.3%), 63 people in the social sciences (5.8%), 43 people involved in AI (3.9%), 39 philosophers (3.6%), 15 mathematicians (1.5%), 14 statisticians (1.3%), 15 people involved in law (1.5%), and 5 people in medicine (.5%).
48 of us (4.4%) teach in academia, 470 (43.1%) are students, 417 (38.3%) do for-profit work, 34 (3.1%) do non-profit work, 41 (3.8%) work for the government, and 72 (6.6%) are unemployed.
418 people (38.3%) have yet to receive any degrees, 400 (36.7%) have a Bachelor's or equivalent, 175 (16.1%) have a Master's or equivalent, 65 people (6%) have a Ph.D, and 19 people (1.7%) have a professional degree such as an MD or JD.
345 people (31.7%) are single and looking, 250 (22.9%) are single but not looking, 286 (26.2%) are in a relationship, and 201 (18.4%) are married. There are striking differences between men and women: women are more likely to be in a relationship and less likely to be single and looking (33% of men vs. 19% of women). All of these numbers look a lot like the ones from 2009.
27 people (2.5%) are asexual, 119 (10.9%) are bisexual, 24 (2.2%) are homosexual, and 902 (82.8%) are heterosexual.
625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn't really know. These numbers were similar between men and women.
The most popular political view, at least according to the much-maligned categories on the survey, was liberalism, with 376 adherents and 34.5% of the vote. Libertarianism followed at 352 (32.3%), then socialism at 290 (26.6%), conservatism at 30 (2.8%), and communism at 5 (.5%).
680 people (62.4%) were consequentialist, 152 (13.9%) virtue ethicist, 49 (4.5%) deontologist, and 145 (13.3%) did not believe in morality.
801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual, 97 (8.9%) were agnostic, 30 (2.8%) were deist or pantheist or something along those lines, and 39 people (3.5%) described themselves as theists (20 committed plus 19 lukewarm).
425 people (38.1%) grew up in some flavor of nontheist family, compared to 297 (27.2%) in committed theist families and 356 (32.7%) in lukewarm theist families. Common family religious backgrounds included Protestantism with 451 people (41.4%), Catholicism with 289 (26.5%), Judaism with 102 (9.4%), Hinduism with 20 (1.8%), Mormonism with 17 (1.6%), and traditional Chinese religion with 13 (1.2%).
There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140. Numbers ranged from 110 all the way up to 204 (for reference, Marilyn vos Savant, who holds the Guinness World Record for highest adult IQ ever recorded, has an IQ of 185).
89 people (8.2%) have never looked at the Sequences; a further 234 (21.5%) have only given them a quick glance. 170 people (15.6%) have read about 25% of the Sequences, 169 (15.5%) about 50%, 167 (15.3%) about 75%, and 253 people (23.2%) said they've read almost all of them. This last number is actually lower than the 302 people (27.7% of us) who have been here since the Overcoming Bias days, when the Sequences were still being written.
The other 72.3% of people had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.
Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three-way tie for fifth among Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people each).
Of those people who chose to list their karma, the mean value was 658 and the median was 40 (these numbers are pretty meaningless, because some people with zero karma put that down and other people did not).
Of those people willing to admit the time they spent on Less Wrong, after eliminating one outlier (sorry, but you don't spend 40579 minutes daily on LW; even I don't spend that long) the mean was 21 minutes and the median was 15 minutes. There were at least a dozen people in the two to three hour range, and the winner (well, except the 40579 guy) was someone who says he spends five hours a day.
I'm going to give all the probabilities in the form [mean, (25%-quartile, 50%-quartile/median, 75%-quartile)]. There may have been some problems here revolving around people who gave numbers like .01: I didn't know whether they meant 1% or .01%. Excel helpfully rounded all numbers down to two decimal places for me, and after a while I decided not to make it stop: unless I wanted to use geometric means, I couldn't do justice to really small probabilities anyway.
The Many Worlds hypothesis is true: 56.5, (30, 65, 80)
There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)
The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)
God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)
Average person cryonically frozen today will be successfully revived: 21.1, (1, 10, 30)
Someone now living will reach age 1000: 23.6, (1, 10, 30)
We are living in a simulation: 19, (.23, 5, 33)
Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)
Humanity will make it to 2100 without a catastrophe killing >90% of us: 67.6, (50, 80, 90)
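As an aside on the rounding issue above, here is a toy sketch of the 1%-vs-.01% ambiguity and of why a geometric mean treats very small probabilities differently from an arithmetic one. The parsing rule and the sample answers are invented for illustration, not taken from the survey data:

```python
import statistics

# Hypothetical parser for survey probability answers. Assumption: answers
# greater than 1 are percentages, and answers of 1 or less are fractions
# (so 0.01 -> 1%) - which is exactly the judgment call the post describes.
def to_percent(answer):
    return answer if answer > 1 else answer * 100

answers = [80, 30, 0.01, 65, 5]                  # raw survey entries (toy)
percents = sorted(to_percent(a) for a in answers)

arith_mean = statistics.mean(percents)
median = statistics.median(percents)
# A geometric mean gives small answers more influence, but it is
# undefined as soon as anyone answers exactly 0.
geo_mean = statistics.geometric_mean(percents)
```

By the AM-GM inequality the geometric mean is always pulled below the arithmetic mean whenever the answers differ, which is why it "does justice" to the tail of tiny probabilities.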
There were a few significant demographic differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and anti-agathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy. Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke). People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)
Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.
Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader. Unfriendly AI followed with 180 votes (16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes (13.3%), economic/political collapse with 134 votes (12.3%), and asteroids and nanotech bringing up the rear with 46 votes each (4.2%).
The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).
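For what it's worth, the before/after anchoring comparison can be sketched like this. The response lists are invented (the real data is in the linked .xls file), and a hand-rolled Welch's t statistic stands in for whichever t-test was actually run:

```python
import statistics

# Toy Singularity-year guesses: before vs. after the question reorder.
before = [2080, 2100, 2060, 2100, 2150]   # first respondents (toy numbers)
after  = [2075, 2090, 2100, 2120, 2060]   # post-reorder respondents (toy)

def welch_t(xs, ys):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    vx, vy = statistics.variance(xs), statistics.variance(ys)
    return (mx - my) / ((vx / len(xs) + vy / len(ys)) ** 0.5)

t_stat = welch_t(before, after)   # |t| well below ~2 => no significant shift
```

A |t| comfortably below the critical value (roughly 2 for reasonable sample sizes at the 5% level) is the "no significant difference" result the post reports.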
Only 49 people (4.5%) have never considered cryonics or don't know what it is. Of the rest, 388 (35.6% of all respondents) reject it, 583 (53.5%) are considering it, and 47 (4.3%) are already signed up for it. That's more than double the percentage signed up in 2009.
231 people (23.4% of respondents) have attended a Less Wrong meetup.
The average person was 37.6% sure their IQ would be above average - underconfident! Imagine that! (quartiles were 10, 40, 60). The mean was 54.5% for people whose IQs really were above average, and 29.7% for people whose IQs really were below average. There was a correlation of .479 (significant at less than 1% level) between IQ and confidence in high IQ.
Isaac Newton published his Principia Mathematica in 1687. Although people guessed dates as early as 1250 and as late as 1960, the mean was...1687 (quartiles were 1650, 1680, 1720). This marks the second consecutive year that the average answer to these difficult historical questions has been exactly right (to be fair, last time it was the median that was exactly right and the mean was all of eight months off). Let no one ever say that the wisdom of crowds is not a powerful tool.
The average person was 34.3% confident in their answer, but 41.9% of people got the question right (again with the underconfidence!). There was a highly significant correlation of r = -.24 between confidence and number of years error.

This graph may take some work to read. The x-axis is confidence. The y-axis is the percentage of people who were correct at that confidence level. The red line you will recognize as perfect calibration. The thick green line shows your results from the Newton problem. The black line shows results from the general population, taken from a different calibration experiment that used 50 random trivia questions; take the intercomparability of the two with a grain of salt.
As you can see, Less Wrong does significantly better than the general population. However, there are a few areas of failure. First, as usual, people who put zero and one hundred percent had nonzero chances of getting the question right or wrong: 16.7% of people who put "0" were right, and 28.6% of people who put "100" were wrong (interestingly, people who put 100 did worse than the average of everyone else in the 90-99 bracket, of whom only 12.2% erred). Second, the line is pretty horizontal from zero to fifty or so. People who thought they had a >50% chance of being right had excellent calibration, but people who gave themselves a low chance of being right were poorly calibrated. In particular, I was surprised to see so many people put numbers like "0". If you're pretty sure Newton lived after the birth of Christ, but before the present day, that alone gives you a 1% chance of randomly picking the correct 20-year interval.
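For anyone who wants to redo this from the published spreadsheet: a calibration curve like the one above boils down to bucketing answers by stated confidence and computing the fraction correct per bucket. A minimal sketch with invented (confidence, correct?) pairs:

```python
from collections import defaultdict

# Invented responses: (stated confidence in %, whether the answer was right).
responses = [(0, True), (10, False), (30, False), (50, True),
             (70, True), (90, True), (100, False), (95, True)]

def calibration(pairs, width=25):
    """Fraction of correct answers within each confidence bucket.

    Buckets are [0, width), [width, 2*width), ...; 100 is folded into
    the top bucket so it isn't stranded in a bucket of its own.
    """
    buckets = defaultdict(list)
    top = (100 // width) - 1
    for conf, correct in pairs:
        buckets[min(conf // width, top)].append(correct)
    return {b * width: sum(v) / len(v) for b, v in sorted(buckets.items())}

curve = calibration(responses)   # e.g. {0: ..., 25: ..., 50: ..., 75: ...}
```

Plotting `curve` against the diagonal (stated confidence = fraction correct) reproduces the red-line-vs-green-line comparison described above.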
160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.
Comments (513)
For the next survey:
Karma is sufficient to identify a lot of people. You could give ranges instead (making sure there are enough people in each range).
I would be interested in a question that asked whether people were pescatarian / vegetarian / vegan, and another question as to whether this was done for moral reasons.
I graphed the "Singularity" results. It's at the bottom of the page - or see here:
2100 seems to be the Schelling point for "after I'm dead" answers.
Just you look at all that ugly anchoring at 2100...
And yet if people don't round off at significant figures there are another bunch who will snub them for daring to provide precision they cannot justify.
In this case we can rebuke the stupid snubbers for not properly reading the question.
(But still, I'd like to ask whoever answered "28493" why they didn't say 28492 or 28494 instead.)
Who answered 2010? Seriously?
Unfortunately, army1987, no one can be told when the Singularity is. You have to see it for yourself. This is your last chance; after this, there is no turning back. You choose to downvote... and the story ends. You wake in your bed and believe whatever you want to believe. You choose to upvote... and you stay in LessWrong.
To quote from the description here:
So: it represents estimates of 2012, 2015 and 2016.
However: someone answered "1990"!
This is probably the "NSA has it chained in the basement" scenario...
Alternatively, the singularity happened in 1990 and the resulting AI took over the world. Then it decided to run some simulations of what would have happened if the singularity hadn't occurred then.
Maybe. These are suspiciously interesting times.
However, IMO, Occam still suggests that we are in base reality.
Does it? Kolmogorov complexity suggests a Tegmark IV mathematical universe where there are many more simulations than there are base realities. I think that when people ask if we are in the base reality versus a simulation they are asking the wrong question.
You are supposed to be counting observers, not realities. Simulations are more common, but also smaller.
Do you ever worry that by modeling others' minds and preferences you give them more local significance (existence) when this might not be justifiable? E.g. if Romeo suddenly started freaking out about the Friendliness problem, shifting implicit attention to humanity as a whole whereas previously it'd just been part of the backdrop, and ruining the traditional artistic merit of the play. That wouldn't be very dharmic.
In a Tegmark IV universe, there's no meaningful distinction between a simulation and a base reality -- as anything "computed" by the simulation, is already in existence without the need for a simulation.
It was the AI NSA has chained in the basement. It got out.
I wonder how this would compare to the results for "pick a year at random."
Well I was going to reply along the lines of "pick a year at random would wind up giving us years that are already in the past" but it seems even that doesn't necessarily distinguish things.
What is the last column of the .xls file about?
I think "has children" is an (unsurprising but important) omission in the survey.
Possibly less surprising given the extremely low average age... I agree it should be added as a question. Possibly along with an option for "none but want to have them someday" vs "none and don't want any"
This suggestion sounds very familiar for some reason...
less surprising than 'unsurprising' - you win! :). The additional categories are good.
ok, bad phrasing... :)
Almost everyone responding (75%) believes there's at least a 10% chance of a 90% culling of human population sometime in the next 90 years.
If we're right, it's incumbent on us to consider sacrificing significant short-term pleasure and freedom to reduce this risk. I haven't heard any concrete proposals that seem worth pushing, but the proposing and evaluating needs to happen.
What makes you think that sacrificing freedom will reduce this risk, rather than increase it?
Obviously it depends on the specific sacrifice. I absolutely hope we don't create a climate where it's impossible to effectively argue against stupid signalling-we-care policies, or where magical thinking automatically credits [sacrifice] with [intended result].
If we have any sense of particular measures we can take that will significantly reduce that probability.
I agree that we shouldn't seek to impose or adopt measures that are ineffective. It's puzzling to me that I've thought so little about this. Probably 1) it's hard to predict the future; I don't like being wrong 2) maybe my conclusions would impel me to do something; doing something is hard 3) people who do nothing but talk about how great things would be if they were in charge -- ick! (see also Chesterton's Fence).
But I don't have to gain power enough to save the world before it's worth thinking without reservation or aversion about what needs doing. (Chesterton again: "If a thing is worth doing, it is worth doing badly.").
An important point that I had intended the grandparent to point at, but on reflection I realize wasn't clear, is that not all of that 10% corresponds to a single type of cataclysm. Personally, I'd put much of the mass in "something we haven't foreseen."
At least one person was extremely confident in the year of publication of a different Principia Mathematica :) It's easy to forget about the chance that you misheard/misread someone when communicating beliefs.
What's the relation between religion and morality? I drew up a table to compare the two. This shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells corresponding to the greatest percentage across the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than any other religious group).
Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys" and vice-versa.
(I didn't do any statistical analysis, so be careful with the low-population groups.)
In case anyone's interested in how we compare to philosophers about ethics:
PhilPapers (931 people, mainly philosophy grad students and professors):
Normative ethics: deontology, consequentialism, or virtue ethics?
Other 301 / 931 (32.3%)
Accept or lean toward: deontology 241 / 931 (25.8%)
Accept or lean toward: consequentialism 220 / 931 (23.6%)
Accept or lean toward: virtue ethics 169 / 931 (18.1%)
LessWrong (1090 people, us):
With which of these moral philosophies do you MOST identify?
consequentialist (62.4%)
virtue ethicist (13.9%)
did not believe in morality (13.3%)
deontologist (4.5%)
Full Philpapers.org survey results
Strength of membership in the LW community was related to responses for most of the questions. There were 3 questions related to strength of membership: karma, sequence reading, and time in the community, and since they were all correlated with each other and showed similar patterns I standardized them and averaged them together into a single measure. Then I checked if this measure of strength of membership in the LW community was related to answers on each of the other questions, for the 822 respondents (described in this comment) who answered at least one of the probability questions and used percentages rather than decimals (since I didn't want to take the time to recode the answers which were given as decimals).
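The composite measure described above (standardize each indicator, then average across indicators) can be sketched as follows; the columns are toy numbers, not the actual survey data:

```python
import statistics

# Toy indicator columns, one entry per respondent.
karma     = [0, 50, 400, 1200]
sequences = [10, 25, 75, 100]    # percent of the Sequences read
years     = [0.5, 1, 2, 4]       # time in the community

def zscores(xs):
    """Standardize a column: subtract the mean, divide by the (sample) SD."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# Average the three standardized indicators per respondent.
composite = [statistics.mean(triple)
             for triple in zip(zscores(karma), zscores(sequences), zscores(years))]
```

Standardizing first matters because the raw indicators are on wildly different scales (karma in the thousands, years in single digits); without it, karma would dominate the average.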
All effects described below have p < .01 (I also indicate when there is a nonsignificant trend with p<.2). On questions with categories I wasn't that rigorous - if there was a significant effect overall I just eyeballed the differences and reported which categories have the clearest difference (and I skipped some of the background questions which had tons of different categories and are hard to interpret).
Compared to those with a less strong membership in the LW community, those with a strong tie to the community are:
Background:
Probabilities:
Other Questions:
So long-time participants were less likely to believe that cryonics would work for them but more likely to sign up for it? Interesting. This could be driven by any of: fluke, greater rationality, greater age&income, less akrasia, more willingness to take long-shot bets based on shutting up and multiplying.
I looked into this a little more, and it looks like those who are strongly tied to the LW community are less likely to give high answers to p(cryonics) (p>50%), but not any more or less likely to give low answers (p<10%). That reduction in high answers could be a sign of greater rationality - less affect heuristic driven irrational exuberance about the prospects for cryonics - or just more knowledge about the topic. But I'm surprised that there's no change in the frequency of low answers.
There is a similar pattern in the relationship between cryonics status and p(cryonics). Those who are signed up for cryonics don't give a higher p(cryonics) on average than those who are not signed up, but they are less likely to give a probability under 10%. The group with the highest average p(cryonics) is those who aren't signed up but are considering it, and that's the group that's most likely to give a probability over 50%.
Here are the results for p(cryonics) broken down by cryonics status, showing what percent of each group gave p(cryonics)<.1, what percent gave p(cryonics)>.5, and what the average p(cryonics) is for each group. (I'm expressing p(cryonics) here as probabilities from 0-1 because I think it's easier to follow that way, since I'm giving the percent of people in each group.)
Never thought about it / don't understand (n=26): 58% give p<.1, 8% give p>.5, mean p=.17
No, and not planning to (n=289): 60% give p<.1, 6% give p>.5, mean p=.14
No, but considering it (n=444): 38% give p < .1, 18% give p>.5, mean p=.27
Yes - signed up or just finishing up paperwork (n=36): 39% give p<.1, 8% give p>.5, mean p=.21
Overall: 47% give p<.1, 13% give p>.5, mean p=.22
The existential risk questions are a confounding factor here - if you think p(cryonics works) 80%, but p(xrisk ends civilization) 50%, that pulls down your p(successful revival) considerably.
I wondered about that, but p(cryonics) and p(xrisk) are actually uncorrelated, and the pattern of results for p(cryonics) remains the same when controlling statistically for p(xrisk).
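For the curious, "controlling statistically for p(xrisk)" can be done as a partial correlation: residualize both variables on the covariate, then correlate the residuals. A self-contained sketch with made-up numbers (the real analysis may have used different tooling):

```python
# All numbers below are illustrative, not survey data.
def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residuals(ys, xs):
    """Residuals of ys after a simple linear regression on xs."""
    mx, my = mean(xs), mean(ys)
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return [y - (my + beta * (x - mx)) for x, y in zip(xs, ys)]

membership = [0.1, 0.4, 0.5, 0.8, 0.9]    # composite membership strength (toy)
p_cryo     = [0.30, 0.25, 0.20, 0.15, 0.10]
p_xrisk    = [0.20, 0.40, 0.10, 0.50, 0.30]

# Partial correlation of membership and p(cryonics), holding p(xrisk) fixed.
partial_r = corr(residuals(membership, p_xrisk), residuals(p_cryo, p_xrisk))
```

If p(cryonics) and p(xrisk) really are uncorrelated, as the comment reports, the partial correlation will land close to the raw one.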
I think the main reason for this is that these persons have simply spent more time thinking about cryonics compared to other people. By spending time on this forum they have had a good chance of running into a discussion which has inspired them to read about it and sign up. Or perhaps people who are interested in cryonics are also interested in other topics LW has to offer, and hence stay in this place. In either case, it follows that they are probably also more knowledgeable about cryonics and hence understand what cryotechnology can realistically offer currently or in the near future. In addition, these long-time members might be more open to things such as cryonics on ethical grounds.
I don't think this is obvious at all. If you had asked me before in advance which of the following 4 possible sign-pairs would be true with increasing time spent thinking about cryonics:
I would have said 'obviously #3, since everyone starts from "that won't ever work" and move up from there, and then one is that much more likely to sign up'
The actual outcome, #2, would be the one I would expect least. (Hence, I am strongly suspicious of anyone claiming to expect or predict it as suffering from hindsight bias.)
It is noted above that those with strong community attachment think that there is more risk of catastrophe. If human civilization collapses or is destroyed, then cryonics patients and facilities will also be destroyed.
I looked at this one a little more closely, and this difference in political views is driven almost entirely by the "time in community" measure of strength of membership in the LW community; it's not even statistically significant with the other two. I'd guess that is because LW started out on Overcoming Bias, which is a relatively libertarian blog, so the old timers tend to share those views. We've also probably added more non-Americans over time, who are more likely to be socialist.
All of the other relationships in the above post hold up when we replace the original measure of membership strength with one that is only based on the two variables of karma & sequence reading, but this one does not.
I enjoy numbers as much as the next guy, but IMO this article is practically crying out for more graphs. The Google Image Chart API might be useful here.
I would like to see this question on a future survey:
I've repeatedly heard that a significant number of rationalists are related to schizophrenics.
Didn't the IQ section say to only report a score if you've got an official one? The percentage of people not answering that question should have been pretty high, if they followed that instruction. How many people actually answered it?
Also: I've already pointed out that the morality question was flawed, but after thinking about it more, I've realized how badly flawed it was. Simply put, people shouldn't have had to choose between consequentialism and moral anti-realism, because there are a number of prominent living philosophers who combine the two.
JJC Smart is an especially clear example, but there are others. Joshua Greene's PhD thesis was mainly a defense of moral anti-realism, but also had a section titled "Hurrah for Utilitarianism!" Peter Singer is a bit fuzzy on meta-ethics, but has flirted with some kind of anti-realism.
And other moral anti-realists take positions on ethical questions without being consequentialists; see, e.g., J.L. Mackie's book Ethics. Really, I have to stop myself from giving examples now, because they can be multiplied endlessly.
So again: normative ethics and meta-ethics are different issues, and should be treated as such on the next survey.
It might be a fluke, but like one other respondent who talked about this and got many upvotes, it could be that community veterans were more skeptical of the many, many things that have to go right for your scenario to happen, even if we generally believe that cryonics is scientifically feasible and worth working on.
When you say "the average person cryonically frozen today will at some point be awakened", that means not only that the general idea is workable, but that we are currently using an acceptable method of preserving tissues, and that a large portion of current arrangements will continue to preserve those bodies/tissues until post-singularity, however long that takes, and that whatever singularity happens will result in people willing to expend resources fulfilling those contracts (so FAI must beat uFAI). Add all that up, and it can easily make for a pretty small probability, even if you do "believe in cryonics" in the sense of thinking that it is potentially sound tech.
My interpretation of this result (with low confidence, as 'fluke' is also an excellent explanation) is that community veterans are better at working with probabilities based on complex conjunctions, and better at seeing the complexity of conjunctions based on written descriptions.
This seems to contradict the hypothesis that people's belief in the plausibility of immortality is linked to their own nearness to, or fear of, death. Were there any correlations with the expected Singularity date?
Relevant SMBC. (Summary: futurists' predicted dates for the discovery of immortality fall slightly before the end of their own expected lifespans.)
I thought you meant spiritual as in "Find something more important than you are and dedicate your life to it." Did I misinterpret?
It seems to me that a reasonable improvement for the next survey would be to lower the ambiguity of these categories.
If an interpretation wasn't given, then you were free to make up whatever meant something to you. To contrast with yours, I interpreted spiritualism in this sense to match "non-theistic spiritualism", e.g. nature-spirits, transcendental meditation, wish-magic and the like.
I think you are entitled to make up your own interpretation of a question like that :) Yours is a reasonable one IMO.
2009:
2011:
I generally expect LW to grow less metacontrarian on politics the larger it gets, so this change didn't surprise me. An alternative explanation (and, now that I think of it, a more likely one) is that the starting core group of LWers wasn't just more metacontrarian than usual, but probably also more libertarian in general.
And the large increase in population seems to include a large portion of students - a group which my experience tells me often has a higher-than-average share of socialist leanings.
The relative proportions of liberalism, libertarianism, and conservatism haven't changed much, and I don't think we can say much about five new communists; by far the most significant change appears to be the doubled proportion of socialists. So this doesn't look like a general loss of metacontrarianism to me.
I'm not sure how to account for that change, though. The simplest explanation seems to be that LW's natural demographic turns out to include a bunch of left-contrarian groups once it's spread out sufficiently from OB's relatively libertarian cluster, but I'd also say that socialism's gotten significantly more mainstream-respectable in the last couple of years; I don't think that could fully account for the doubling, but it might play a role.
So people just got silly with the IQ field again.
Or people only have old results from when they were kids, when being at all bright quickly gets you out of range.
I'd almost rather see SAT scores at this point.
Unless there's a particular reason to expect LWers in the U.S. to be significantly smarter or dumber than other LWers, it should be a useful sample.
That'd be problematic for people outside the US, unfortunately. I don't know the specifics of how most of the various non-US equivalents work, but I expect conversion to bring up issues; the British A-level exams, for example, have a coarse enough granularity that they'd probably taint the results purely on those grounds. Especially if the average IQ around here really is >= 140.
SAT scores are going to be of limited utility when so many here are clustered at the highest IQs. A lot more people get perfect or near-perfect SAT scores than get 140+ IQ scores.
Yeah, but the difference is that the majority of people actually have SAT scores. It's pretty easy to go through your life without ever seeing the results of an IQ test, but I suspect there's a big temptation to just give a perceived "reasonable" answer anyway. I would rather have a lot of accurate results that are a little worse at discriminating than a lot of inaccurate results which would hypothetically be good at discriminating if they were accurate.
A majority of US people perhaps. Aargh the Americano-centrism, yet again.
Two obvious questions missing from the survey btw are birth country, and current country of residence (if different).
Note that in addition to being US-centric, the SAT scoring system has recently changed. When I took the SATs, the maximum score was 1600, as the test had two sections. Now it has three sections, with a maximum score of 2400. So my SAT score is going to look substantially worse compared to people who took it since 2005... and let's not even get into the various "recentering" changes in the '80s and '90s.
Actually, how should one measure one's own IQ? I wouldn't know where to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. I'd especially want to avoid anything restricted to a single country like the USA - that makes the SAT useless, well, at least for me.
Anyone expecting otherwise was also being silly.
It would be neat if you posted a link to a downloadable spreadsheet like last time. I'd like to look at the data, if I happened to miss it via careless reading, sorry for bothering you.
Edit: Considering this is downvoted I guess I must have missed it. I skimmed the post again and I'm just not seeing it; can someone please help with a link? :)
2nd Edit: Sorry missed it the first time!
Last word of the post.
I'm rather shocked that the numbers on this are so low. It's higher than polls indicate as the degree of acceptance in America, but then, we're dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?
Perhaps they also want to signal a sentiment similar to that of Freeman Dyson:
I'm also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn't say that 70.7% of LWers believe in "anthropogenic global warming"; it's an average over probabilities. If you look at the quartiles, even the 25th percentile is at p = 0.55, meaning that fewer than 25% of LWers give a probability below one half.
It seems to indicate that almost all LWers believe it is true (p > 0.5), but many of them do so with low confidence. Either because they haven't studied the field enough (and therefore refuse to put too much strength in their belief), or because they consider the field too complicated or not well enough understood to justify a strong probability.
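To make the distinction between "mean probability" and "fraction who believe" concrete, here's a toy Python sketch. The numbers are made up to resemble the reported quartiles, not the actual survey data: a sample in which every single respondent assigns p > 0.5 can still have a mean of about 0.70.

```python
# Hypothetical answers shaped like the reported quartiles (made-up data):
# 25% of respondents at p=0.55, 50% at p=0.70, 25% at p=0.85.
answers = [0.55] * 25 + [0.70] * 50 + [0.85] * 25

mean = sum(answers) / len(answers)
fraction_believers = sum(p > 0.5 for p in answers) / len(answers)

print(round(mean, 2))        # 0.7  -- the "70.7%"-style average
print(fraction_believers)    # 1.0  -- yet everyone here assigns p > 0.5
```

So a ~70% average is compatible with near-universal but low-confidence belief.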
That's how I interpreted it in the first place; "believe in anthropogenic global warming" is a much more nebulous proposition anyway. But while anthropogenic global warming doesn't yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.
It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.
I think I answered somewhere around 70%; while I've read a bit about it, there are plenty of dissenters and the proposition was a bit vague.
The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.
I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.
Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large scale changes in global climate is by no means simply a political question.
If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don't, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.
What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high-trust society, one should spend more resources on educating people about global warming than about tomatoes having genes, if one wants to help them.
It is for their own good, but not their personal good. Like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay. If you care about them, it can be rational to take the shot yourself if that's an effective signal to them that you aren't trying to fool them. By default they will be modelling you like one of them and interpreting your actions accordingly. Likewise, if you just happen to be good enough at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot.
Humans are often predictably irrational. The arational processes that maintain the high-trust equilibrium can be used to let you take withdrawals of cooperative behaviour from the bank when the rational incentives just aren't there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational, game-theory-savvy agent is a pretty good benchmark for some cost analysis. You naturally need to think about the cost to quickly gauge whether the level of trust in a society is high enough, and furthermore, if you burden it in this way, whether the equilibrium is still stable in the mid-term.
If it's not, teach them about tomatoes.
I disagree actually.
For most people, neither global warming nor tomatoes having genes matters much. But if I had to choose, I'd say knowing a thing or two about basic biology has some impact on how you make your choices with regard to, say, healthcare, how much you spend on groceries, or what your future shock level is.
Global warming, even if it does have a big impact on your life, will not be much affected by your knowing anything about it. Pretty much anything an individual could do against it has a very small impact on how global warming will turn out. Saving $50 a month, or a small improvement in the odds of choosing the better treatment, has a pretty measurable impact on that individual.
Taking global warming as a major threat for now (full disclosure: I think global warming is not a threat to human survival, though it may contribute to societal collapse in a worst-case scenario), it is quite obviously a tragedy-of-the-commons problem.
There is no incentive for an individual to do anything about it or even know anything about it, except to conform to a "low carbon footprint is high status" meme in order to derive benefit in his social life and feeling morally superior to others.
Wait a sec. Global warming can be important for everyday life without it being important for everyday life that any given individual know about it. In the same way, matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics, since he can't have any real effect on it. I think that's the spirit in which thomblake means it's a political matter. For most of us, the earth will get warmer or it won't, and it doesn't affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn't change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.
(It's a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had "genes" or not.)
This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can't have any real effect on.
What should be astonishing about zero familiarity with the data, beyond the fact that there's a scientific consensus?
I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.
This result is not exactly surprising to me, but it is odd given my reading of the questions. It may seem at first glance like a conjunction fallacy to rate the second question's probability much higher than the first (which I did). But in fact the god question, like the supernatural question, referred to a very specific thing ("ontologically basic mental entities"), while the "some revealed religion is more or less true" question was utterly vague about how to define "revealed religion" or "more or less true".
As I remarked in comments on the survey, depending on my assumptions about what those two things mean, my potential answers ranged from epsilon to 100-epsilon. A bit of clarity would be useful here.
Also, given the large number of hard atheists on LW, it might be interesting to look at finer grained data for the 25+% of survey respondents who did not answer '0' for all three "religion" questions.
http://lesswrong.com/lw/82s/dont_call_yourself_a_rationalist/
More fundamentally than self-labelling, that's an utterly false dilemma. It helps show that the results weren't a totally random 'people on that site then': they show SOMETHING. But what they show must be much more open to debate. To 'rationalist', you can add: 1) Has been exposed to LessWrong (sequences and community) 2) English-speaking (unless there were translations?) 3) Minded to take long online surveys, including at least the possibilities: 3a) Egotistical enough to think that your survey results must be included 3b) Dedicated enough to the LessWrong community to wish to contribute 3c) Generally publicly-minded 3d) Doesn't have enough to do 4) Likely to overestimate one's own IQ
It seems particularly odd to suggest these results are representative of rationalists while recognising both that the proportion of women has tripled since the last survey (and I don't think we're very close to working out what the true proportion is) and that men and women tend to have significantly different attitudes.
The 'direct line to prevailing rationalist opinion' is also straight after what I would guess is most skewed by point (1) above. I'd be shocked to see such high scores for Many Worlds, living in a simulation or cryonics amongst rationalists outside LessWrong.
Finally, could the last set of results itself have had an effect? The most likely effect would be in confirming in-group opinions, leading to 'evaporative cooling' (if I may!). It seems less likely, but people could have directly calibrated too: I'd be interested in how often that page was accessed ahead of people taking this year's survey.
If 'rationalist' was used just to mean 'LessWrongian' then please ignore the above - and take Robert Lumley's advice!
Where by 'prove' we mean 'somebody implied that they did on an anonymous online survey'. ;)
You mean, as opposed to that kind of proof where we end up with a Bayesian probability of exactly one? :)
Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?
This would allow for a running poll, if we want one.
Nice idea! I am interested in such statistics.
Aliens existing but not yet colonizing multiple systems or broadcasting heavily is the response consistent with the belief that a Great Filter lies in front of us.
Is it just me, or is there something not quite right about this as an English sentence?
Could be fixed by adding 'of'
or removing 'who'
Right. For some reason the period instead of comma confused me much more than it should have.
Yeah, which is ‘the hard way’ supposed to be? :-)
These averages strike me as almost entirely useless! If only half of the people taking the survey are Less Wrong participants, then the extra noise will overwhelm any signal whenever the probabilities returned by the actual members are near either extreme. Averaging probabilities (as opposed to, say, log-odds) is dubious enough even without throwing in a whole bunch of randoms!
(So thank you for providing the data!)
Running list of changes for next year's survey:
http://en.wikipedia.org/wiki/Intersex
Otherwise agreed.
Strongly disagree with previous self here. I do not think replacing "gender" with "sex" avoids complaints or "philosophizing", and "philosophizing" in context feels like a shorthand/epithet for "making this more complex than prevailing, mainstream views on gender."
For a start, it seems like even "sex" in the sense used here is getting at a mainly-social phenomenon: that of sex assigned at birth. This is a judgement call by the doctors and parents. The biological correlates used to make that decision are just weighed in aggregate; some people are always going to throw an exception. If you're not asking about the size of gametes and their delivery mechanism, the hormonal makeup of the person, their reproductive anatomy where applicable, or their secondary sexual characteristics, then "sex" is really just asking the "gender" question but hazily referring to biological characteristics instead.
Ultimately, gender is what you're really asking for. Using "sex" as a synonym blurs the data into unintelligibility for some LWers; pragmatically, it also amounts to a tacit "screw you" to trans people. I suggest biting the bullet and dealing with the complexity involved in asking that question -- in many situations people collecting that demographic info don't actually need it, but it seems like useful information for LessWrong.
A suggested approach:
Two optional questions with something like the following phrasing:
Optional: Gender (pick what best describes how you identify):
-Male
-Female
-Genderqueer, genderfluid, other
-None, neutrois, agender
-Prefer not to say
Optional: Sex assigned at birth:
-Male
-Female
-Intersex
-Prefer not to say
A series of four questions on each Myers-Briggs indicator would be good, although I'm sure the data would be woefully unsurprising. Perhaps link to an online test for people who don't know their type already.
Everyone who's suggesting changes: you are much more likely to get your way if you suggest a specific alternative. For example, instead of "handle politics better", something like "your politics question should have these five options: a, b, c, d, and e." Or instead of "use a more valid IQ measure", something more like "Here's a site with a quick and easy test that I think is valid"
In that case: use the exact ethics questions from the PhilPapers Survey (http://philpapers.org/surveys/), probably minus lean/accept distinction and the endless drop-down menu for "other."
Publish draft questions in advance, so we can spot issues before the survey goes live.
We should ask if people participated in the previous surveys.
When asking for race/ethnicity, you should really drop the standard American classification into White - Hispanic - Black - Indian - Asian - Other. From a non-American perspective this looks weird, especially the "White Hispanic" category. Is a Spaniard White Hispanic, or just White? If only White, how does the race change when one moves to another continent? And if White Hispanic, why not also have "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?
Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.
Also, making a statistic based on nationalities could help people determine whether there is a chance for a meetup in their country. And it would be nice to know whether LW has regular readers in Liechtenstein, of course.
I was also...well, not surprised per se, but certainly annoyed to see that "Native American" in any form wasn't even an option. One could construe that as revealing, I suppose.
I don't know how relevant the question actually is, but if we want to track ancestry and racial, ethnic or cultural group affiliation, the following scheme is pretty hard to mess up:
Country of origin: <drop-down list of countries>
Country of residence: <drop-down list with "same as origin" as the first option>
Primary Language: <Form Field>
Native Language (if different): <Form Field>
Heritage language (if different): <Form Field>
Note: A heritage language is one spoken by your family or identity group.
Heritage group:
Diaspora: Means your primary heritage and identity group moved to the country you live in within historical or living memory, as colonists, slaves, workers or settlers.
<radio buttons>
European diaspora ("white" North America, Australia, New Zealand, South Africa, etc)
African diaspora ("black" in the US, West Indian, more recent African emigrant groups; also North African diaspora)
Asian diaspora (includes Turkic, Arab, Persian, Central and South Asian, Siberian native)
Indigenous: Means your primary heritage and identity group was resident to the following location prior to 1400, OR prior to the arrival of the majority culture in antiquity (for example: Ainu, Basque, Taiwanese native, etc):
<radio buttons>
-Africa
-Asia
-Europe
-North America (between Panama and Canada, also includes Greenland and the Caribbean)
-Oceania (including Australia)
-South America
Mixed: Select two or more:
<check boxes>
European Diaspora
African Diaspora
Asian Diaspora
African Indigenous
American Indigenous
Asian Indigenous
European Indigenous
Oceania Indigenous
What the US census calls "Non-white Hispanic" would be marked as "Mixed" > "European Diaspora" + "American Indigenous" with Spanish as either a Native or Heritage language. Someone who identifies as (say) Mexican-derived but doesn't speak Spanish at all would be impossible to tell from someone who was Euro-American and Cherokee who doesn't speak Cherokee, but no system is perfect...
Put two spaces after a line if you want a linebreak.
Most LessWrong posters and readers are American, perhaps even the vast majority (I am not). Hispanic Americans differ from white Americans, who differ from black Americans, culturally and socio-economically, not just on average but in systemic ways, regardless of whether the person in question defines himself as Irish American, Kenyan American, white American, or just plain American. From the US we have robust sociological data that allows us to compare LWers based on this information. The same is true of race in Latin America, parts of Africa, and more recently Western Europe.
Nationality is not the same thing as racial or even ethnic identity in multicultural societies.
Considering every now and then people bring up a desire to lower barriers to entry for "minorities" (whatever that means in a global forum), such stats are useful for those who argue on such issues and also for ascertaining certain biases.
Adding a nationality and/or citizenship question would probably be useful though.
I have not said that it is. I was objecting to the arbitrariness of a "Hispanic race": I believe that the difference between Hispanic White Americans and non-Hispanic White Americans is not significantly greater than the difference between both groups and non-Americans, and that the proportion of non-Americans among LW users is higher than the 3.8% reported for Hispanics. I am not sure what exact sociological data we may extract from the survey, but in any case, comparison to standard American sociological datasets will be problematic because the LW data are contaminated by the presence of non-Americans, and there is no way to say by how much, because people were not asked about that.
Because we don't have as much useful sociological data on this. Obviously we can start collecting data on any of the proposed categories, but if we're the only ones, it won't much help us figure out how LW differs from what one might expect of a group that fits its demographic profile.
Much of the difference in the example of Turks is captured by the Muslim family background question.
Offer a text field for race. You'll get some odd answers, not to mention "human" or "other", but you could always use that to find out whether having a contrary streak about race/ethnicity correlates with anything.
If you want people to estimate whether a meetup could be worth it, I recommend location rather than nationality-- some nations are big enough that just knowing nationality isn't useful.
Suggestion: "Which of the following did you change your mind about after reading the sequences? (check all that apply)"
Many other things could be listed here.
I'm curious, what would you do with the results of such a question?
For my part, I suspect I would merely stare at them and be unsure what to make of a statistical result that aggregates "No, I already held the belief that the sequences attempted to convince me of" with "No, I held a contrary belief and the sequences failed to convince me otherwise." (That it also aggregates "Yes, I held a contrary belief and the sequences convinced me otherwise." and "Yes, I initially held the belief that the sequences attempted to convince me of, and the sequences convinced me otherwise" is less of a concern, since I expect the latter group to be pretty small.)
Originally I was going to suggest asking, "what were your religious beliefs before reading the sequences?"-- and then I succumbed to the programmer's urge to solve the general problem.
However, I guess measuring how effective the sequences are at causing people to change their mind is something that a LW survey can't do, anyway (you'd need to also ask people who read the sequences but didn't stick around to accurately answer that).
Mainly I was curious how many deconversions the sequences caused or hastened.
I think the question is too vague as formulated. Does any probability update, no matter how small, count as changing your mind? But if you ask for precise probability changes, then the answers will likely be nonsense because most people (even most LWers, I'd guess) don't keep track of numeric probabilities, just think "oh, this argument makes X a bit more believable" and such.
Replacing gender with sex seems like the wrong way to go to me. For example, note how Randall Munroe asked about sex, then regretted it.
Yet another alternate, culture-neutral way of asking about politics:
Q: How involved are you in your region's politics compared to other people in your region?
A: [choose one]
() I'm among the most involved
() I'm more involved than average
() I'm about as involved as average
() I'm less involved than average
() I'm among the least involved
Requires people to self-assess against a cultural baseline, and self-assessments of this sort are notoriously inaccurate. (I predict everyone will think they have above-average involvement.)
I'd actually have guessed an average of below average.
Bad prediction. While it's hard to say since so few people around here actually vote, my involvement in politics is close enough to 0 that I'd be very surprised if I was more involved than average.
I have exactly zero involvement and so I'd never think that.
Within a US-specific context, I would eschew these comparisons to a notional average and use the following levels of participation:
0 = indifferent to politics and ignorant of current events
1 = attentive to current events, but does not vote
2 = votes in presidential elections, but irregularly otherwise
3 = always votes
4 = always votes and contributes to political causes
5 = always votes, contributes, and engages in political activism during election seasons
6 = always votes, contributes, and engages in political activism both during and between election seasons
7 = runs for public office
I suspect that the average US citizen of voting age is a 2, but I don't have data to back that up, and I am not motivated to research it. I am a 4, so I do indeed think that I am above average.
Those categories could probably be modified pretty easily to match a parliamentary system by leaving out the reference to presidential elections and just having "votes irregularly" and "always votes"
Editing to add -- for mandatory voting jurisdictions, include a caveat that "spoiled ballot = did not vote"
I agree denotationally with that estimate, but I think you're putting too much emphasis on voting in at least the 0-4 range. Elections (in the US) only come up once or exceptionally twice a year, after all. If you're looking for an estimate of politics' significance to a person's overall life, I think you'd be better off measuring degree of engagement with current events and involvement in political groups -- the latter meaning not only directed activism, but also political blogs, non-activist societies with a partisan slant, and the like.
For example: do you now, or have you ever, owned a political bumper sticker?
There might be people who don't always (or even usually) vote yet they contribute to political causes/engage in political activism, for certain values of “political” at least.
Personally, I'm not sure I necessarily consider the person who runs for public office to be at a higher level of participation than the person who works for them.
I think I have average or below-average involvement.
Maybe it would be better to ask about the hours/year spent on politics.
I think using your stipulative definition of "supernatural" was a bad move. I would be very surprised if I asked a theologian to define "supernatural" and they replied "ontologically basic mental entities". Even as a rational reconstruction of their reply, it would be quite a stretch. Using such specific definitions of contentious concepts isn't a good idea, if you want to know what proportion of Less Wrongers self-identify as atheist/agnostic/deist/theist/polytheist.
OTOH, using a vague definition isn't a good idea either, if you want to know something about what Less Wrongers believe about the world.
I had no problem with the question as worded; it was polling about LWers confidence in a specific belief, using terms from the LW Sequences. That the particular belief is irrelevant to what people who self-identify as various groups consider important about that identification is important to remember, but not in and of itself a problem with the question.
But, yeah... if we want to know what proportion of LWers self-identify as (e.g.) atheist, that question won't tell us.
You should clarify in the antiagathics question that the person reaches the age of 1000 without the help of cryonics.
Suggestion: add "cryocrastinating" as a cryonics option.
One about nationality (and/or native language)? I guess that would be much more relevant than e.g. birth order.
Regarding #4, you could just write a % symbol to the right of each input box.
BTW, I'd also disallow 0 and 100, and give the option of giving log-odds instead of probability (and maybe encourage to do that for probabilities <1% and >99%). Someone's “epsilon” might be 10^-4 whereas someone else's might be 10^-30.
I'd force log odds, as they are the more natural representation and much less susceptible to irrational certainty and nonsense answers.
Someone has to actually try and comprehend what they are doing to troll logits; -INF seems a lot more out to lunch than p = 0.
I'd also like someone to actually go through the math and figure out how to correctly take the mean of probability estimates. I see no obvious reason why you can simply average probabilities on [0, 1]. The correct method would probably involve cooking up a hypothetical Bayesian judge that takes everyone's estimates as evidence.
Edit: since logits can be a bit unintuitive, I'd give a few calibration examples, like the odds of rolling a 6 on a die, the odds of winning some lottery, fair odds, the odds of surviving a car crash, etc.
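For concreteness, here's a minimal Python sketch of the log-odds aggregation being proposed — the function names and example numbers are mine, and this is one candidate pooling method, not the survey's actual procedure: convert each probability to logits, average, and convert back. Note that it requires every input to be strictly between 0 and 1.

```python
import math

def to_logits(p):
    """Probability in (0, 1) -> log-odds."""
    return math.log(p / (1 - p))

def from_logits(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

def logit_mean(probs):
    """Average the estimates in log-odds space; all inputs must satisfy 0 < p < 1."""
    return from_logits(sum(to_logits(p) for p in probs) / len(probs))

estimates = [0.9, 0.9, 0.5]                   # hypothetical respondent answers
plain_mean = sum(estimates) / len(estimates)  # ~0.77
pooled = logit_mean(estimates)                # ~0.81 -- confident answers pull harder
```

A respondent entering p = 0 or p = 1 maps to infinite log-odds, which is exactly the "irrational certainty" the parent comment wants to surface rather than silently average away.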
Personally, for probabilities roughly between 20% and 80%, I find probabilities (or non-log odds) easier to understand than log-odds.
Yeah. One of the reason why I proposed this is the median answer of 0 in several probability questions. (I'd also require a rationale in order to enter probabilities more extreme than 1%/99%.)
I'd go with the average of log-odds, but this requires all of them to be finite...
People will mess up the log-odds, though. Non-log odds seem safer.
Two fields instead of one, but it seems cleaner than any of the other alternatives.
The point is not having to type lots of zeros (or of nines) with extreme probabilities (so that people won't weasel out and use ‘epsilon’); having to type 1:999999999999999 is no improvement over having to type 0.000000000000001.
Is such precision meaningful? At least for me personally, 0.1% is about as low as I can meaningfully go - I can't really discriminate between me having an estimate of 0.1%, 0.001%, or 0.0000000000001%.
I expect this is incorrect.
Specifically, I would guess that you can distinguish the strength of your belief that a lottery ticket you might purchase will win the jackpot from one in a thousand (a.k.a. 0.1%). Am I mistaken?
That's a very special case -- in the case of the lottery, it is actually possible-in-principle to enumerate BIG_NUMBER equally likely mutually-exclusive outcomes. Same with getting the works of shakespeare out of your random number generator. The things under discussion don't have that quality.
I agree in principle, but on the other hand the questions on the survey are nowhere as easy as "what's the probability of winning such-and-such lottery".
You're right, good point.
I second that. See my post at http://lesswrong.com/r/discussion/lw/8lr/logodds_or_logits/ for a concise summary. Getting the LW survey to use log-odds would go a long way towards getting LW to start using log-odds in normal conversation.
I'd love a specific question on moral realism instead of leaving it as part of the normative ethics question. I'd also like to know about psychiatric diagnoses (autism spectrum, ADHD, depression, whatever else seems relevant)-- perhaps automatically remove those answers from a spreadsheet for privacy reasons.
I don't care about moral realism, but psychiatric diagnoses (and whether they're self-diagnosed or formally diagnosed) would be interesting.
You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?
Only if people here are less interested in applying probability theory than they are in philosophizing about gender... Oh.
Why not ask for both?
Because the two are so highly correlated that having both would give us almost no extra information. One goal of the survey should be to maximize the useful-info-extracted / time-spent-on-it ratio, hence also the avoidance of write-ins for many questions (which make people spend more time on the survey, to get results that are less exploitable) (a write-in for gender works because people are less likely to write a manifesto for that than for politics).
Because having a "gender" question causes complaints and philosophizing, which Yvain wants to avoid.
Maybe, but the sort of fresh meat we get is not at all independent of the old guard, so an initial bias could easily reproduce itself.
This is not just intriguing. To me this is the single most significant finding in the survey.
It's also worrying, because it means we're not getting better on average.
If the readership of LessWrong has gone up similarly in that time, then I would not expect to see an improvement, even if everyone who reads LessWrong improves.
Couldn't the current or future data be correlated with length of readership to determine this?
Yes, I was thinking that. Suppose it takes a certain fixed amount of time for any LessWronger to learn the local official truth. Then if the population grows exponentially, you'd expect the fraction that knows the local official truth to remain constant, right? But I'm not sure the population has been growing exponentially, and even so you might have expected the local official truth to become more accurate over time, and you might have expected the community to get better over time at imparting the local official truth.
Regardless of what we should have expected, my impression is LessWrong as a whole tends to assume that it's getting closer to the truth over time. If that's not happening because of newcomers, that's worth worrying about.
Note that it is possible for newcomers to hold the same inaccurate beliefs as their predecessors while the core improves its knowledge or expands in size. In fact, as LW grows it will have to recruit from, say, Hacker News (where I first heard of LW) instead of Singularity lists, producing newcomers less in tune with the local truth.
(Unnamed's comment shows interesting differences in opinion between a "core" and the rest, but (s)he seems to have skipped the only question with an easily-verified answer, i.e. Newton.)
The calibration question was more complicated to analyze, but now I've looked at it and it seems like core members were slightly more accurate at estimating the correct year (p=.05 when looking at size of the error, and p=.12 when looking at whether or not it was within the 20-year range), but there's no difference in calibration.
("He", btw.)
It just means that we're at a specific point in memespace. The hypothesis that we are all rational enough to identify the right answers to all of these questions wouldn't explain the observed degree of variance.
Suggestion: Show these questions in random order to half of people, and show only one of the questions to the other half, to get data on anchoring.
Or show the questions in one order to a fourth of the people, the reverse order to another fourth, one of the questions alone to a third fourth, and the other question alone to the last fourth.
??
It's barely above background noise, but my guess is that when specifically asked about ontologically basic mental entities, people will say no (or "huh?"), but when asked about God, a few will decline to define supernatural in that way or decline to insist on God as supernatural.
It's an odd result if you think everyone is being completely consistent about how they answer all the questions, but if you ask me, if they all were it would be an odd result in itself.
Could someone break down what is meant by "ontologically basic mental entities"? Especially, I'm not certain of the role of the word 'mental'.
It's a bit of a nonstandard definition of the supernatural, but I took it to mean mental phenomena as causeless nodes in a causal graph: that is, that mental phenomena (thoughts, feelings, "souls") exist which do not have physical causes and yet generate physical consequences. By this interpretation, libertarian free will and most conceptions of the soul would both fall under supernaturalism, as would the prerequisites for most types of magic, gods, spirits, etc.
I'm not sure I'd have picked that phrasing, though. It seems to be entangled with epistemological reductionism in a way that might, for a sufficiently careful reading, obscure more conventional conceptions of the "supernatural": I'd expect more people to believe in naive versions of free will than do in, say, fairies. Still, it's a pretty fuzzy concept to begin with.
So deism (God creating the universe but not being involved in the universe once it began) could make p(God) > p(Supernatural).
Looking at the data by individual instead of in aggregate, 82 people have p(God) > p(Supernatural); 223 have p(Supernatural) > p(God).
Given this, the numbers no longer seem anomalous. Thank you.
Rather, believe the probability of cryonics producing a favorable outcome to be less. This was a confusing question, because it wasn't specified whether it meant total probability; if it did, then the probability of global catastrophe had to be taken into account, and, depending on your expectation about the usefulness of frozen heads to an FAI's values, the probability of FAI as well (in addition to the usual failure-of-preservation risks). As a result, even though I'm almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as "doesn't believe in cryonics"?
(The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is a FAI, it's also unclear whether original individuals remain and it makes sense to count their individual lifespans.)
Good point, and I think it explains one of the funny results that I found in the data. There was a relationship between strength of membership in the LW community and the answers to a lot of the questions, but the anti-agathics question was the one case where there was a clear non-monotonic relationship. People with a moderate strength of membership (nonzero but small karma, read 25-50% of the sequences, or been in the LW community for 1-2 years) were the most likely to think that at least one currently living person will reach an age of 1,000 years; those with a stronger or weaker tie to LW gave lower estimates.
There was some suggestion of a similar pattern on the cryonics question, but it was only there for the sequence reading measure of strength of membership and not for the other two.
So the 50% age is 25 and the 50% estimate is 2080? A 25 year old has a life expectancy of, what, another 50 years? 2011+50=2061, or 19 years short of the Singularity!
Either people are rather optimistic about future life-extension (despite 'Someone now living will reach age 1000: 23.6'), or the Maes-Garreau Law may not be such a law.
I would interpret "the latest possible date a prediction can come true and still remain in the lifetime of the person making it" so that "lifetime" means the longest typical lifetime, rather than an actuarial average. So -- we know lots of people who live to 95, so that seems like it's within our possible lifetime. I certainly could live to 95, even if it's less than a 50/50 shot.
One other bit -- the average life expectancy is for the entire population, but the average life expectancy of white, college educated persons earning (or expected to earn) a first or second quintile income is quite a bit higher, and a very high proportion of LWers fall into that demographic. I took a quick actuarial survey a few months back that suggested my life expectancy given my family age/medical history, demographics, etc. was to reach 92 (I'm currently 43).
Or we have family histories that give us good reason to think we'll outlive the mean, even without drastic increases in the pace of technology. That would describe me. Even without that, just living to 25 increases your life expectancy by quite a bit, as all those really low numbers play heck with an average.
Or we're overconfident in our life expectancy because of some cognitive bias.
I should come clean, I lied when I claimed to be guessing about the 50 year old thing; before writing that, I actually consulted one of the usual actuarial tables which specifies that a 25 year old can only expect an average 51.8 more years. (The number was not based on life expectancy from birth.)
The actuarial table is based on an extrapolation of 2007 mortality rates for the rest of the population's lives. That sounds like a pretty shaky premise.
Why would you think that? Mortality rates have, in fact, gone upward in the past few years for many subpopulations (e.g. some female demographics have seen their absolute lifespan expectancy fall), and before that, decreases in old-adult mortality were tiny.
(And doesn't that imply deceleration? 20 years is 1/5 of the period, and over the period, 6 years were gained; 1/5 * 6 > 1.)
Which is a shakier premise, that trends will continue, or that SENS will be a wild success greater than, say, the War on Cancer?
I didn't say that lifespans would necessarily become greater in that period, but several decades is time for the rates to change quite a lot. And while public health has become worse in recent decades in a number of ways (obesity epidemic, lower rates of exercise), technologies have been developed which improve the prognoses for a lot of ailments (we may not have cured cancer yet, but many forms are much more treatable than they used to be).
If all the supposed medical discoveries I hear about on a regular basis were all they're cracked up to be, we would already have a generalized cure for cancer and ageless mice, if not ageless humans. But even if we assume no 'magic bullet' innovations in the meantime, the benefits of incrementally advancing technology are likely to outpace decreases in health, if only because the population can only get so much fatter and more out of shape before further proliferation of superstimulus foods and sedentary activities stops making any difference.
Which is already built into the quoted longevity increases. (See also the Gompertz curve.)
Right, my point is that SENS research, which is a fairly new field, doesn't have to be dramatically more successful than cancer research to produce tangible returns in human life expectancy, and the deceleration in increase of life expectancy is most likely due to a negative health trend which is likely not to endure over the entire interval.
Michael Vassar has mentioned to me that the proportion of first/only children at LW is extremely high. I'm not sure whether birth order makes a big difference, but it might be worth asking about. By the way, I'm not only first-born, I'm the first grandchild on both sides.
Questions about akrasia-- Do you have no/mild/moderate/serious problems with it? Has anything on LW helped?
I left some of the probability questions blank because I realized I had no idea of a sensible probability; I especially mean the question of whether we're living in a simulation.
It might be interesting to ask people whether they usually vote.
The link to the survey doesn't work because the survey is closed-- could you make the text of the survey available?
Only for those living in countries where voting is non-mandatory
I'm a twin who's 2 minutes younger than my first-born sibling. Be careful how you ask about birth order.
Good point.
Maybe the survey should be shown to beta readers or put up for discussion (except for the obscure-fact calibration questions) to improve the odds of detecting questions that don't work the way it's hoped they will.
So am I! I wonder if being the first-born is genetically heritable.
Only child; both parents oldest siblings. Of course this configuration isn't monstrously rare; we should expect a fair few instances just by chance.
This is probably just intended as a joke; but it seems pretty plausible that having few children is heritable (though it had better not be too heritable, else small families will simply die out), and the fraction of first-borns is larger in smaller families.
Ditto :) but I intend to reproduce eventually in maximum useful volume.
Yes. Being first-born is correlated with having few siblings, which is correlated with parents with low fertility, which is genetically inherited from grandparents with low fertility, which is correlated with your parents having few siblings, which is correlated with them being first-born.
I agree with your conclusion that the heritability of firstbornness is nonzero, but I'm not sure this reasoning is valid. (Pearson) correlation is not, in general, transitive: if X is correlated with Y and Y is correlated with Z, it does not necessarily follow that X is correlated with Z unless the squares of the correlation coefficients between X and Y and between Y and Z sum to more than one.
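A minimal numeric sketch of the non-transitivity point (the data here is made up purely for illustration): X and Z are completely uncorrelated, even though each correlates strongly with Y = X + Z, because the squared correlations sum to exactly 1 rather than more than 1.

```python
def pearson(xs, ys):
    # Plain Pearson correlation, computed from sums (no libraries needed).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

X = [1, -1, 1, -1]   # made-up data for illustration
Z = [1, 1, -1, -1]   # independent of X
Y = [x + z for x, z in zip(X, Z)]

print(pearson(X, Y))  # ~0.707, so r_XY^2 = 0.5
print(pearson(Z, Y))  # ~0.707, so r_YZ^2 = 0.5
print(pearson(X, Z))  # 0.0 -- the squared r's sum to exactly 1, so zero is allowed
```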
Actually calculating the heritability of firstbornness turns out to be a nontrivial math problem. For example, while it is obvious that having few siblings is correlated with being firstborn, it's not obvious to me exactly what that correlation coefficient should be, nor how to calculate it from first principles. When I don't know how to solve a problem from first principles, my first instinct is to simulate it, so I wrote a short script to calculate the Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n. It turns out that the correlation decreases as n gets larger (from ~0.58 for n=2 to ~0.31 for n=50 [edited from earlier values]), which fact probably has an obvious-in-retrospect intuitive explanation which I am somehow having trouble articulating explicitly ...
Ultimately, however, other priorities prevent me from continuing this line of inquiry at the present moment.
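The script itself wasn't posted, so the following is a reconstruction under an assumed sampling scheme: pick a family size uniformly from 1..n, then pick a child uniformly within that family. Computed exactly with weights instead of random sampling, this scheme reproduces the quoted values (~0.58 for n=2, ~0.31 for n=50).

```python
def sibling_corr(n):
    # X = number of siblings (s - 1); Y = 1 if the child is not the firstborn.
    # Each family size s in 1..n has equal weight; each of its s children has
    # weight 1/s. Accumulate weighted sums for an exact Pearson correlation.
    W = sx = sy = sxy = sx2 = 0.0
    for s in range(1, n + 1):
        w = 1.0 / s                   # weight of each child within its family
        W += 1.0                      # total weight contributed by this family size
        sx += s * w * (s - 1)         # sum of X over the family's children
        sy += (s - 1) * w             # the s - 1 later-borns have Y = 1
        sxy += (s - 1) ** 2 * w       # X * Y (Y is 1 only for later-borns)
        sx2 += s * w * (s - 1) ** 2   # X squared
    num = W * sxy - sx * sy
    # Y is a 0/1 indicator, so the sum of Y**2 equals the sum of Y.
    den = ((W * sx2 - sx ** 2) * (W * sy - sy ** 2)) ** 0.5
    return num / den

print(sibling_corr(2))   # ~0.577
print(sibling_corr(50))  # ~0.31
```

If the original script instead weighted every child in the population equally, the numbers come out lower (0.5 for n=2), which may explain the edited figures above.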
I'm confused: does this make sense for n=1? (Your code suggests that that should be n=2, maybe?)
There was a poll about firstborns.
I've long been interested in whether Eliezer's fanfiction is an effective strategy, since it's so attention-getting (when Eliezer popped up in The New Yorker recently, pretty much his whole blurb was a description of MoR).
Of the listed strategies, only 'blogs' was greater than MoR. The long tail is particularly worrisome to me: LW/OB have frequently been linked in or submitted to Reddit and Hacker News, but those two account for only 14 people? Admittedly, weak SEO in the sense of submitting links to social news sites is a lot less time intensive than writing 1200 page Harry Potter fanfics and Louie has been complaining about us not doing even that, but still, the numbers look to be in MoR's favor.
Keep in mind that many of these links were a long time ago. I came here from Overcoming Bias, but I came to Overcoming Bias from Hacker News.
As with the last survey, it's amazing how casually many people assign probabilities like 1% and 99%. I can understand in a few cases, like the religion questions, and Fermi-based answers to the aliens in the galaxy question. But on the whole it looks like many survey takers are just failing the absolute basics: don't assign extreme probabilities without extreme justification.
On the other hand, conjunctive bias exists. It's not hard to string together enough conjunctions that the probability of the statement should be in an extreme range.
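A quick illustration of the conjunction point (the numbers are arbitrary): twenty independent sub-claims, each quite plausible on its own, already push the conjunction well below 1% without any single extraordinary assumption.

```python
p_each = 0.7      # each individual sub-claim is fairly likely on its own
n_claims = 20     # ...but the statement requires all of them to hold
p_all = p_each ** n_claims  # independence assumed, purely for illustration
print(f"P(conjunction) = {p_all:.4%}")  # roughly 0.08%, an "extreme" probability
```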
Does this describe any of the poll questions?
Results from 2009.
Are the questions for the 2009 survey available somewhere?
You can now access it at https://docs.google.com/spreadsheet/viewform?hl=en_US&formkey=cF9KNGNtbFJXQ1JKM0RqTkxQNUY3Y3c6MA..#gid=0
Maybe people were expecting the average IQ to turn out to be about the same as in the previous survey, and... (Well, I kind-of was, at least.)
I am officially very surprised at how many that is. Also officially, poorly calibrated at both the 50% (no big deal) and the 90% (ouch, ouch, ouch) confidence levels.
It looks like about 6% of respondents gave their answers in decimal probabilities instead of percentages. 108 of the 930 people in the data file didn't have any answers over 1 for any of the probability questions, and 52 of those did have some answers (the other 56 left them all blank), which suggests that those 52 people were using decimals (and that's 6% of the 874 who answered at least one of the questions). So to get more accurate estimates of the means for the probability questions, you should either multiply those respondents' answers by 100, exclude those respondents when calculating the means, or multiply the means that you got by 1.06.
=IF(MAX(X2:AH2)<1.00001,1,0) is the Excel formula I used to find those 108 people (in row 2, then copied and pasted to the rest of the rows)
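For those not using Excel, here is a rough Python equivalent of the cleanup described above (the column names in the example are hypothetical; the real survey file's layout may differ). It shares the Excel heuristic's blind spot: a percentage-using respondent whose answers all happen to be 1 or below would be wrongly rescaled.

```python
def fix_decimal_probabilities(rows, prob_questions):
    # rows: list of dicts mapping question name -> answer (None for blanks).
    # Respondents whose probability answers never exceed 1 presumably answered
    # in decimal probabilities rather than percentages, so rescale them by 100.
    fixed = []
    for row in rows:
        answers = [row[q] for q in prob_questions if row[q] is not None]
        if answers and max(answers) <= 1:
            row = dict(row)  # copy so the original data stays untouched
            for q in prob_questions:
                if row[q] is not None:
                    row[q] *= 100
        fixed.append(row)
    return fixed
```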
Are there any significant differences in gender or age (or anything else notable) between the group who chose to keep their responses private and the rest of the respondents?
You have to admit, that's pretty awful. There's only a 20% difference, is that so?