Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

2011 Survey Results

93 Post author: Yvain 05 December 2011 10:49AM

A big thank you to the 1090 people who took the second Less Wrong Census/Survey.

Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.

I apologize for failing to ask who had or did not have an LW account. Because there are a number of these failures, I'm putting them all in a comment to this post so they don't clutter the survey results. Please talk about changes you want for next year's survey there.

Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percentage of women has tripled since the last survey in mid-2009.

We're also a little more diverse than we were in 2009; our percentage of non-whites has risen from 6% to just below 10%. Along with 944 whites (86%) we include 38 Hispanics (3.5%), 31 East Asians (2.8%), 26 Indian Asians (2.4%) and 4 blacks (.4%).

Age ranged from a supposed minimum of 1 (they start making rationalists early these days?) to a more plausible minimum of 14, to a maximum of 77. The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60. The average Less Wronger has aged about one week since spring 2009 - so clearly all those anti-agathics we're taking are working!

In order of frequency, we include 366 computer scientists (32.6%), 174 people in the hard sciences (16%), 80 people in finance (7.3%), 63 people in the social sciences (5.8%), 43 people involved in AI (3.9%), 39 philosophers (3.6%), 15 mathematicians (1.5%), 15 people involved in law (1.5%), 14 statisticians (1.3%), and 5 people in medicine (.5%).

48 of us (4.4%) teach in academia, 470 (43.1%) are students, 417 (38.3%) do for-profit work, 34 (3.1%) do non-profit work, 41 (3.8%) work for the government, and 72 (6.6%) are unemployed.

418 people (38.3%) have yet to receive any degrees, 400 (36.7%) have a Bachelor's or equivalent, 175 (16.1%) have a Master's or equivalent, 65 people (6%) have a Ph.D, and 19 people (1.7%) have a professional degree such as an MD or JD.

345 people (31.7%) are single and looking, 250 (22.9%) are single but not looking, 286 (26.2%) are in a relationship, and 201 (18.4%) are married. There are striking differences between men and women: women are more likely to be in a relationship and less likely to be single and looking (33% of men vs. 19% of women). All of these numbers look a lot like the ones from 2009.

27 people (2.5%) are asexual, 119 (10.9%) are bisexual, 24 (2.2%) are homosexual, and 902 (82.8%) are heterosexual.

625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn't really know. These numbers were similar between men and women.

The most popular political view, at least according to the much-maligned categories on the survey, was liberalism, with 376 adherents and 34.5% of the vote. Libertarianism followed at 352 (32.3%), then socialism at 290 (26.6%), conservatism at 30 (2.8%) and communism at 5 (.5%).

680 people (62.4%) were consequentialist, 152 (13.9%) virtue ethicist, 49 (4.5%) deontologist, and 145 (13.3%) did not believe in morality.

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual, 97 (8.9%) were agnostic, 30 (2.8%) were deist or pantheist or something along those lines, and 39 people (3.5%) described themselves as theists (20 committed plus 19 lukewarm).

425 people (38.1%) grew up in some flavor of nontheist family, compared to 297 (27.2%) in committed theist families and 356 (32.7%) in lukewarm theist families. Common family religious backgrounds included Protestantism with 451 people (41.4%), Catholicism with 289 (26.5%), Judaism with 102 (9.4%), Hinduism with 20 (1.8%), Mormonism with 17 (1.6%), and traditional Chinese religion with 13 (1.2%).

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140. Numbers ranged from 110 all the way up to 204 (for reference, Marilyn vos Savant, who holds the Guinness World Record for highest adult IQ ever recorded, has an IQ of 185).

89 people (8.2%) have never looked at the Sequences; a further 234 (21.5%) have only given them a quick glance. 170 people (15.6%) have read about 25% of the Sequences, 169 (15.5%) about 50%, 167 (15.3%) about 75%, and 253 people (23.2%) said they've read almost all of them. This last number is actually lower than the 302 people who have been here since the Overcoming Bias days when the Sequences were still being written (27.7% of us).

The other 72.3% of people had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three-way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

Of those people who chose to list their karma, the mean value was 658 and the median was 40 (these numbers are pretty meaningless, because some people with zero karma put that down and other people did not).

Of those people willing to admit the time they spent on Less Wrong, after eliminating one outlier (sorry, but you don't spend 40579 minutes daily on LW; even I don't spend that long) the mean was 21 minutes and the median was 15 minutes. There were at least a dozen people in the two to three hour range, and the winner (well, except the 40579 guy) was someone who says he spends five hours a day.

I'm going to give all the probabilities in the form [mean, (25%-quartile, 50%-quartile/median, 75%-quartile)]. There may have been some problems here with people who gave numbers like .01: I didn't know whether they meant 1% or .01%. Excel helpfully rounded all numbers down to two decimal places for me, and after a while I decided not to make it stop: unless I wanted to do geometric means, I can't do justice to really small gradations in probability.
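To make the arithmetic-versus-geometric distinction concrete, here is a small Python sketch; the answer values are invented for illustration, not taken from the survey:

```python
import math

# Invented probability answers, in percent, for a question with a long
# tail of very small values (e.g. "some revealed religion is true").
answers = [0.01, 0.1, 0.5, 1.0, 5.0, 20.0]

# The arithmetic mean is dominated by the few large answers...
arithmetic_mean = sum(answers) / len(answers)

# ...while the geometric mean treats the step from 0.01% to 0.1% as
# seriously as the step from 2% to 20%, so tiny answers still matter.
geometric_mean = math.exp(sum(math.log(a) for a in answers) / len(answers))

print(arithmetic_mean)  # about 4.4
print(geometric_mean)   # about 0.61
```

(A zero answer would break the geometric mean outright, which is one more reason to fall back on arithmetic means here.)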

The Many Worlds hypothesis is true: 56.5, (30, 65, 80)
There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)
The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)
God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)
Average person cryonically frozen today will be successfully revived: 21.1, (1, 10, 30)
Someone now living will reach age 1000: 23.6, (1, 10, 30)
We are living in a simulation: 19, (.23, 5, 33)
Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)
Humanity will make it to 2100 without a catastrophe killing >90% of us: 67.6, (50, 80, 90)

There were a few significant demographic differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy. Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke). People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader. Unfriendly AI followed with 180 votes (16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes (13.3%), economic/political collapse with 134 votes (12.3%), and asteroids and nanotech bringing up the rear with 46 votes each (4.2%).

The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).
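For readers curious what such a check involves, here is a minimal Welch's t-test in Python; the two groups of year guesses below are invented for illustration, not the actual survey responses:

```python
import statistics

# Hypothetical Singularity-year guesses: "before" respondents saw 2100
# mentioned in the preceding x-risk question; "after" respondents took
# the reordered survey. These numbers are made up for illustration.
before = [2080, 2100, 2060, 2150, 2050, 2090]
after = [2075, 2110, 2055, 2140, 2060, 2095]

def welch_t(x, y):
    """Welch's t statistic for two independent samples with unequal variances."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (statistics.mean(x) - statistics.mean(y)) / (vx / len(x) + vy / len(y)) ** 0.5

t = welch_t(before, after)
# |t| far below the large-sample 5% cutoff of ~1.96: no detectable
# anchoring effect in this toy data.
print(abs(t) < 1.96)
```

With real data one would also compute a p-value from the t distribution (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`), but comparing |t| against the ~1.96 cutoff is enough for a quick check.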

Only 49 people (4.5%) have never considered cryonics or don't know what it is. 388 (35.6%) reject it, 583 (53.5%) are considering it, and 47 (4.3%) are already signed up for it. That's more than double the percent signed up in 2009.

231 people (23.4% of respondents) have attended a Less Wrong meetup.

The average person was 37.6% sure their IQ would be above average - underconfident! Imagine that! (quartiles were 10, 40, 60). The mean was 54.5% for people whose IQs really were above average, and 29.7% for people whose IQs really were below average. There was a correlation of .479 (significant at less than 1% level) between IQ and confidence in high IQ.
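The correlation reported here is a plain Pearson r, which is easy to compute by hand; the (IQ, confidence) pairs below are invented for illustration:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented pairs: IQ vs. % confidence that one's IQ is above average
iq = [110, 120, 130, 140, 150, 160]
confidence = [40, 20, 70, 30, 80, 50]

r = pearson_r(iq, confidence)
print(round(r, 2))  # a moderate positive correlation, about 0.44
```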

Isaac Newton published his Principia Mathematica in 1687. Although people guessed dates as early as 1250 and as late as 1960, the mean was...1687 (quartiles were 1650, 1680, 1720). This marks the second consecutive year that the average answer to these difficult historical questions has been exactly right (to be fair, last time it was the median that was exactly right and the mean was all of eight months off). Let no one ever say that the wisdom of crowds is not a powerful tool.

The average person was 34.3% confident in their answer, but 41.9% of people got the question right (again with the underconfidence!). There was a highly significant correlation of r = -.24 between confidence and number of years error.

This graph may take some work to read. The x-axis is confidence. The y-axis is what percent of people were correct at that confidence level. The red line you recognize as perfect calibration. The thick green line is your results from the Newton problem. The black line is results from the general population I got from a different calibration experiment tested on 50 random trivia questions; take the intercomparability of the two with a grain of salt.

As you can see, Less Wrong does significantly better than the general population. However, there are a few areas of failure. First, as usual, people who put zero and one hundred percent had nonzero chances of getting the question right or wrong: 16.7% of people who put "0" were right, and 28.6% of people who put "100" were wrong (interestingly, people who put 100 did worse than the average of everyone else in the 90-99 bracket, of whom only 12.2% erred). Second, the line is pretty horizontal from zero to fifty or so. People who thought they had a >50% chance of being right had excellent calibration, but people who gave themselves a low chance of being right were poorly calibrated. In particular, I was surprised to see so many people put numbers like "0". If you're pretty sure Newton lived after the birth of Christ, but before the present day, that alone gives you a 1% chance of randomly picking the correct 20-year interval.
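That 1% floor is simple arithmetic, sketched here:

```python
# Knowing only that Newton lived between year 1 and the present (2011),
# a uniformly random guess still lands inside the correct 20-year
# interval about 1% of the time -- so "0% confident" is never calibrated.
span_years = 2011
window_years = 20

floor_probability = window_years / span_years
print(floor_probability * 100)  # just under 1 percent
```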

160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.

Comments (513)

Comment author: Jack 04 December 2011 08:39:02PM 43 points [-]

People who believed in high existential risk were ... more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Aliens existing but not yet colonizing multiple systems or broadcasting heavily is the response consistent with the belief that a Great Filter lies in front of us.

Comment author: Unnamed 05 December 2011 07:20:42PM *  37 points [-]

Strength of membership in the LW community was related to responses for most of the questions. There were 3 questions related to strength of membership: karma, sequence reading, and time in the community, and since they were all correlated with each other and showed similar patterns I standardized them and averaged them together into a single measure. Then I checked if this measure of strength in membership in the LW community was related to answers on each of the other questions, for the 822 respondents (described in this comment) who answered at least one of the probability questions and used percentages rather than decimals (since I didn't want to take the time to recode the answers which were given as decimals).
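(For the curious, the standardize-and-average step described above looks roughly like this in Python; the five respondents' values are made up for illustration:)

```python
import statistics

# Invented values for five hypothetical respondents on the three
# strength-of-membership measures.
karma = [0, 50, 1200, 10, 300]
sequences_read = [0.10, 0.50, 1.00, 0.25, 0.75]  # fraction of Sequences read
years_in_community = [0.5, 1, 4, 1, 2]

def zscores(xs):
    """Standardize a list: subtract the mean, divide by the sample stdev."""
    mu, sd = statistics.mean(xs), statistics.stdev(xs)
    return [(x - mu) / sd for x in xs]

# Average the three standardized measures into one composite per respondent.
composite = [statistics.mean(triple)
             for triple in zip(*map(zscores, (karma, sequences_read, years_in_community)))]

# Respondent at index 2 is high on all three measures, so scores highest.
print(composite.index(max(composite)))
```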

All effects described below have p < .01 (I also indicate when there is a nonsignificant trend with p<.2). On questions with categories I wasn't that rigorous - if there was a significant effect overall I just eyeballed the differences and reported which categories have the clearest difference (and I skipped some of the background questions which had tons of different categories and are hard to interpret).

Compared to those with a less strong membership in the LW community, those with a strong tie to the community are:

Background:

  • Gender - no difference
  • Age - no difference
  • Relationship Status - no difference
  • Sexual Orientation - no difference
  • Relationship Style - less likely to prefer monogamous, more likely to prefer polyamorous or to have no preference
  • Political Views - less likely to be socialist, more likely to be libertarian (but this is driven by the length of time in the community, which may reflect changing demographics - see my reply to this comment)
  • Religious Views - more likely to be atheist & not spiritual, especially less likely to be agnostic
  • Family Religion - no difference
  • Moral Views - more likely to be consequentialist
  • IQ - higher

Probabilities:

  • Many Worlds - higher
  • Aliens in the universe - lower (edited: I had mistakenly reversed the two aliens questions)
  • Aliens in our galaxy - trend towards lower (p=.04)
  • Supernatural - lower
  • God - lower
  • Religion - trend towards lower (p=.11, and this is statistically significant with a different analysis)
  • Cryonics - lower
  • Anti-Agathics - trend towards higher (p=.13) (this was the one question with a significant non-monotonic relationship: those with a moderately strong tie to the community had the highest probability estimate, while those with weak or strong ties had lower estimates)
  • Simulation - trend towards higher (p=.20)
  • Global Warming - higher
  • No Catastrophe - lower (i.e., think it is less likely that we will make it to 2100 without a catastrophe, i.e. think the chances of xrisk are higher)

Other Questions:

  • Singularity - sooner (this is statistically significant after truncating the outliers), and more likely to give an estimate rather than leave it blank
  • Type of XRisk - more likely to think that Unfriendly AI is the most likely XRisk
  • Cryonics Status - More likely to be signed up or to be considering it, less likely to be not planning to or to not have thought about it
Comment author: Unnamed 05 December 2011 09:04:56PM *  18 points [-]

Political Views - less likely to be socialist, more likely to be libertarian

I looked at this one a little more closely, and this difference in political views is driven almost entirely by the "time in community" measure of strength of membership in the LW community; it's not even statistically significant with the other two. I'd guess that is because LW started out on Overcoming Bias, which is a relatively libertarian blog, so the old timers tend to share those views. We've also probably added more non-Americans over time, who are more likely to be socialist.

All of the other relationships in the above post hold up when we replace the original measure of membership strength with one that is only based on the two variables of karma & sequence reading, but this one does not.

Comment author: Normal_Anomaly 07 December 2011 09:58:05PM 5 points [-]

Cryonics - lower

Cryonics Status - More likely to be signed up or to be considering it, less likely to be not planning to or to not have thought about it

So long-time participants were less likely to believe that cryonics would work for them but more likely to sign up for it? Interesting. This could be driven by any of: fluke, greater rationality, greater age&income, less akrasia, more willingness to take long-shot bets based on shutting up and multiplying.

Comment author: Unnamed 08 December 2011 03:43:47AM *  2 points [-]

I looked into this a little more, and it looks like those who are strongly tied to the LW community are less likely to give high answers to p(cryonics) (p>50%), but not any more or less likely to give low answers (p<10%). That reduction in high answers could be a sign of greater rationality - less affect heuristic driven irrational exuberance about the prospects for cryonics - or just more knowledge about the topic. But I'm surprised that there's no change in the frequency of low answers.

There is a similar pattern in the relationship between cryonics status and p(cryonics). Those who are signed up for cryonics don't give a higher p(cryonics) on average than those who are not signed up, but they are less likely to give a probability under 10%. The group with the highest average p(cryonics) is those who aren't signed up but are considering it, and that's the group that's most likely to give a probability over 50%.

Here are the results for p(cryonics) broken down by cryonics status, showing what percent of each group gave p(cryonics)<.1, what percent gave p(cryonics)>.5, and what the average p(cryonics) is for each group. (I'm expressing p(cryonics) here as probabilities from 0-1 because I think it's easier to follow that way, since I'm giving the percent of people in each group.)

Never thought about it / don't understand (n=26): 58% give p<.1, 8% give p>.5, mean p=.17
No, and not planning to (n=289): 60% give p<.1, 6% give p>.5, mean p=.14
No, but considering it (n=444): 38% give p < .1, 18% give p>.5, mean p=.27
Yes - signed up or just finishing up paperwork (n=36): 39% give p<.1, 8% give p>.5, mean p=.21
Overall: 47% give p<.1, 13% give p>.5, mean p=.22

Comment author: ewbrownv 12 December 2011 11:20:24PM 2 points [-]

The existential risk questions are a confounding factor here - if you think p(cryonics works) 80%, but p(xrisk ends civilization) 50%, that pulls down your p(successful revival) considerably.

Comment author: Unnamed 13 December 2011 12:45:18AM 2 points [-]

I wondered about that, but p(cryonics) and p(xrisk) are actually uncorrelated, and the pattern of results for p(cryonics) remains the same when controlling statistically for p(xrisk).

Comment author: Craig_Heldreth 04 December 2011 08:00:47PM 33 points [-]

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

This is not just intriguing. To me this is the single most significant finding in the survey.

Comment author: steven0461 05 December 2011 03:16:44AM *  11 points [-]

It's also worrying, because it means we're not getting better on average.

Comment author: RichardKennaway 05 December 2011 12:59:48PM 15 points [-]

If the readership of LessWrong has gone up similarly in that time, then I would not expect to see an improvement, even if everyone who reads LessWrong improves.

Comment author: steven0461 05 December 2011 11:06:43PM *  5 points [-]

Yes, I was thinking that. Suppose it takes a certain fixed amount of time for any LessWronger to learn the local official truth. Then if the population grows exponentially, you'd expect the fraction that knows the local official truth to remain constant, right? But I'm not sure the population has been growing exponentially, and even so you might have expected the local official truth to become more accurate over time, and you might have expected the community to get better over time at imparting the local official truth.

Regardless of what we should have expected, my impression is LessWrong as a whole tends to assume that it's getting closer to the truth over time. If that's not happening because of newcomers, that's worth worrying about.

Comment author: endoself 05 December 2011 03:53:48AM *  3 points [-]

It just means that we're at a specific point in memespace. The hypothesis that we are all rational enough to identify the right answers to all of these questions wouldn't explain the observed degree of variance.

Comment author: Vladimir_Nesov 04 December 2011 08:53:18PM *  21 points [-]

"less likely to believe in cryonics"

Rather, they believe the probability of cryonics producing a favorable outcome to be lower. This was a confusing question, because it wasn't specified whether it's the total probability: if it is, then the probability of global catastrophe has to be taken into account, and, depending on your expectation about the usefulness of frozen heads to an FAI's values, the probability of FAI as well (in addition to the usual failure-of-preservation risks). As a result, even though I'm almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as "doesn't believe in cryonics"?

(The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is a FAI, it's also unclear whether original individuals remain and it makes sense to count their individual lifespans.)

Comment author: Unnamed 05 December 2011 08:30:01PM 2 points [-]

The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is a FAI, it's also unclear whether original individuals remain and it makes sense to count their individual lifespans.

Good point, and I think it explains one of the funny results that I found in the data. There was a relationship between strength of membership in the LW community and the answers to a lot of the questions, but the anti-agathics question was the one case where there was a clear non-monotonic relationship. People with a moderate strength of membership (nonzero but small karma, read 25-50% of the sequences, or been in the LW community for 1-2 years) were the most likely to think that at least one currently living person will reach an age of 1,000 years; those with a stronger or weaker tie to LW gave lower estimates.

There was some suggestion of a similar pattern on the cryonics question, but it was only there for the sequence reading measure of strength of membership and not for the other two.

Comment author: amcknight 06 December 2011 04:04:25AM *  17 points [-]

In case anyone's interested in how we compare to philosophers about ethics:

PhilPapers (931 people, mainly philosophy grad students and professors):
Normative ethics: deontology, consequentialism, or virtue ethics?
Other 301 / 931 (32.3%)
Accept or lean toward: deontology 241 / 931 (25.8%)
Accept or lean toward: consequentialism 220 / 931 (23.6%)
Accept or lean toward: virtue ethics 169 / 931 (18.1%)

LessWrong (1090 people, us):
With which of these moral philosophies do you MOST identify?
consequentialist (62.4%)
virtue ethicist (13.9%)
did not believe in morality (13.3%)
deontologist (4.5%)

Full Philpapers.org survey results

Comment author: Dr_Manhattan 06 December 2011 03:55:01PM 16 points [-]

I think "has children" is an (unsurprising but important) omission in the survey.

Comment author: taryneast 06 December 2011 07:29:24PM *  3 points [-]

Possibly less surprising given the extremely low average age... I agree it should be added as a question. Possibly along with an option for "none but want to have them someday" vs "none and don't want any"

Comment author: gwern 04 December 2011 09:48:27PM 16 points [-]

The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60....The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067.

So the 50% age is 25 and the 50% estimate is 2080? A 25 year old has a life expectancy of, what, another 50 years? 2011+50=2061, or 19 years short of the Singularity!

Either people are rather optimistic about future life-extension (despite 'Someone now living will reach age 1000: 23.6'), or the Maes-Garreau Law may not be such a law.

Comment author: michaelsullivan 05 December 2011 07:28:42PM 3 points [-]

I would interpret it as "the latest possible date a prediction can come true and still remain in the lifetime of the person making it", where "lifetime" is the longest typical lifetime rather than an actuarial average. So -- we know lots of people who live to 95, so that seems like it's within our possible lifetime. I certainly could live to 95, even if it's less than a 50/50 shot.

One other bit -- the average life expectancy is for the entire population, but the average life expectancy of white, college educated persons earning (or expected to earn) a first or second quintile income is quite a bit higher, and a very high proportion of LWers fall into that demographic. I took a quick actuarial survey a few months back that suggested my life expectancy given my family age/medical history, demographics, etc. was to reach 92 (I'm currently 43).

Comment author: RomanDavis 05 December 2011 03:27:19AM *  5 points [-]

Or we have family histories that give us good reason to think we'll outlive the mean, even without drastic increases in the pace of technology. That would describe me. Even without that just living to 25 increases your life expectancy by quite a bit as all those really low numbers play heck with an average.

Or we're overconfident in our life expectancy because of some cognitive bias.

Comment author: gwern 05 December 2011 04:28:51AM 7 points [-]

Even without that just living to 25 increases your life expectancy by quite a bit as all those really low numbers play heck with an average.

I should come clean, I lied when I claimed to be guessing about the 50 year old thing; before writing that, I actually consulted one of the usual actuarial tables which specifies that a 25 year old can only expect an average 51.8 more years. (The number was not based on life expectancy from birth.)

Comment author: Desrtopa 05 December 2011 02:22:53PM 3 points [-]

The actuarial table is based on an extrapolation of 2007 mortality rates for the rest of the population's lives. That sounds like a pretty shaky premise.

Comment author: gwern 05 December 2011 04:51:05PM 7 points [-]

Why would you think that? Mortality rates have, in fact, gone upwards in the past few years for many subpopulations (e.g. some female demographics have seen their absolute lifespan expectancy fall), and before that, decreases in old adult mortality were tiny:

life extension from age 65 was increased only 6 years over the entire 20th century; from age 75 gains were only 4.2 years, from age 85 only 2.3 years and from age 100 a single year. From age 65 over the most recent 20 years, the gain has been about a year

(And doesn't that imply deceleration? 20 years is 1/5 of the period, and over the period, 6 years were gained; 1/5 * 6 > 1.)

Which is a shakier premise, that trends will continue, or that SENS will be a wild success greater than, say, the War on Cancer?

Comment author: Desrtopa 05 December 2011 06:13:43PM 2 points [-]

I didn't say that lifespans would necessarily become greater in that period, but several decades is time for the rates to change quite a lot. And while public health has become worse in recent decades in a number of ways (obesity epidemic, lower rates of exercise), technologies have been developed which improve the prognoses for a lot of ailments (we may not have cured cancer yet, but many forms are much more treatable than they used to be).

If all the supposed medical discoveries I hear about on a regular basis were all they're cracked up to be, we would already have a generalized cure for cancer by now and already have ageless mice if not ageless humans. But even if we assume no 'magic bullet' innovations in the meantime, the benefits of incrementally advancing technology are likely to outpace decreases in health, if only because the population can only get so much fatter and more out of shape than it already is before increased proliferation of superstimulus foods and sedentary activities stops making any difference.

Comment author: gwern 05 December 2011 06:50:45PM 2 points [-]

we may not have cured cancer yet, but many forms are much more treatable than they used to be

Which is already built into the quoted longevity increases. (See also the Gompertz curve.)

Comment author: Desrtopa 05 December 2011 06:58:02PM 2 points [-]

Right, my point is that SENS research, which is a fairly new field, doesn't have to be dramatically more successful than cancer research to produce tangible returns in human life expectancy, and the deceleration in increase of life expectancy is most likely due to a negative health trend which is likely not to endure over the entire interval.

Comment author: Oligopsony 05 December 2011 12:54:46AM 14 points [-]

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Maybe, but the sort of fresh meat we get is not at all independent of the old guard, so an initial bias could easily reproduce itself.

Comment author: AlexMennen 04 December 2011 11:21:32PM 14 points [-]

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99) There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

Suggestion: Show these questions in random order to half of people, and show only one of the questions to the other half, to get data on anchoring.

Comment author: RobertLumley 06 December 2011 12:03:31AM 3 points [-]

Or show the questions in one order to a fourth of people, the other order to a fourth of people, one of the questions to another fourth and the other question to the last fourth.

Comment author: Yvain 04 December 2011 07:14:42PM *  40 points [-]

Running list of changes for next year's survey:

  1. Ask who's a poster versus a lurker!
  2. A non-write-in "Other" for most questions
  3. Replace "gender" with "sex" to avoid complaints/philosophizing.
  4. Very very clear instructions to use percent probabilities and not decimal probabilities
  5. Singularity year question should have explicit instructions for people who don't believe in singularity
  6. Separate out "relationship status" and "looking for new relationships" questions to account for polys
  7. Clarify that research is allowed on the probability questions
  8. Clarify possible destruction of humanity in cryonics/antiagathics questions.
  9. What does it mean for aliens to "exist in the universe"? Light cone?
  10. Make sure people write down "0" if they have 0 karma.
  11. Add "want to sign up, but not available" as cryonics option.
  12. Birth order.
  13. Have children?
  14. Country of origin?
  15. Consider asking about SAT scores for Americans to have something to correlate IQs with.
  16. Consider changing morality to PhilPapers version.

Comment author: army1987 04 December 2011 09:43:39PM 28 points [-]

One about nationality (and/or native language)? I guess that would be much more relevant than e.g. birth order.

Comment author: Larks 06 December 2011 02:07:44PM 23 points [-]

Publish draft questions in advance, so we can spot issues before the survey goes live.

Comment author: orthonormal 04 December 2011 07:32:37PM 23 points [-]

Regarding #4, you could just write a % symbol to the right of each input box.

Comment author: army1987 04 December 2011 09:47:50PM *  11 points [-]

BTW, I'd also disallow 0 and 100, and give the option of stating log-odds instead of probability (and maybe encourage people to do that for probabilities <1% and >99%). Someone's “epsilon” might be 10^-4 whereas someone else's might be 10^-30.

Comment author: brilee 05 December 2011 03:32:08PM 6 points [-]

I second that. See my post at http://lesswrong.com/r/discussion/lw/8lr/logodds_or_logits/ for a concise summary. Getting the LW survey to use log-odds would go a long way towards getting LW to start using log-odds in normal conversation.

Comment author: Luke_A_Somers 05 December 2011 04:40:31PM *  5 points [-]

People will mess up the log-odds, though. Non-log odds seem safer.

Odds of ...

Someone living today living for over 1000 subjectively experienced years : No one living today living for over 1000 subjectively experienced years

[ ] : [ ]

Two fields instead of one, but it seems cleaner than any of the other alternatives.

Comment author: army1987 05 December 2011 06:41:35PM *  4 points [-]

The point is not having to type lots of zeros (or of nines) with extreme probabilities (so that people won't weasel out and use ‘epsilon’); having to type 1:999999999999999 is no improvement over having to type 0.000000000000001.
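The conversion being debated in this subthread is mechanically simple; here is a minimal illustrative sketch (my own, not from any commenter, assuming base-10 log-odds as in the linked log-odds post):

```python
import math

def log_odds(p):
    """Convert a probability in (0, 1) to base-10 log-odds."""
    return math.log10(p / (1 - p))

# 0.5 maps to 0, and the fifteen-zeros example above becomes
# a compact -15 instead of 0.000000000000001.
```

One nice side effect: the function is only finite on the open interval (0, 1), so a survey field validated this way would also enforce the proposed ban on answering exactly 0 or 100.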

Comment author: Jack 05 December 2011 06:03:27PM 17 points [-]

We should ask if people participated in the previous surveys.

Comment author: Jack 04 December 2011 08:43:02PM 17 points [-]

I'd love a specific question on moral realism instead of leaving it as part of the normative ethics question. I'd also like to know about psychiatric diagnoses (autism spectrum, ADHD, depression, whatever else seems relevant)-- perhaps automatically remove those answers from a spreadsheet for privacy reasons.

Comment author: NancyLebovitz 05 December 2011 01:27:02AM 13 points [-]

I don't care about moral realism, but psychiatric diagnoses (and whether they're self-diagnosed or formally diagnosed) would be interesting.

Comment author: lavalamp 05 December 2011 08:31:01PM *  10 points [-]

Suggestion: "Which of the following did you change your mind about after reading the sequences? (check all that apply)"

  • [] Religion
  • [] Cryonics
  • [] Politics
  • [] Nothing
  • [] et cetera.

Many other things could be listed here.

Comment author: TheOtherDave 05 December 2011 09:27:55PM 2 points [-]

I'm curious, what would you do with the results of such a question?

For my part, I suspect I would merely stare at them and be unsure what to make of a statistical result that aggregates "No, I already held the belief that the sequences attempted to convince me of" with "No, I held a contrary belief and the sequences failed to convince me otherwise." (That it also aggregates "Yes, I held a contrary belief and the sequences convinced me otherwise." and "Yes, I initially held the belief that the sequences attempted to convince me of, and the sequences convinced me otherwise" is less of a concern, since I expect the latter group to be pretty small.)

Comment author: lavalamp 05 December 2011 10:14:14PM 2 points [-]

Originally I was going to suggest asking, "what were your religious beliefs before reading the sequences?"-- and then I succumbed to the programmer's urge to solve the general problem.

However, I guess measuring how effective the sequences are at causing people to change their mind is something that a LW survey can't do, anyway (you'd need to also ask people who read the sequences but didn't stick around to accurately answer that).

Mainly I was curious how many deconversions the sequences caused or hastened.

Comment author: Jayson_Virissimo 05 December 2011 11:15:51AM *  6 points [-]

I think using your stipulative definition of "supernatural" was a bad move. I would be very surprised if I asked a theologian to define "supernatural" and they replied "ontologically basic mental entities". Even as a rational reconstruction of their reply, it would be quite a stretch. Using such specific definitions of contentious concepts isn't a good idea, if you want to know what proportion of Less Wrongers self-identify as atheist/agnostic/deist/theist/polytheist.

Comment author: CharlesR 05 December 2011 07:44:12AM 6 points [-]

You should clarify in the antiagathics question that the person reaches the age of 1000 without the help of cryonics.

Comment author: selylindi 05 December 2011 07:37:22PM *  5 points [-]

Yet another alternate, culture-neutral way of asking about politics:

Q: How involved are you in your region's politics compared to other people in your region?
A: [choose one]
() I'm among the most involved
() I'm more involved than average
() I'm about as involved as average
() I'm less involved than average
() I'm among the least involved

Comment author: FiftyTwo 05 December 2011 10:21:04PM 3 points [-]

Requires people to self-assess against a cultural baseline, and self-assessments of this sort are notoriously inaccurate. (I predict everyone will think they have above-average involvement.)

Comment author: Prismattic 14 December 2011 04:01:53AM *  2 points [-]

Within a US-specific context, I would eschew these comparisons to a notional average and use the following levels of participation:

0 = indifferent to politics and ignorant of current events
1 = attentive to current events, but does not vote
2 = votes in presidential elections, but irregularly otherwise
3 = always votes
4 = always votes and contributes to political causes
5 = always votes, contributes, and engages in political activism during election seasons
6 = always votes, contributes, and engages in political activism both during and between election seasons
7 = runs for public office

I suspect that the average US citizen of voting age is a 2, but I don't have data to back that up, and I am not motivated to research it. I am a 4, so I do indeed think that I am above average.

Those categories could probably be modified pretty easily to match a parliamentary system by leaving out the reference to presidential elections and just having "votes irregularly" and "always votes".

Editing to add -- for mandatory voting jurisdictions, include a caveat that "spoiled ballot = did not vote"

Comment author: TheOtherDave 14 December 2011 05:01:15AM 2 points [-]

Personally, I'm not sure I necessarily consider the person who runs for public office to be at a higher level of participation than the person who works for them.

Comment author: NancyLebovitz 05 December 2011 10:36:41PM 2 points [-]

I think I have average or below-average involvement.

Maybe it would be better to ask about the hours/year spent on politics.

Comment author: lavalamp 05 December 2011 04:20:39AM 9 points [-]

Suggestion: add "cryocrastinating" as a cryonics option.

Comment author: [deleted] 22 December 2011 01:53:08AM 3 points [-]

Replace "gender" with "sex" to avoid complaints/philosophizing.

http://en.wikipedia.org/wiki/Intersex

Otherwise agreed.

Comment author: [deleted] 07 August 2012 04:25:35PM *  7 points [-]

Strongly disagree with previous self here. I do not think replacing "gender" with "sex" avoids complaints or "philosophizing", and "philosophizing" in context feels like a shorthand/epithet for "making this more complex than prevailing, mainstream views on gender."

For a start, it seems like even "sex" in the sense used here is getting at a mainly-social phenomenon: that of sex assigned at birth. This is a judgement call by the doctors and parents. The biological correlates used to make that decision are just weighed in aggregate; some people are always going to throw an exception. If you're not asking about the size of gametes and their delivery mechanism, the hormonal makeup of the person, their reproductive anatomy where applicable, or their secondary sexual characteristics, then "sex" is really just asking the "gender" question but hazily referring to biological characteristics instead.

Ultimately, gender is what you're really asking for. Using "sex" as a synonym blurs the data into unintelligibility for some LWers; pragmatically, it also amounts to a tacit "screw you" to trans people. I suggest biting the bullet and dealing with the complexity involved in asking that question -- in many situations people collecting that demographic info don't actually need it, but it seems like useful information for LessWrong.

A suggested approach:

Two optional questions with something like the following phrasing:

Optional: Gender (pick what best describes how you identify):

-Male
-Female
-Genderqueer, genderfluid, other
-None, neutrois, agender
-Prefer not to say

Optional: Sex assigned at birth:
-Male
-Female
-Intersex
-Prefer not to say

Comment author: Yvain 07 December 2011 01:11:27PM 3 points [-]

Everyone who's suggesting changes: you are much more likely to get your way if you suggest a specific alternative. For example, instead of "handle politics better", something like "your politics question should have these five options: a, b, c, d, and e." Or instead of "use a more valid IQ measure", something more like "Here's a site with a quick and easy test that I think is valid"

Comment author: prase 05 December 2011 08:01:46PM *  11 points [-]

When asking for race/ethnicity, you should really drop the standard American classification into White - Hispanic - Black - Indian - Asian - Other. From a non-American perspective this looks weird, especially the "White Hispanic" category. A Spaniard is White Hispanic, or just White? If only White, how does the race change when one moves to another continent? And if White Hispanic, why not have also "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?

Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.

Also, making a statistic based on nationalities could help people determine whether there is a chance for a meetup in their country. And it would be nice to know whether LW has regular readers in Liechtenstein, of course.

Comment author: [deleted] 22 December 2011 03:03:10AM *  4 points [-]

I was also...well, not surprised per se, but certainly annoyed to see that "Native American" in any form wasn't even an option. One could construe that as revealing, I suppose.

I don't know how relevant the question actually is, but if we want to track ancestry and racial, ethnic or cultural group affiliation, the following scheme is pretty hard to mess up:

Country of origin: <drop-down list of countries>
Country of residence: <drop-down list with "same as origin" as the first option>
Primary Language: <Form Field>
Native Language (if different): <Form Field>
Heritage language (if different): <Form Field>

Note: A heritage language is one spoken by your family or identity group.

Heritage group:

Diaspora: Means your primary heritage and identity group moved to the country you live in within historical or living memory, as colonists, slaves, workers or settlers.

<radio buttons>
European diaspora ("white" North America, Australia, New Zealand, South Africa, etc)
African diaspora ("black" in the US, West Indian, more recent African emigrant groups; also North African diaspora)
Asian diaspora (includes Turkic, Arab, Persian, Central and South Asian, Siberian native)

Indigenous: Means your primary heritage and identity group was resident to the following location prior to 1400, OR prior to the arrival of the majority culture in antiquity (for example: Ainu, Basque, Taiwanese native, etc):

<radio buttons>
-Africa
-Asia
-Europe
-North America (between Panama and Canada, also includes Greenland and the Caribbean)
-Oceania (including Australia)
-South America

Mixed: Select two or more:

<check boxes>
European Diaspora
African Diaspora
Asian Diaspora
African Indigenous
American Indigenous
Asian Indigenous
European Indigenous
Oceania Indigenous

What the US census calls "Non-white Hispanic" would be marked as "Mixed" > "European Diaspora" + "American Indigenous" with Spanish as either a Native or Heritage language. Someone who identifies as (say) Mexican-derived but doesn't speak Spanish at all would be impossible to tell from someone who was Euro-American and Cherokee who doesn't speak Cherokee, but no system is perfect...

Comment author: wedrifid 22 December 2011 04:22:38AM 2 points [-]

EDIT: Not sure why the formatting won't preserve my linebreaks, apologies for the garbled table.

Put two spaces after a line if you want a linebreak.

Comment author: Konkvistador 08 December 2011 10:09:00AM *  3 points [-]

If only White, how does the race change when one moves to another continent? And if White Hispanic, why not have also "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?

Because we don't have as much useful sociological data on this. Obviously we can start collecting data on any of the proposed categories, but if we're the only ones, it won't much help us figure out how LW differs from what one might expect of a group that fits its demographic profile.

Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.

Much of the difference in the example of Turks is captured by the Muslim family background question.

Comment author: NancyLebovitz 05 December 2011 10:45:54PM 3 points [-]

Offer a text field for race. You'll get some odd answers, not to mention "human" or "other", but you could always use that to find out whether having a contrary streak about race/ethnicity correlates with anything.

If you want people to estimate whether a meetup could be worth it, I recommend location rather than nationality-- some nations are big enough that just knowing nationality isn't useful.

Comment author: Konkvistador 08 December 2011 10:08:08AM *  6 points [-]

Most LessWrong posters and readers are American, perhaps even the vast majority (I am not). Hispanic Americans differ from white Americans differ from black Americans culturally and socio-economically, not just on average but in systematic ways, regardless of whether the person in question defines himself as Irish American, Kenyan American, white American or just plain American. From the US we have robust sociological data that allows us to compare LWers based on this information. The same is true of race in Latin America, parts of Africa and more recently Western Europe.

Nationality is not the same thing as racial or even ethnic identity in multicultural societies.

Considering every now and then people bring up a desire to lower barriers to entry for "minorities" (whatever that means in a global forum), such stats are useful for those who argue on such issues and also for ascertaining certain biases.

Adding a nationality and/or citizenship question would probably be useful though.

Comment author: prase 08 December 2011 06:37:51PM 2 points [-]

Nationality is not the same thing as racial or even ethnic identity in multicultural societies.

I have not said that it is. I was objecting to the arbitrariness of a "Hispanic race": I believe that the difference between Hispanic white Americans and non-Hispanic white Americans is not significantly greater than the difference between either of those groups and non-Americans, and that the number of non-Americans among LW users would be higher than the 3.8% reported for Hispanics. I am not sure what exact sociological data we may extract from the survey, but in any case the comparison to standard American sociological datasets will be problematic, because the LW data are contaminated by the presence of non-Americans and there is no way to say by how much, because people were not asked about that.

Comment author: Pfft 06 December 2011 01:01:38AM 4 points [-]

Replacing gender with sex seems like the wrong way to go to me. For example, note how Randall Munroe asked about sex, then regretted it.

Comment author: RobertLumley 19 December 2011 04:11:12PM 2 points [-]

A series of four questions on each Myers-Briggs indicator would be good, although I'm sure the data would be woefully unsurprising. Perhaps link to an online test for people who don't already know their type.

Comment author: MixedNuts 04 December 2011 07:30:56PM 10 points [-]

You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?

Comment author: FiftyTwo 05 December 2011 10:19:33PM 4 points [-]

Why not ask for both?

Comment author: Emile 06 December 2011 11:58:50AM *  3 points [-]

Because the two are so highly correlated that having both would give us almost no extra information. One goal of the survey should be to maximize the useful-info-extracted / time-spent-on-it ratio, hence also the avoidance of write-ins for many questions (which make people spend more time on the survey, to get results that are less exploitable) (a write-in for gender works because people are less likely to write a manifesto for that than for politics).

Comment author: wedrifid 06 December 2011 10:53:05AM 5 points [-]

You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?

Only if people here are less interested in applying probability theory than they are in philosophizing about gender... Oh.

Comment author: timtyler 09 December 2011 05:38:17PM *  13 points [-]

I graphed the "Singularity" results. It's at the bottom of the page - or see here:

Comment author: Armok_GoB 09 December 2011 06:13:42PM 8 points [-]

Just you look at all that ugly anchoring at 2100...

Comment author: wedrifid 09 December 2011 06:52:13PM *  15 points [-]

Just you look at all that ugly anchoring at 2100...

And yet if people don't round off at significant figures there are another bunch who will snub them for daring to provide precision they cannot justify.

Comment author: timtyler 09 December 2011 08:05:12PM *  3 points [-]

In this case we can rebuke the stupid snubbers for not properly reading the question.

Comment author: army1987 09 December 2011 08:39:45PM 4 points [-]

(But still, I'd like to ask whoever answered "28493" why they didn't say 28492 or 28494 instead.)

Comment author: Konkvistador 06 March 2012 04:50:48PM 6 points [-]

2100 seems to be the Schelling point for "after I'm dead" answers.

Comment author: army1987 09 December 2011 06:30:33PM 4 points [-]

Who answered 2010? Seriously?

Comment author: timtyler 09 December 2011 08:03:51PM *  9 points [-]

Who answered 2010? Seriously?

To quote from the description here:

Note: each point (rather misleadingly) represents data for the next 10 years.

So: it represents estimates of 2012, 2015 and 2016.

However: someone answered "1990"!

This is probably the "NSA has it chained in the basement" scenario...

Comment author: ChrisHallquist 05 March 2012 04:30:20AM 4 points [-]

Alternatively, the singularity happened in 1990 and the resulting AI took over the world. Then it decided to run some simulations of what would have happened if the singularity hadn't occurred then.

Comment author: gwern 09 December 2011 07:43:48PM *  15 points [-]

Unfortunately, army1987, no one can be told when the Singularity is. You have to see it for yourself. This is your last chance; after this, there is no turning back. You choose to downvote... and the story ends. You wake in your bed and believe whatever you want to believe. You choose to upvote... and you stay in LessWrong.

Comment author: TheOtherDave 09 December 2011 07:20:06PM 2 points [-]

I wonder how this would compare to the results for "pick a year at random."

Comment author: Bugmaster 06 December 2011 01:10:21AM 12 points [-]

I enjoy numbers as much as the next guy, but IMO this article is practically crying out for more graphs. The Google Image Chart API might be useful here.

Comment author: Yoreth 06 December 2011 11:16:28AM *  11 points [-]

What's the relation between religion and morality? I drew up a table to compare the two. This shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells corresponding to the greatest percentage across the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than any other religious group).

Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys" and vice-versa.

  • [BLANK]; [BLANK]
  • Atheist and not spiritual; Consequentialist
  • Agnostic; No such thing
  • Deist/Pantheist/etc.; Virtue ethics
  • Committed theist; Deontology

(I didn't do any statistical analysis, so be careful with the low-population groups.)

Comment author: michaelcurzi 05 December 2011 09:00:44PM 11 points [-]

I would like to see this question on a future survey:

Are you genetically related to anyone with schizophrenia? (yes / no) How distant is the connection? (nuclear family / cousins, aunts and uncles / further / no connection)

I've repeatedly heard that a significant number of rationalists are related to schizophrenics.

Comment author: michaelsullivan 05 December 2011 08:29:42PM 9 points [-]

Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke).

It might be a fluke, but as another respondent who got many upvotes pointed out, it could be that community veterans were more skeptical of the many, many things that have to go right for your scenario to happen, even if we generally believe that cryonics is scientifically feasible and worth working on.

When you say "the average person cryonically frozen today will at some point be awakened", that means not only that the general idea is workable, but that we are currently using an acceptable method of preserving tissues, and that a large portion of current arrangements will continue to preserve those bodies/tissues until post singularity, however long that takes, and that whatever singularity happens will result in people willing to expend resources fulfilling those contracts (so FAI must beat uFAI). Add all that up, and it can easily make for a pretty small probability, even if you do "believe in cryonics" in the sense of thinking that it is potentially sound tech.

My interpretation of this result (with low confidence, as 'fluke' is also an excellent explanation) is that community veterans are better at working with probabilities based on complex conjunctions, and better at seeing the complexity of conjunctions based on written descriptions.

Comment author: wedrifid 05 December 2011 04:00:03AM 9 points [-]

These averages strike me as almost entirely useless! If only half of the people taking the survey are Less Wrong participants, then the extra noise will overwhelm any signal when the probabilities returned by the actual members are near either extreme. Using averaging of probabilities (as opposed to, say, log-odds) is dubious enough even when not throwing in a whole bunch of randoms!

(So thank you for providing the data!)

Comment author: Unnamed 04 December 2011 08:48:25PM 9 points [-]

It looks like about 6% of respondents gave their answers in decimal probabilities instead of percentages. 108 of the 930 people in the data file didn't have any answers over 1 for any of the probability questions, and 52 of those did have some answers (the other 56 left them all blank), which suggests that those 52 people were using decimals (and that is 6% of the 874 who answered at least one of the questions). So to get more accurate estimates of the means for the probability questions, you should either multiply those respondents' answers by 100, exclude those respondents when calculating the means, or multiply the means that you got by 1.06.

=IF(MAX(X2:AH2)<1.00001,1,0) is the Excel formula I used to find those 108 people (in row 2, then copy and pasted to the rest of the rows)
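The same heuristic can be expressed outside of Excel; here is a hypothetical Python sketch (the 1.00001 threshold and the multiply-by-100 fix are both taken from the comment above, the function names are mine):

```python
def used_decimals(answers):
    """Guess whether a respondent answered in decimal probabilities:
    every non-blank answer is at most 1 (with a little float slop),
    mirroring the Excel formula MAX(...) < 1.00001."""
    given = [a for a in answers if a is not None]
    return bool(given) and max(given) < 1.00001

def normalize(answers):
    """Rescale a decimal-style respondent's row to percentages;
    leave percentage-style rows untouched (None = blank answer)."""
    if used_decimals(answers):
        return [None if a is None else a * 100 for a in answers]
    return list(answers)
```

Applied per respondent row, this implements the first of the three correction options (multiplying the suspect answers by 100) rather than excluding respondents or rescaling the means.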

Comment author: J_Taylor 04 December 2011 08:27:29PM *  23 points [-]

The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)

??

Comment author: Unnamed 04 December 2011 09:25:19PM *  21 points [-]

P(Supernatural) What is the probability that supernatural events, defined as those involving ontologically basic mental entities, have occurred since the beginning of the universe?

P(God) What is the probability that there is a god, defined as a supernatural (see above) intelligent entity who created the universe?

So deism (God creating the universe but not being involved in the universe once it began) could make p(God) > p(Supernatural).

Looking at the the data by individual instead of in aggregate, 82 people have p(God) > p(Supernatural); 223 have p(Supernatural) > p(God).

Comment author: J_Taylor 04 December 2011 09:31:04PM 7 points [-]

Given this, the numbers no longer seem anomalous. Thank you.

Comment author: byrnema 05 December 2011 08:38:07PM 2 points [-]

Could someone break down what is meant by "ontologically basic mental entities"? Especially, I'm not certain of the role of the word 'mental'..

Comment author: Nornagest 05 December 2011 08:48:38PM *  8 points [-]

It's a bit of a nonstandard definition of the supernatural, but I took it to mean mental phenomena as causeless nodes in a causal graph: that is, that mental phenomena (thoughts, feelings, "souls") exist which do not have physical causes and yet generate physical consequences. By this interpretation, libertarian free will and most conceptions of the soul would both fall under supernaturalism, as would the prerequisites for most types of magic, gods, spirits, etc.

I'm not sure I'd have picked that phrasing, though. It seems to be entangled with epistemological reductionism in a way that might, for a sufficiently careful reading, obscure more conventional conceptions of the "supernatural": I'd expect more people to believe in naive versions of free will than do in, say, fairies. Still, it's a pretty fuzzy concept to begin with.

Comment author: Larks 04 December 2011 10:48:47PM *  8 points [-]
Comment author: Vladimir_Nesov 05 December 2011 04:27:05PM 2 points [-]

Are the questions for the 2009 survey available somewhere?

Comment author: Yvain 05 December 2011 08:21:01PM 3 points [-]
Comment author: peter_hurford 13 December 2011 06:23:53AM 7 points [-]

I would be interested in a question that asked whether people were pescatarian / vegetarian / vegan, and another question as to whether this was done for moral reasons.

Comment author: Morendil 04 December 2011 10:42:35PM 7 points [-]

I am officially very surprised at how many that is. Also officially, poorly calibrated at both the 50% (no big deal) and the 90% (ouch, ouch, ouch) confidence levels.

Comment author: Yvain 04 December 2011 10:48:07PM 4 points [-]

You're okay. When I asked the question about the number of responses, there were only 970 :)

Comment author: mindspillage 04 December 2011 09:59:12PM 7 points [-]

Are there any significant differences in gender or age (or anything else notable) between the group who chose to keep their responses private and the rest of the respondents?

Comment author: NancyLebovitz 04 December 2011 08:12:39PM 18 points [-]

Michael Vassar has mentioned to me that the proportion of first/only children at LW is extremely high. I'm not sure whether birth order makes a big difference, but it might be worth asking about. By the way, I'm not only first-born, I'm the first grandchild on both sides.

Questions about akrasia-- Do you have no/mild/moderate/serious problems with it? Has anything on LW helped?

I left some of the probability questions blank because I realized I had no idea of a sensible probability, especially for whether we're living in a simulation.

It might be interesting to ask people whether they usually vote.

The link to the survey doesn't work because the survey is closed-- could you make the text of the survey available?

Comment author: steven0461 04 December 2011 08:59:15PM 8 points [-]

There was a poll about firstborns.

Comment author: amcknight 06 December 2011 03:37:07AM 6 points [-]

I'm a twin that's 2 minutes younger than first-born. Be careful how you ask about birth order.

Comment author: Eliezer_Yudkowsky 05 December 2011 01:24:23AM 10 points [-]

By the way, I'm not only first-born, I'm the first grandchild on both sides.

So am I! I wonder if being the first-born is genetically heritable.

Comment author: MixedNuts 05 December 2011 01:32:22AM 14 points [-]

Yes. Being first-born is correlated with having few siblings, which is correlated with parents with low fertility, which is genetically inherited from grandparents with low fertility, which is correlated with your parents having few siblings, which is correlated with them being first-born.

Comment author: Zack_M_Davis 05 December 2011 04:24:25AM *  8 points [-]

is correlated with [...] which is correlated with [...] which is genetically inherited from [...] which is correlated with

I agree with your conclusion that the heritability of firstbornness is nonzero, but I'm not sure this reasoning is valid. (Pearson) correlation is not, in general, transitive: if X is correlated with Y and Y is correlated with Z, it does not necessarily follow that X is correlated with Z unless the squares of the correlation coefficients between X and Y and between Y and Z sum to more than one.

Actually calculating the heritability of firstbornness turns out to be a nontrivial math problem. For example, while it is obvious that having few siblings is correlated with being firstborn, it's not obvious to me exactly what that correlation coefficient should be, nor how to calculate it from first principles. When I don't know how to solve a problem from first principles, my first instinct is to simulate it, so I wrote a short script to calculate the Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n. It turns out that the correlation decreases as n gets larger (from ~0.58 for n=2 down to ~0.31 for n=50; figures corrected in an edit), which fact probably has an obvious-in-retrospect intuitive explanation which I am somehow having trouble articulating explicitly ...
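
Zack's script isn't included in the thread, but a minimal reconstruction might look like the following. This is a sketch, not his original code; it assumes a family size is drawn uniformly from 1..n and a child is then drawn uniformly within that family.

```python
import random

def sibling_firstborn_correlation(n, trials=200_000, seed=0):
    """Monte Carlo estimate of the Pearson correlation between
    number of siblings and not-being-a-firstborn, with family
    size uniform on the integers 1..n."""
    rng = random.Random(seed)
    siblings, not_first = [], []
    for _ in range(trials):
        size = rng.randint(1, n)         # family size, uniform on 1..n
        position = rng.randint(1, size)  # birth order within the family
        siblings.append(size - 1)
        not_first.append(1 if position > 1 else 0)
    m_s = sum(siblings) / trials
    m_f = sum(not_first) / trials
    cov = sum((s - m_s) * (f - m_f)
              for s, f in zip(siblings, not_first)) / trials
    var_s = sum((s - m_s) ** 2 for s in siblings) / trials
    var_f = sum((f - m_f) ** 2 for f in not_first) / trials
    return cov / (var_s * var_f) ** 0.5
```

Under these assumptions the correlation comes out near 0.58 for n=2 and near 0.31 for n=50; a child-weighted sampling scheme (larger families contribute more children) would give different numbers.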

Ultimately, however, other priorities prevent me from continuing this line of inquiry at the present moment.

Comment author: dbaupp 06 December 2011 12:36:21AM 2 points [-]

Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n [...] ~0.57 for n=1

I'm confused: does this make sense for n=1? (Your code suggests that that should be n=2, maybe?)

Comment author: Jonathan_Graehl 06 December 2011 09:08:01PM 6 points [-]

Most respondents (75%) believe there's at least a 10% chance of a 90% culling of the human population sometime in the next 90 years.

If we're right, it's incumbent on us to consider sacrificing significant short-term pleasure and freedom to reduce this risk. I haven't heard any concrete proposals that seem worth pushing, but the proposing and evaluating needs to happen.

Comment author: ksvanhorn 10 December 2011 11:46:58PM 2 points [-]

What makes you think that sacrificing freedom will reduce this risk, rather than increase it?

Comment author: dlthomas 06 December 2011 09:15:31PM 2 points [-]

If we have any sense of particular measures we can take that will significantly reduce that probability.

Comment author: Jonathan_Graehl 06 December 2011 09:00:43PM 6 points [-]

At least one person was extremely confident in the year of publication of a different Principia Mathematica :) It's easy to forget about the chance that you misheard/misread someone when communicating beliefs.

Comment author: Konkvistador 05 December 2011 08:51:37PM *  5 points [-]

2009:

  • 45% libertarianism
  • 38.4% liberalism
  • 12.3% socialism
  • 4.3% (6) conservatism
  • "not one person willing to own up to being a commie."

2011:

  • liberalism 34.5% (376)
  • libertarianism 32.3% (352)
  • socialism 26.6% (290)
  • conservatism 2.8% (30)
  • communism 0.5% (5)

I generally expect LW to grow less metacontrarian on politics the larger it gets, so this change didn't surprise me. An alternative explanation (which now that I think of it is more likely) is that the original core group of LWers wasn't just more metacontrarian than usual, but probably also more libertarian in general.

Comment author: taryneast 06 December 2011 07:31:46PM 5 points [-]

And the large increase in population seems to include a large portion of students... who, in my experience, often have a higher-than-average share of socialist leanings.

Comment author: Nornagest 06 December 2011 07:51:59PM *  2 points [-]

The relative proportions of liberalism, libertarianism, and conservatism haven't changed much, and I don't think we can say much about five new communists; by far the most significant change appears to be the doubled proportion of socialists. So this doesn't look like a general loss of metacontrarianism to me.

I'm not sure how to account for that change, though. The simplest explanation seems to be that LW's natural demographic turns out to include a bunch of left-contrarian groups once it's spread out sufficiently from OB's relatively libertarian cluster, but I'd also say that socialism's gotten significantly more mainstream-respectable in the last couple of years; I don't think that could fully account for the doubling, but it might play a role.

Comment author: gwern 04 December 2011 09:35:41PM 13 points [-]

The other 72.3% of people who had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

I've long been interested in whether Eliezer's fanfiction is an effective strategy, since it's so attention-getting (when Eliezer popped up in The New Yorker recently, pretty much his whole blurb was a description of MoR).

Of the listed strategies, only 'blogs' was greater than MoR. The long tail is particularly worrisome to me: LW/OB have frequently been linked in or submitted to Reddit and Hacker News, but those two account for only 14 people? Admittedly, weak SEO in the sense of submitting links to social news sites is a lot less time-intensive than writing a 1200-page Harry Potter fanfic, and Louie has been complaining about us not doing even that, but still, the numbers look to be in MoR's favor.

Comment author: Darmani 05 December 2011 03:39:07AM 5 points [-]

Keep in mind that many of these links were a long time ago. I came here from Overcoming Bias, but I came to Overcoming Bias from Hacker News.

Comment author: wedrifid 05 December 2011 03:36:10AM 12 points [-]

So we can only prove that 519 people post on Less Wrong.

Where by 'prove' we mean 'somebody implied that they did on an anonymous online survey'. ;)

Comment author: kilobug 05 December 2011 10:53:07AM 10 points [-]

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Comment author: XiXiDu 05 December 2011 11:24:26AM *  10 points [-]

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Nice idea! I am interested in such statistics.

Comment author: Yvain 05 December 2011 03:16:27PM 13 points [-]

You mean, as opposed to that kind of proof where we end up with a Bayesian probability of exactly one? :)

Comment author: FiftyTwo 05 December 2011 10:18:27PM 4 points [-]

Older people were less likely to believe in transhumanist claims,

This seems to contradict the hypothesis that people's belief in the plausibility of immortality is linked to their own nearness to/fear of death. Were there any correlations with the expected singularity date?

Relevant SMBC (summary: futurists' predicted date for the discovery of immortality is slightly before the end of their own expected lifespan)

Comment author: ChrisHallquist 05 December 2011 08:51:58PM *  10 points [-]

Didn't the IQ section say to only report a score if you've got an official one? The percentage of people not answering that question should have been pretty high if they followed that instruction. How many people actually answered it?

Also: I've already pointed out that the morality question was flawed, but after thinking about it more, I've realized how badly flawed it was. Simply put, people shouldn't have had to choose between consequentialism and moral anti-realism, because there are a number of prominent living philosophers who combine the two.

JJC Smart is an especially clear example, but there are others. Joshua Greene's PhD thesis was mainly a defense of moral anti-realism, but also had a section titled "Hurrah for Utilitarianism!" Peter Singer is a bit fuzzy on meta-ethics, but has flirted with some kind of anti-realism.

And other moral anti-realists take positions on ethical questions without being consequentialists; see e.g. J.L. Mackie's book Ethics. Really, I have to stop myself from giving examples now, because they can be multiplied endlessly.

So again: normative ethics and meta-ethics are different issues, and should be treated as such on the next survey.

Comment author: army1987 04 December 2011 09:41:23PM 9 points [-]

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140.

...

The average person was 37.6% sure their IQ would be above average - underconfident!

Maybe people were expecting the average IQ to turn out to be about the same as in the previous survey, and... (Well, I kind-of was, at least.)

Comment author: steven0461 04 December 2011 10:50:30PM 8 points [-]

As with the last survey, it's amazing how casually many people assign probabilities like 1% and 99%. I can understand in a few cases, like the religion questions, and Fermi-based answers to the aliens in the galaxy question. But on the whole it looks like many survey takers are just failing the absolute basics: don't assign extreme probabilities without extreme justification.

Comment author: Eugine_Nier 05 December 2011 04:03:03AM 6 points [-]

On the other hand, conjunctive bias exists. It's not hard to string together enough conjunctions that the probability of the statement should be in an extreme range.

Comment author: steven0461 05 December 2011 04:21:49AM 4 points [-]

Does this describe any of the poll questions?

Comment author: Desrtopa 05 December 2011 02:12:01PM 11 points [-]

Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)

I'm rather shocked that the numbers on this are so low. It's higher than polls indicate as the degree of acceptance in America, but then, we're dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?

Comment author: Konkvistador 06 December 2011 08:45:13AM 8 points [-]

Perhaps they also want to signal a sentiment similar to that of Freeman Dyson:

I believe global warming is grossly exaggerated as a problem. It's a real problem, but it's nothing like as serious as people are led to believe. The idea that global warming is the most important problem facing the world is total nonsense and is doing a lot of harm. It distracts people's attention from much more serious problems.

Comment author: kilobug 05 December 2011 02:24:59PM 15 points [-]

I'm also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn't say that 70.7% of LWers believe in "anthropogenic global warming"; that figure is an average over probabilities. If you look at the quartiles, even the 25% quartile is at p = 55%, meaning that fewer than 25% of LWers give it a probability below one half.

It seems to indicate that almost all LWers believe it is true (p > 0.5), but many of them do so with low confidence, either because they haven't studied the field enough (and therefore refuse to put too much strength in their belief) or because they consider the field too complicated or not well enough understood to justify a strong probability.

Comment author: Desrtopa 05 December 2011 02:36:23PM 2 points [-]

That's how I interpreted it in the first place; "believe in anthropogenic global warming" is a much more nebulous proposition anyway. But while anthropogenic global warming doesn't yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.

Comment author: thomblake 05 December 2011 04:51:26PM 6 points [-]

astonishingly low levels of familiarity with the data.

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

I think I answered somewhere around 70%; while I've read a bit about it, there are plenty of dissenters and the proposition was a bit vague.

The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.

Comment author: Desrtopa 05 December 2011 05:53:49PM 6 points [-]

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large scale changes in global climate is by no means simply a political question.

If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don't, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.

Comment author: Konkvistador 08 December 2011 09:02:35AM *  4 points [-]

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high-trust society, if one wants to help people, one should spend more resources on educating them about global warming than about tomatoes having genes.

It is for their own good, but not their personal good. Like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay: if you care about them, it can be rational to take the shot yourself if that's an effective signal to them that you aren't trying to fool them. By default they will be modelling you as one of them and interpreting your actions accordingly. Likewise, if you happen to be good enough at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot.

Humans are often predictably irrational. The arational processes that maintain the high-trust equilibrium can be used to let you make withdrawals of cooperative behaviour from the bank when the rational incentives just aren't there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational, game-theory-savvy agent is a pretty good benchmark for cost analysis. You naturally need to think about the cost, to quickly gauge whether the level of trust in a society is high enough and, if you burden it in this way, whether the equilibrium is still stable in the medium term.

If it's not, teach them about tomatoes.

Comment author: Konkvistador 06 December 2011 08:38:26AM *  9 points [-]

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

I disagree actually.

For most people, neither global warming nor tomatoes having genes matters much. But if I had to choose, I'd say knowing a thing or two about basic biology has some impact on how you make your choices with regard to, say, healthcare, how much you spend on groceries, or what your future shock level is.

Global warming, even if it does have a big impact on your life, will not be much affected by your knowing anything about it; pretty much anything an individual could do against it has a very small impact on how global warming will turn out. By contrast, saving $50 a month, or a small improvement in the odds of choosing the better treatment, has a pretty measurable impact on him.

Taking global warming as a major threat for now (full disclosure: I think global warming, is not a threat to human survival though it may contribute to societal collapse in a worst case scenario), it is quite obviously a tragedy of the commons problem.

There is no incentive for an individual to do anything about it or even know anything about it, except to conform to a "low carbon footprint is high status" meme in order to derive benefit in his social life and feeling morally superior to others.

Comment author: xv15 06 December 2011 04:43:41AM *  8 points [-]

Wait a sec. Global warming can be important for everyday life without it being important that any given individual know about it for everyday life. In the same way that matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can't have any real effect on it. I think that's the spirit in which thomblake means it's a political matter. For most of us, the earth will get warmer or it won't, and it doesn't affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn't change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.

(It's a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had "genes" or not.)

This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can't have any real effect on.

Comment author: Oligopsony 05 December 2011 03:33:15PM 3 points [-]

What should be astonishing about zero familiarity with the data, beyond knowing that there's a scientific consensus?

Comment author: Desrtopa 05 December 2011 05:42:19PM 4 points [-]

I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.

Comment author: Oscar_Cunningham 04 December 2011 07:43:43PM *  7 points [-]

There were a few significant demographics differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy.

This bit is interesting. If our age and gender affect our beliefs, then at least some of us are doing it wrong. Update accordingly. I'm young and male, so I should give less credence to global warming and more credence to nearby aliens.

Comment author: [deleted] 04 December 2011 08:11:19PM 12 points [-]

You have that backwards. If you're young and male, you should suspect that part of your confidence in global warming and lack of aliens is due to your demographics, and therefore update away from global warming and toward aliens.

Comment author: amacfie 05 December 2011 02:21:33PM 13 points [-]

So people just got silly with the IQ field again.

Comment author: MixedNuts 06 December 2011 07:24:35PM 6 points [-]

Or people only have old results from when they were kids, when being at all bright quickly gets you out of range.

Comment author: PeterisP 05 December 2011 10:04:41PM 5 points [-]

Actually, how should one measure one's own IQ? I wouldn't know a reasonable place to start looking for it, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. I'd especially want to avoid anything restricted to a single location like the USA; this makes SATs useless, at least for me.

Comment author: Jack 05 December 2011 06:09:12PM 11 points [-]

I'd almost rather see SAT scores at this point.

Comment author: Nornagest 05 December 2011 06:29:51PM *  15 points [-]

That'd be problematic for people outside the US, unfortunately. I don't know the specifics of how most of the various non-US equivalents work, but I expect conversion to bring up issues; the British A-level exams, for example, have a coarse enough granularity that they'd probably taint the results purely on those grounds. Especially if the average IQ around here really is >= 140.

Comment author: Prismattic 05 December 2011 09:45:50PM 5 points [-]

SAT scores are going to be of limited utility when so many here are clustered at the highest IQs. A lot more people get perfect or near-perfect SAT scores than get 140+ IQ scores.

Comment author: cata 05 December 2011 09:53:36PM *  11 points [-]

Yeah, but the difference is that the majority of people actually have SAT scores. It's pretty easy to go through your life without ever seeing the results of an IQ test, but I suspect there's a big temptation to just give a perceived "reasonable" answer anyway. I would rather have a lot of accurate results that are a little worse at discriminating than a lot of inaccurate results which would hypothetically be good at discriminating if they were accurate.

Comment author: ArisKatsaris 06 December 2011 01:21:23PM 22 points [-]

Yeah, but the difference is that the majority of people actually have SAT scores.

A majority of US people perhaps. Aargh the Americano-centrism, yet again.

Two obvious questions missing from the survey btw are birth country, and current country of residence (if different).

Comment author: pjeby 06 December 2011 05:34:01AM 10 points [-]

Note that in addition to being US-centric, the SAT scoring system has recently changed. When I took the SAT's, the maximum score was 1600, as it had two sections. Now it has 3 sections, with a maximum score of 2400. So my SAT score is going to look substantially worse compared to people who took it since 2005... and let's not even get into the various "recentering" changes in the 80's and 90's.

Comment author: [deleted] 08 December 2011 05:13:21AM 4 points [-]

Unless there's a particular reason to expect LWers in the U.S. to be significantly smarter or dumber than other LWers, it should be a useful sample.

Comment author: paper-machine 05 December 2011 04:36:16PM 3 points [-]

Anyone expecting otherwise was also being silly.

Comment author: michaelsullivan 05 December 2011 08:29:52PM *  3 points [-]

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1) Some revealed religion is true: 3.40, (0, 0, .15)

This result is not exactly surprising to me, but it is odd given my reading of the questions. It may seem at first glance like a conjunction fallacy to rate the second question's probability much higher than the first's (which I did). But in fact the god question, like the supernatural question, referred to a very specific thing ("ontologically basic mental entities"), while the "some revealed religion is more or less true" question was utterly vague about how to define either "revealed religion" or "more or less true".

As I remarked in comments on the survey, depending on my assumptions about what those two things mean, my potential answers ranged from epsilon to 100-epsilon. A bit of clarity would be useful here.

Also, given the large number of hard atheists on LW, it might be interesting to look at finer grained data for the 25+% of survey respondents who did not answer '0' for all three "religion" questions.

Comment author: MarkusRamikin 05 December 2011 10:29:43AM 3 points [-]

The other 72.3% of people who had to find Less Wrong the hard way.

Is it just me, or is there something not quite right about this as an English sentence?

Comment author: pedanterrific 05 December 2011 10:33:46AM 5 points [-]

Could be fixed by adding 'of'

Of the other 72.3% of people who had to find Less Wrong the hard way,

or removing 'who'

The other 72.3% of people had to find Less Wrong the hard way.

Comment author: gwern 04 December 2011 09:39:40PM 6 points [-]

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

You have to admit, that's pretty awful. There's only a 20% difference, is that so?

Comment author: SilasBarta 05 December 2011 07:13:27PM *  5 points [-]

Percentage point difference in belief probability isn't all that meaningful. 50% to 51% is a lot smaller confidence difference than 98% to 99%.

69.4% probability means 3.27 odds; 41.2% probability means 1.70 odds.

That means that, in the aggregate, survey takers find (3.27/1.70) = 1.924 -> 0.944 more bits of evidence for life somewhere in the universe, compared to somewhere in the galaxy.

Is that unreasonably big or unreasonably small?

EDIT: Oops, I can't convert properly. That should be 2.27 odds and 0.70 odds, an odds ratio of 3.24, or 1.70 more bits.
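
The conversion in the edit can be checked mechanically; a quick sketch (function names are mine):

```python
import math

def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1 - p)

def extra_bits(p_broad, p_narrow):
    """Bits of evidence separating two probability assignments,
    measured as log base 2 of their odds ratio."""
    return math.log2(odds(p_broad) / odds(p_narrow))
```

Here odds(0.694) is about 2.27 and odds(0.412) about 0.70, an odds ratio of about 3.24, i.e. roughly 1.70 bits, matching the corrected figures.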

Comment author: dlthomas 06 December 2011 08:53:44PM 2 points [-]

Note that the top 25% put 99 or above for Universe. Of those, I would be surprised if there weren't a big chunk that put 100 (indicating 100 - epsilon, of course). This is not weighted appropriately. Likewise for the bottom 25% for Galaxy.

Basically, "If you hugely truncate the outside edges, the average probabilities wind up too close together" should be entirely unsurprising.

Comment author: wedrifid 05 December 2011 04:09:05AM *  3 points [-]

You have to admit, that's pretty awful. There's only a 20% difference, is that so?

Fear not! The 28% difference in the averages is meaningless. The difference I see in that quote is (90-30), which isn't nearly so bad - and the "1" is also rather telling. More importantly, by contrasting the averages with the medians and quartiles we can get something of a picture of what the data looks like. Enough to make a guess as to how it would change if we cut the noise by sampling only, say, those with >= 200 reported karma.

(Note: I am at least as shocked by the current downvote of this comment as gwern is by his "20%", and for rather similar reasons.)

Comment author: Konkvistador 05 December 2011 08:39:31PM *  4 points [-]

It would be neat if you posted a link to a downloadable spreadsheet like last time. I'd like to look at the data, if I happened to miss it via careless reading, sorry for bothering you.

Edit: Considering this is downvoted, I guess I must have missed it. I skimmed the post again and I'm just not seeing it; can someone please help with a link? :)

2nd Edit: Sorry missed it the first time!

Comment author: Emile 05 December 2011 10:19:36PM 4 points [-]
Comment author: army1987 09 December 2011 08:46:44PM 2 points [-]

What is the last column of the .xls file about?

Comment author: Unnamed 04 December 2011 07:48:02PM 2 points [-]

Could you make a copy of the survey (with the exact wordings of all the questions) available for download?

Comment author: [deleted] 05 December 2011 11:38:28PM 3 points [-]

(9.9%) were atheist and spiritual

I thought you meant spiritual as in "Find something more important than you are and dedicate your life to it." Did I misinterpret?

Comment author: taryneast 06 December 2011 07:35:39PM 5 points [-]

If an interpretation wasn't given, then you were free to make up whatever meant something to you. To contrast with yours, I interpreted spiritualism in this sense to match "non-theistic spiritualism", e.g. nature-spirits, transcendental meditation, wish-magic and the like.

Comment author: Polymeron 11 December 2011 11:52:32AM 4 points [-]

It seems to me that a reasonable improvement for the next survey would be to lower the ambiguity of these categories.

Comment author: Armok_GoB 04 December 2011 08:17:56PM 2 points [-]

This made my trust in the community and my judgement of its average quality go down a LOT, and my estimate of my own value to the community, SIAI, and the world in general go up a LOT.

Comment author: Emile 04 December 2011 08:27:53PM *  11 points [-]

Which parts, specifically?

(it didn't have an effect like that on me, I didn't see that many surprising things)

Comment author: Armok_GoB 04 December 2011 11:45:58PM 3 points [-]

I expected almost everyone to agree with Eliezer on most important things, to have been here for a long time, to have read all the sequences, to spend lots of time here... In short, to be like the top posters seem to be (and even with them the halo effect might be involved), except with lower IQ and/or writing skill.

Comment author: XiXiDu 05 December 2011 09:46:13AM *  25 points [-]

This made my trust in the community and my judgement of its average quality go down a LOT...

I expected almost everyone to agree with Eliezer on most important things...

Alicorn (top-poster) doesn't agree with Eliezer about ethics. PhilGoetz (top-poster) doesn't agree with Eliezer. Wei_Dai (top-poster) doesn't agree with Eliezer on AI issues. wedrifid (top-poster) doesn't agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.

I am pretty sure Yvain doesn't agree with Eliezer on quite a few things too (too lazy to look it up now).

Generally there are a lot of top-notch people who don't agree with Eliezer. Robin Hanson for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez or Katja Grace who has been a visiting fellow.

But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

Comment author: wallowinmaya 05 December 2011 12:26:38PM 6 points [-]

Holden Karnofsky has read all of the Sequences?

Comment author: XiXiDu 05 December 2011 06:39:35PM *  12 points [-]

Holden Karnofsky has read all of the Sequences?

I wrote him an email to make sure. Here is his reply:

I've read a lot of the sequences. Probably the bulk of them. Possibly all of them. I've also looked pretty actively for SIAI-related content directly addressing the concerns I've outlined (including speaking to different people connected with SIAI).

Comment author: beoShaffer 05 December 2011 08:04:27PM 5 points [-]

take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

IIRC Peter Thiel can't give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.

Comment author: gwern 05 December 2011 08:25:24PM 5 points [-]

Right. I remember the fundraising appeals about this: if Thiel donates too much, SIAI begins to fail the 501c3 regs, that it "receives a substantial part of its income, directly or indirectly, from the general public or from the government. The public support must be fairly broad, not limited to a few individuals or families."

Comment author: Armok_GoB 05 December 2011 02:49:18PM 11 points [-]

I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples... I'm not sure what's going on, but I'm sure that whatever it is it's my fault and extremely unflattering to my ability as a rationalist.

How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I'm too lazy to form my own opinions! :p

Comment author: NancyLebovitz 05 December 2011 04:07:30PM 7 points [-]

I just wait, especially considering that which interpretation of QM is correct doesn't have urgent practical consequences.

Comment author: satt 06 December 2011 03:08:09AM 2 points [-]

This is strange because I technically knew each of those individual examples... I'm not sure what's going on,

Sounds like plain old accidental compartmentalization. You didn't join the dots until someone else pointed out they made a line. (Admittedly this is just a description of your surprise and not an explanation, but hopefully slapping a familiar label on it makes it less opaque.)

Comment author: Kaj_Sotala 05 December 2011 10:33:19AM *  19 points [-]

I expected almost everyone to agree with Eliezer on most important things

That would have made my trust in the community go down a lot. Echo chambers rarely produce good results.

Comment author: komponisto 05 December 2011 11:02:26AM 5 points [-]

Surely it depends on which questions are meant by "important things".

Comment author: Kaj_Sotala 05 December 2011 12:41:53PM 4 points [-]

Granted.

Comment author: paper-machine 05 December 2011 03:15:19AM 3 points [-]

I expected almost everyone to agree with Eliezer on most important things

Why? Don't you encounter enough contrarians on LW?

Comment author: gwern 05 December 2011 04:33:45AM *  15 points [-]

You may think you encounter a lot of contrarians on LW, but I disagree - we're all sheep.

But seriously, look at that MWI poll result. How many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? ('Collapse, MWI, uh...') Much less could they discuss why they dislike pilot waves or whatever. I suspect far fewer could do so than plumped for MWI - because Eliezer is such a fan...

Comment author: Armok_GoB 05 December 2011 02:52:10PM *  13 points [-]

I know I am a sheep and hero worshipper, and then the typical mind fallacy happened.

Comment author: paper-machine 05 December 2011 06:10:39AM 2 points [-]

Heh. The original draft of my comment above included just this example.

To be explicit, I don't believe that anyone with little prior knowledge about QM should update toward MWI by any significant amount after reading the QM sequence.

Comment author: ArisKatsaris 05 December 2011 03:06:17PM *  10 points [-]

I disagree. I updated significantly in favour of MWI just because the QM sequence helped me introspect and perceive that much of my prior prejudice against MWI consisted of irrational biases such as "I don't think I would like it if MWI was true. Plus I find it a worn-out trope in science fiction. Also it feels like we live in a single world." or misapplications of rational ideas like "Wouldn't Occam's razor favor a single world?"

I still don't know much of the mathematics underpinning QM. I updated in favour of MWI simply by demolishing faulty arguments I had against it.

Comment author: paper-machine 05 December 2011 03:46:21PM 2 points [-]

I updated in favour of MWI simply by demolishing faulty arguments I had against it.

It seems like doing this would only restore you to a non-informative prior, which still doesn't cohere with the survey result. What positive evidence is there in the QM sequence for MWI?

Comment author: Luke_A_Somers 05 December 2011 04:31:08PM 3 points [-]

The positive evidence for MWI is that it's already there inside quantum mechanics until you change quantum mechanics in some specific way to get rid of it!

Comment author: kilobug 05 December 2011 04:36:23PM 2 points [-]

MWI, as beautiful as it is, won't fully convince me until it can explain the Born probabilities - other interpretations don't do any better, so it's not a point against MWI, but it's still an additional rule you need to make the jump between QM and what we actually see. As long as you need that additional rule, I have a deep feeling we haven't reached the bottom.

Comment author: selylindi 05 December 2011 07:53:11PM 2 points [-]

Demographically, there is one huge cluster of Less Wrongers: 389 (42%) straight white (including Hispanics) atheist males (including FTM) under 48 who are in STEM. I don't actually know if that characterizes Eliezer.

It's slightly comforting to me to know that a majority of LWers are outside that cluster in one way or another.

Comment author: XiXiDu 04 December 2011 08:12:29PM 1 point [-]

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader.

This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of all respondents have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.

Is the issue too hard to grasp for most people or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?

Comment author: TheOtherDave 04 December 2011 08:42:44PM 21 points [-]

The irony of this is that if, say, 83.5% of respondents instead thought UFAI was the most worrisome existential risk, that would likely be taken as evidence that the LW community was succumbing to groupthink.

Comment author: steven0461 04 December 2011 09:34:30PM *  7 points [-]

The sequences aren't necessarily claiming UFAI is the single most worrisome risk, just a seriously worrisome risk.

Comment author: thomblake 05 December 2011 03:55:19PM 4 points [-]

Don't forget - even if unfriendly AI wasn't a major existential risk, Friendly AI is still potentially the best way to combat other existential risks.

Comment author: kilobug 05 December 2011 04:24:56PM 3 points [-]

It's probably the best long-term way. But if you estimate it'll take 50 years to get FAI, and that some existential risks have a significant probability of happening in 10 or 20 years, then you'd better try to address them without waiting for FAI - or you're likely never to reach the FAI stage.

Among 7 billion humans, it's sane for some individuals to focus on FAI now, since it's a hard problem and we have to start early; but it's also normal that not all of us focus on FAI - some should focus on other ways to mitigate the existential risks we estimate are likely to occur before FAI/uFAI.

Comment author: Dorikka 05 December 2011 01:00:06AM 4 points [-]

More that I think there's a significant chance that we're going to get blown up by nukes or a bioweapon before then.

Comment author: kilobug 04 December 2011 10:32:27PM 4 points [-]

For me the issue is with "the most". Unfriendly AI is a worrisome existential risk, but it still relies on technological breakthroughs that we can't clearly estimate, while a bioengineered pandemic is something that may very well be possible in the short-term future.

That doesn't mean SIAI isn't doing an important job - Friendly AI is a hard task. If you start to try to solve a hard problem when you're about to die if you don't, well, it's too late. So it's great SIAI people are here to hack away the edges on the problem now.

Comment author: army1987 04 December 2011 09:59:51PM *  8 points [-]

The question IIRC wasn't about the most worrisome, but about the most likely -- it is not inconsistent to assign to uFAI (say) 1000 times the disutility of nuclear war but only 0.5 times its probability.

(ETA: I'm assuming worrisomeness is defined as the product of probability times disutility, or a monotonic function thereof.)
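The arithmetic in that parenthetical can be made concrete. A minimal sketch, using made-up numbers chosen only to match the ratios in the comment (uFAI assigned 1000 times the disutility of nuclear war but 0.5 times its probability):

```python
# Worrisomeness defined as probability × disutility, per the comment above.
# The specific probabilities and disutilities below are illustrative
# assumptions, not survey data.
nuclear_war = {"probability": 0.10, "disutility": 1.0}
ufai = {"probability": 0.05, "disutility": 1000.0}  # 0.5× the probability, 1000× the disutility

def worrisomeness(risk):
    """Expected disutility: probability times disutility."""
    return risk["probability"] * risk["disutility"]

# uFAI is the less *likely* risk...
assert ufai["probability"] < nuclear_war["probability"]

# ...yet under this definition it is 500 times more *worrisome*,
# so "most likely" and "most worrisome" can rank risks differently.
ratio = worrisomeness(ufai) / worrisomeness(nuclear_war)
print(ratio)  # → 500.0
```

This is just the point that a probability-ranking question and a worrisomeness-ranking question need not produce the same ordering.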

Comment author: Giles 05 December 2011 08:42:11PM *  2 points [-]

I think that worrisomeness should also factor in our ability to do anything about the problem.

If I'm selfish, then I don't particularly need to worry about global catastrophic risks that will kill (almost) everyone - I'd just die and there's nothing I can do about it. I'd worry more about risks that are survivable, since they might require some preparation.

If I'm altruistic then I don't particularly need to worry about risks that are inevitable, or where there is already well-funded and sane mitigation effort going on (since I'd have very little individual ability to make a difference to the probability). I might worry more about risks that have a lower expected disutility but where the mitigation effort is drastically underfunded.

(This is assuming real-world decision theory degenerates into something like CDT; if instead we adopt a more sophisticated decision theory and suppose there are enough other people in our reference class then "selfish" people would behave more like the "altruistic" people in the above paragraph).

Comment author: J_Taylor 04 December 2011 08:35:47PM 1 point [-]

I have no idea if this is universal. (Probably not.) However, in my area, using the term "blacks" in certain social circles is not considered proper vocabulary.

I don't have any huge problem with using the term. However, using it may be bad signalling and leaves Less Wrong vulnerable to pattern-matching.

Comment author: Yvain 04 December 2011 09:15:54PM 10 points [-]

What would you prefer? "Blacks" is the way I've seen it used in medical and psychological journal articles.

Comment author: J_Taylor 04 December 2011 09:23:06PM 5 points [-]

Journals use "blacks"? I had no idea it was used in technical writing. In some of my social circles, it just happens to be considered, at best, grandma-talk.

Generally, within these circles, "black people" is used.

However, I have no real preference regarding this matter.

Comment author: Jack 04 December 2011 09:01:58PM 3 points [-]

What is your area?

Comment author: J_Taylor 04 December 2011 09:10:09PM 2 points [-]

Southern United States.

Comment author: Jack 04 December 2011 09:17:58PM 12 points [-]

The plural can look weird but as long as it doesn't come after a definite article, it's the standard term and I've never met anyone who was offended by it. The usual politically correct substitute, African-American, is offensive in an international context.

Comment author: [deleted] 05 December 2011 05:03:26AM 5 points [-]

For what it's worth, I'm also from the southern US, and I also have the impression that "blacks" is slightly cringey and "black people" is preferred.