2011 Survey Results

Post author: Yvain 05 December 2011 10:49AM 94 points

A big thank you to the 1090 people who took the second Less Wrong Census/Survey.

Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.

I apologize for failing to ask who had or did not have an LW account. Because there are a number of these failures, I'm putting them all in a comment to this post so they don't clutter the survey results. Please talk about changes you want for next year's survey there.

Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percentage of women has tripled since the last survey in mid-2009.

We're also a little more diverse than we were in 2009; the percentage of non-whites has risen from 6% to just below 10%. Along with 944 whites (86%), we include 38 Hispanics (3.5%), 31 East Asians (2.8%), 26 Indian Asians (2.4%) and 4 blacks (.4%).

Age ranged from a supposed minimum of 1 (they start making rationalists early these days?) to a more plausible minimum of 14, to a maximum of 77. The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60. The average Less Wronger has aged about one week since spring 2009 - so clearly all those anti-agathics we're taking are working!

In order of frequency, we include 366 computer scientists (33.6%), 174 people in the hard sciences (16%), 80 people in finance (7.3%), 63 people in the social sciences (5.8%), 43 people involved in AI (3.9%), 39 philosophers (3.6%), 15 mathematicians (1.5%), 14 statisticians (1.3%), 15 people involved in law (1.5%) and 5 people in medicine (.5%).

48 of us (4.4%) teach in academia, 470 (43.1%) are students, 417 (38.3%) do for-profit work, 34 (3.1%) do non-profit work, 41 (3.8%) work for the government, and 72 (6.6%) are unemployed.

418 people (38.3%) have yet to receive any degrees, 400 (36.7%) have a Bachelor's or equivalent, 175 (16.1%) have a Master's or equivalent, 65 people (6%) have a Ph.D, and 19 people (1.7%) have a professional degree such as an MD or JD.

345 people (31.7%) are single and looking, 250 (22.9%) are single but not looking, 286 (26.2%) are in a relationship, and 201 (18.4%) are married. There are striking differences between men and women: women are more likely to be in a relationship and less likely to be single and looking (33% of men vs. 19% of women). All of these numbers look a lot like the ones from 2009.

27 people (2.5%) are asexual, 119 (10.9%) are bisexual, 24 (2.2%) are homosexual, and 902 (82.8%) are heterosexual.

625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn't really know. These numbers were similar between men and women.

The most popular political view, at least according to the much-maligned categories on the survey, was liberalism, with 376 adherents and 34.5% of the vote. Libertarianism followed at 352 (32.3%), then socialism at 290 (26.6%), conservatism at 30 (2.8%) and communism at 5 (.5%).

680 people (62.4%) were consequentialist, 152 (13.9%) virtue ethicist, 49 (4.5%) deontologist, and 145 (13.3%) did not believe in morality.

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual, 97 (8.9%) were agnostic, 30 (2.8%) were deist or pantheist or something along those lines, and 39 people (3.5%) described themselves as theists (20 committed plus 19 lukewarm).

425 people (38.1%) grew up in some flavor of nontheist family, compared to 297 (27.2%) in committed theist families and 356 (32.7%) in lukewarm theist families. Common family religious backgrounds included Protestantism with 451 people (41.4%), Catholicism with 289 (26.5%), Judaism with 102 (9.4%), Hinduism with 20 (1.8%), Mormonism with 17 (1.6%) and traditional Chinese religion with 13 (1.2%).

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140. Numbers ranged from 110 all the way up to 204 (for reference, Marilyn vos Savant, who holds the Guinness World Record for highest adult IQ ever recorded, has an IQ of 185).

89 people (8.2%) have never looked at the Sequences; a further 234 (21.5%) have only given them a quick glance. 170 people (15.6%) have read about 25% of the Sequences, 169 (15.5%) about 50%, 167 (15.3%) about 75%, and 253 people (23.2%) said they've read almost all of them. This last number is actually lower than the 302 people who have been here since the Overcoming Bias days when the Sequences were still being written (27.7% of us).

The other 72.3% of people had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

Of those people who chose to list their karma, the mean value was 658 and the median was 40 (these numbers are pretty meaningless, because some people with zero karma put that down and other people did not).

Of those people willing to admit the time they spent on Less Wrong, after eliminating one outlier (sorry, but you don't spend 40579 minutes daily on LW; even I don't spend that long) the mean was 21 minutes and the median was 15 minutes. There were at least a dozen people in the two to three hour range, and the winner (well, except the 40579 guy) was someone who says he spends five hours a day.

I'm going to give all the probabilities in the form [mean, (25%-quartile, 50%-quartile/median, 75%-quartile)]. There may have been some problems here revolving around people who gave numbers like .01: I didn't know whether they meant 1% or .01%. Excel helpfully rounded all numbers down to two decimal places for me, and after a while I decided not to make it stop: unless I wanted to do geometric means, I can't do justice to really small gradations in probability.

The Many Worlds hypothesis is true: 56.5, (30, 65, 80)
There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)
The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)
God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)
Average person cryonically frozen today will be successfully revived: 21.1, (1, 10, 30)
Someone now living will reach age 1000: 23.6, (1, 10, 30)
We are living in a simulation: 19, (.23, 5, 33)
Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)
Humanity will make it to 2100 without a catastrophe killing >90% of us: 67.6, (50, 80, 90)

There were a few significant demographic differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy. Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke). People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader. Unfriendly AI followed with 180 votes (16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes (13.3%), economic/political collapse with 134 votes (12.3%), and asteroids and nanotech bringing up the rear with 46 votes each (4.2%).

The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).
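For anyone who wants to replicate that check on the published spreadsheet, here is a minimal sketch of the kind of before/after comparison described. The numbers below are invented, and Welch's test is used here, which may differ from the exact variant that was actually run:

    from scipy import stats

    # Hypothetical Singularity-year guesses gathered before and after the
    # question order was changed (these values are invented for illustration).
    before_change = [2060, 2080, 2100, 2075, 2150, 2090]
    after_change = [2055, 2090, 2070, 2120, 2085, 2100, 2200]

    # Welch's two-sample t-test (does not assume equal variances).
    t_stat, p_value = stats.ttest_ind(before_change, after_change, equal_var=False)
    print(t_stat, p_value)  # a large p-value means no detectable anchoring effect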

Only 49 people (4.5%) have never considered cryonics or don't know what it is. Of the remainder, 388 (35.6% of all respondents) reject it, 583 (53.5%) are considering it, and 47 (4.3%) are already signed up for it. That's more than double the percent signed up in 2009.

231 people (23.4% of respondents) have attended a Less Wrong meetup.

The average person was 37.6% sure their IQ would be above average - underconfident! Imagine that! (quartiles were 10, 40, 60). The mean was 54.5% for people whose IQs really were above average, and 29.7% for people whose IQs really were below average. There was a correlation of .479 (significant at less than 1% level) between IQ and confidence in high IQ.

Isaac Newton published his Principia Mathematica in 1687. Although people guessed dates as early as 1250 and as late as 1960, the mean was...1687 (quartiles were 1650, 1680, 1720). This marks the second consecutive year that the average answer to these difficult historical questions has been exactly right (to be fair, last time it was the median that was exactly right and the mean was all of eight months off). Let no one ever say that the wisdom of crowds is not a powerful tool.

The average person was 34.3% confident in their answer, but 41.9% of people got the question right (again with the underconfidence!). There was a highly significant correlation of r = -.24 between confidence and number of years error.

This graph may take some work to read. The x-axis is confidence. The y-axis is what percent of people were correct at that confidence level. The red line you recognize as perfect calibration. The thick green line is your results from the Newton problem. The black line is results from the general population I got from a different calibration experiment tested on 50 random trivia questions; take the intercomparability of the two with a grain of salt.

As you can see, Less Wrong does significantly better than the general population. However, there are a few areas of failure. First is that, as usual, people who put zero and one hundred percent had nonzero chances of getting the question right or wrong: 16.7% of people who put "0" were right, and 28.6% of people who put "100" were wrong (interestingly, people who put 100 did worse than the average of everyone else in the 90-99 bracket, of whom only 12.2% erred). Second of all, the line is pretty horizontal from zero to fifty or so. People who thought they had a >50% chance of being right had excellent calibration, but people who gave themselves a low chance of being right were poorly calibrated. In particular, I was surprised to see so many people put numbers like "0". If you're pretty sure Newton lived after the birth of Christ, but before the present day, that alone gives you a 1% chance of randomly picking the correct 20-year interval.
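For anyone who wants to reproduce a curve like the green line from the published spreadsheet, here is a minimal sketch of one way to compute it. The function, the toy data, and the equal-width binning are illustrative assumptions, not the procedure actually used for the graph:

    import numpy as np

    def calibration_curve(confidences, correct, bins=10):
        """Group answers by stated confidence (0-100%) and return, for each
        non-empty bin, the mean stated confidence and the fraction correct."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 100.0, bins + 1)
        idx = np.digitize(confidences, edges[1:-1])  # bin index in 0..bins-1
        points = []
        for b in range(bins):
            mask = idx == b
            if mask.any():
                points.append((confidences[mask].mean(), correct[mask].mean()))
        return points

    # Invented toy data: stated confidence in the Newton answer, and whether
    # the guessed year fell within the correct 20-year window (1 = correct).
    print(calibration_curve([5, 30, 50, 80, 95, 100], [0, 0, 1, 1, 1, 0], bins=5))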

160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.

Comments (513)

Comment author: Yvain 04 December 2011 07:14:42PM *  40 points [-]

Running list of changes for next year's survey:

  1. Ask who's a poster versus a lurker!
  2. A non-write-in "Other" for most questions
  3. Replace "gender" with "sex" to avoid complaints/philosophizing.
  4. Very very clear instructions to use percent probabilities and not decimal probabilities
  5. Singularity year question should have explicit instructions for people who don't believe in singularity
  6. Separate out "relationship status" and "looking for new relationships" questions to account for polys
  7. Clarify that research is allowed on the probability questions
  8. Clarify possible destruction of humanity in cryonics/antiagathics questions.
  9. What does it mean for aliens to "exist in the universe"? Light cone?
  10. Make sure people write down "0" if they have 0 karma.
  11. Add "want to sign up, but not available" as cryonics option.
  12. Birth order.
  13. Have children?
  14. Country of origin?
  15. Consider asking about SAT scores for Americans to have something to correlate IQs with.
  16. Consider changing morality to PhilPapers version.
Comment author: MixedNuts 04 December 2011 07:30:56PM 10 points [-]

You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?

Comment author: orthonormal 04 December 2011 07:32:37PM 23 points [-]

Regarding #4, you could just write a % symbol to the right of each input box.

Comment author: [deleted] 04 December 2011 09:47:50PM *  11 points [-]

BTW, I'd also disallow 0 and 100, and give the option of giving log-odds instead of probability (and maybe encourage people to do that for probabilities <1% and >99%). Someone's “epsilon” might be 10^-4 whereas someone else's might be 10^-30.

Comment author: brilee 05 December 2011 03:32:08PM 6 points [-]

I second that. See my post at http://lesswrong.com/r/discussion/lw/8lr/logodds_or_logits/ for a concise summary. Getting the LW survey to use log-odds would go a long way towards getting LW to start using log-odds in normal conversation.

Comment author: Luke_A_Somers 05 December 2011 04:40:31PM *  5 points [-]

People will mess up the log-odds, though. Non-log odds seem safer.

Odds of ...

Someone living today living for over 1000 subjectively experienced years : No one living today living for over 1000 subjectively experienced years

[ ] : [ ]

Two fields instead of one, but it seems cleaner than any of the other alternatives.

Comment author: [deleted] 05 December 2011 06:41:35PM *  4 points [-]

The point is not having to type lots of zeros (or of nines) with extreme probabilities (so that people won't weasel out and use ‘epsilon’); having to type 1:999999999999999 is no improvement over having to type 0.000000000000001.
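To illustrate the conversion under discussion (a sketch, not part of the survey): in base-10 log-odds an extreme probability collapses to a small signed number, which is exactly the point.

    import math

    def log10_odds(p):
        """Convert a probability to base-10 log-odds."""
        return math.log10(p / (1.0 - p))

    print(log10_odds(1e-15))     # about -15: no string of fifteen zeros to type
    print(log10_odds(0.999999))  # about +6
    print(log10_odds(0.5))       # 0: even odds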

Comment author: Armok_GoB 04 December 2011 08:22:53PM -2 points [-]

That list is way, way too short. I entirely gave up on the survey partway through because an actual majority of the questions were inapplicable or downright offensive to my sensibilities, or just incomprehensible, or I couldn't answer them for some other reason.

Not that I can think of anything that WOULDN'T have that effect on me without being specifically tailored to me, which sort of destroys the point of having a survey... Maybe I'm just incompatible with surveys in general.

Comment author: NancyLebovitz 04 December 2011 09:00:04PM 1 point [-]

Would you be willing to write a discussion post about the questions you want to answer?

Comment author: Jack 04 December 2011 08:43:02PM 17 points [-]

I'd love a specific question on moral realism instead of leaving it as part of the normative ethics question. I'd also like to know about psychiatric diagnoses (autism spectrum, ADHD, depression, whatever else seems relevant)-- perhaps automatically remove those answers from a spreadsheet for privacy reasons.

Comment author: NancyLebovitz 05 December 2011 01:27:02AM 13 points [-]

I don't care about moral realism, but psychiatric diagnoses (and whether they're self-diagnosed or formally diagnosed) would be interesting.

Comment author: [deleted] 04 December 2011 09:43:39PM 28 points [-]

One about nationality (and/or native language)? I guess that would be much more relevant than e.g. birth order.

Comment author: lavalamp 05 December 2011 04:20:39AM 9 points [-]

Suggestion: add "cryocrastinating" as a cryonics option.

Comment author: CharlesR 05 December 2011 07:44:12AM 6 points [-]

You should clarify in the antiagathics question that the person reaches the age of 1000 without the help of cryonics.

Comment author: Jayson_Virissimo 05 December 2011 11:15:51AM *  6 points [-]

I think using your stipulative definition of "supernatural" was a bad move. I would be very surprised if I asked a theologian to define "supernatural" and they replied "ontologically basic mental entities". Even as a rational reconstruction of their reply, it would be quite a stretch. Using such specific definitions of contentious concepts isn't a good idea, if you want to know what proportion of Less Wrongers self-identify as atheist/agnostic/deist/theist/polytheist.

Comment author: TheOtherDave 05 December 2011 03:05:54PM 1 point [-]

OTOH, using a vague definition isn't a good idea either, if you want to know something about what Less Wrongers believe about the world.

I had no problem with the question as worded; it was polling about LWers confidence in a specific belief, using terms from the LW Sequences. That the particular belief is irrelevant to what people who self-identify as various groups consider important about that identification is important to remember, but not in and of itself a problem with the question.

But, yeah... if we want to know what proportion of LWers self-identify as (e.g.) atheist, that question won't tell us.

Comment author: dlthomas 05 December 2011 05:57:52PM 0 points [-]

Very very clear instructions to use percent probabilities and not decimal probabilities

I would much rather see a choice of units.

Comment author: Jack 05 December 2011 06:03:27PM 17 points [-]

We should ask if people participated in the previous surveys.

Comment author: selylindi 05 December 2011 07:37:22PM *  5 points [-]

Yet another alternate, culture-neutral way of asking about politics:

Q: How involved are you in your region's politics compared to other people in your region?
A: [choose one]
() I'm among the most involved
() I'm more involved than average
() I'm about as involved as average
() I'm less involved than average
() I'm among the least involved

Comment author: prase 05 December 2011 08:01:46PM *  11 points [-]

When asking for race/ethnicity, you should really drop the standard American classification into White - Hispanic - Black - Indian - Asian - Other. From a non-American perspective this looks weird, especially the "White Hispanic" category. A Spaniard is White Hispanic, or just White? If only White, how does the race change when one moves to another continent? And if White Hispanic, why not have also "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?

Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.

Also, making a statistic based on nationalities could help people determine whether there is a chance for a meetup in their country. And it would be nice to know whether LW has regular readers in Liechtenstein, of course.

Comment author: Oscar_Cunningham 04 December 2011 07:43:43PM *  7 points [-]

There were a few significant demographics differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy.

This bit is interesting. If our age and gender affect our beliefs, then at least some of us are doing it wrong. Update accordingly. I'm young and male, so I should give less credence to global warming and more credence to nearby aliens.

Comment author: [deleted] 04 December 2011 08:11:19PM 12 points [-]

You have that backwards. If you're young and male, you should suspect that part of your confidence in global warming and lack of aliens is due to your demographics, and therefore update away from global warming and toward aliens.

Comment author: Oscar_Cunningham 04 December 2011 08:38:34PM *  1 point [-]

Thanks! Fixed.

Comment author: Unnamed 04 December 2011 07:48:02PM 2 points [-]

Could you make a copy of the survey (with the exact wordings of all the questions) available for download?

Comment author: Yvain 04 December 2011 08:40:35PM 1 point [-]

I've re-opened the survey at https://docs.google.com/spreadsheet/viewform?formkey=dHlYUVBYU0Q5MjNpMzJ5TWJESWtPb1E6MQ , but please don't send in any more responses.

Comment author: Craig_Heldreth 04 December 2011 08:00:47PM 33 points [-]

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

This is not just intriguing. To me this is the single most significant finding in the survey.

Comment author: steven0461 05 December 2011 03:16:44AM *  11 points [-]

It's also worrying, because it means we're not getting better on average.

Comment author: RichardKennaway 05 December 2011 12:59:48PM 15 points [-]

If the readership of LessWrong has gone up similarly in that time, then I would not expect to see an improvement, even if everyone who reads LessWrong improves.

Comment author: endoself 05 December 2011 03:53:48AM *  3 points [-]

It just means that we're at a specific point in memespace. The hypothesis that we are all rational enough to identify the right answers to all of these questions wouldn't explain the observed degree of variance.

Comment author: XiXiDu 04 December 2011 08:12:29PM 1 point [-]

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader.

This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of all people have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.

Is the issue too hard to grasp for most people or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?

Comment author: TheOtherDave 04 December 2011 08:42:44PM 21 points [-]

The irony of this is that if, say, 83.5% of respondents instead thought UFAI was the most worrisome existential risk, that would likely be taken as evidence that the LW community was succumbing to groupthink.

Comment author: Sophronius 04 December 2011 08:57:25PM 1 point [-]

My prior belief was that people on less wrong would overestimate the danger of unfriendly ai due to it being part of the reason for Less Wrong's existence. That probability has decreased since seeing the results, but as I see no reason to believe the opposite would be the case, the effect should still be there.

Comment author: TheOtherDave 04 December 2011 09:08:57PM 0 points [-]

I don't quite understand your final clause. Are you saying that you still believe a significant number of people on LW overestimate the danger of UFAI, but that your confidence in that is lower than it was?

Comment author: Sophronius 04 December 2011 11:31:09PM *  -1 points [-]

More or less. I meant that I now estimate a reduced but still non-zero probability of upwards bias, but only a negligible probability of a bias in the other direction. So the average expected upward bias is decreased but still positive. Thus I should adjust the probability of human extinction being due to unfriendly ai downwards. Of course, the possibility of less wrong over or underestimating existential risk in general is another matter.

Comment author: steven0461 04 December 2011 09:34:30PM *  7 points [-]

The sequences aren't necessarily claiming UFAI is the single most worrisome risk, just a seriously worrisome risk.

Comment author: [deleted] 04 December 2011 09:59:51PM *  8 points [-]

The question IIRC wasn't about the most worrisome, but about the most likely -- it is not inconsistent to assign to uFAI (say) 1000 times the disutility of nuclear war but only 0.5 times its probability.

(ETA: I'm assuming worrisomeness is defined as the product of probability times disutility, or a monotonic function thereof.)

Comment author: kilobug 04 December 2011 10:32:27PM 4 points [-]

For me the issue is with "the most". Unfriendly AI is a worrisome existential risk, but it still relies on technological breakthroughs that we can't clearly estimate, while a "bioengineered pandemic" is something that may very well be possible in the short-term future.

That doesn't mean SIAI isn't doing an important job - Friendly AI is a hard task. If you start to try to solve a hard problem when you're about to die if you don't, well, it's too late. So it's great SIAI people are here to hack away at the edges of the problem now.

Comment author: Dorikka 05 December 2011 01:00:06AM 4 points [-]

More that I think there's a significant chance that we're going to get blown up by nukes or a bioweapon before then.

Comment author: thomblake 05 December 2011 03:55:19PM 4 points [-]

Don't forget - even if unfriendly AI wasn't a major existential risk, Friendly AI is still potentially the best way to combat other existential risks.

Comment author: cousin_it 05 December 2011 03:59:57PM 1 point [-]

How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?

Comment author: thomblake 05 December 2011 04:11:51PM 3 points [-]

I don't. Just imagine a hypothetical world where lots of other things are much more certain to kill us much sooner, if we don't get FAI to solve them soon.

Comment author: TheOtherDave 05 December 2011 04:30:28PM 6 points [-]

Hypothetically suppose the following (throughout, assume "AI" stands for significantly superhuman artificial general intelligence):

1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100.
2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us.
3) if we develop FAI before UFAI and before 2100, FAI saves us.
4) FAI isn't particularly harder to build than UFAI is.

Given those premises, it's true that UFAI isn't a major existential risk, in that even if we do nothing about it, UFAI won't kill us. But it's also true that FAI is the best (indeed, the only) way to save us.

Are those premises internally contradictory in some way I'm not seeing?

Comment author: cousin_it 05 December 2011 04:33:29PM 4 points [-]

No, you're right. thomblake makes the same point. I just wasn't thinking carefully enough. Thanks!

Comment author: kilobug 05 December 2011 04:24:56PM 3 points [-]

It's the best long-term way, probably. But if you estimate it'll take 50 years to get a FAI and that some of the existential risks have a significant probability of happening in 10 or 20 years, then you'd better try to address them without requiring FAI - or you're likely to never reach the FAI stage.

Among 7 billion humans, it's sane to have some individuals focus on FAI now, since it's a hard problem and we have to start early; but it's also normal for not all of us to focus on FAI, and to also work on other ways to mitigate the existential risks that we estimate are likely to occur before FAI/uFAI.

Comment author: michaelsullivan 05 December 2011 08:09:28PM 0 points [-]

The phrasing of the question was quite specific: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?"

If I estimate a very small probability of either FAI or UFAI before 2100, then I'm not likely to choose UFAI as "most likely to wipe out 90% of humanity before 2100" if I think there's a solid chance for something else to do so.

Consider that I interpreted the singularity question to mean "if you think there is any real chance of a singularity, then in the case that the singularity happens, give the year by which you think it has 50% probability." and answered with 2350, while thinking that the singularity had less than a 50% probability of happening at all.

Yes, Yvain did say to leave it blank if you don't think there will be a singularity. Given the huge uncertainty involved in anyone's prediction of the singularity or any question related to it, I took "don't believe it will happen" to mean that my estimated chance was low enough to not be worth reasoning about the case where it does happen, rather than that my estimate was below 50%.

Comment author: NancyLebovitz 04 December 2011 08:12:39PM 18 points [-]

Michael Vassar has mentioned to me that the proportion of first/only children at LW is extremely high. I'm not sure whether birth order makes a big difference, but it might be worth asking about. By the way, I'm not only first-born, I'm the first grandchild on both sides.

Questions about akrasia-- Do you have no/mild/moderate/serious problems with it? Has anything on LW helped?

I left some of the probability questions blank because I realized I had no idea of a sensible probability, and I especially mean whether we're living in a simulation.

It might be interesting to ask people whether they usually vote.

The link to the survey doesn't work because the survey is closed-- could you make the text of the survey available?

Comment author: steven0461 04 December 2011 08:59:15PM 8 points [-]

There was a poll about firstborns.

Comment author: falenas108 04 December 2011 09:37:54PM 1 point [-]

That poll shows a remarkable result: people who are the oldest sibling outnumber those who have older siblings 2:1.

There are also twice as many only children in that survey as in the U.S. population in 1980, but that is a known effect.

Comment author: steven0461 04 December 2011 09:42:28PM *  3 points [-]

More than 3:1 even. I speculated a bit here.

Comment author: Eliezer_Yudkowsky 05 December 2011 01:24:23AM 10 points [-]

By the way, I'm not only first-born, I'm the first grandchild on both sides.

So am I! I wonder if being the first-born is genetically heritable.

Comment author: MixedNuts 05 December 2011 01:32:22AM 14 points [-]

Yes. Being first-born is correlated with having few siblings, which is correlated with parents with low fertility, which is genetically inherited from grandparents with low fertility, which is correlated with your parents having few siblings, which is correlated with them being first-born.

Comment author: Zack_M_Davis 05 December 2011 04:24:25AM *  8 points [-]

is correlated with [...] which is correlated with [...] which is genetically inherited from [...] which is correlated with

I agree with your conclusion that the heritability of firstbornness is nonzero, but I'm not sure this reasoning is valid. (Pearson) correlation is not, in general, transitive: if X is correlated with Y and Y is correlated with Z, it does not necessarily follow that X is correlated with Z unless the squares of the correlation coefficients between X and Y and between Y and Z sum to more than one.
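For concreteness, the bound behind that condition (a standard consequence of the 3x3 correlation matrix having to be positive semidefinite) can be written as

    r_{XZ} \ge r_{XY} r_{YZ} - \sqrt{(1 - r_{XY}^2)(1 - r_{YZ}^2)},

so a positive r_{XZ} is guaranteed exactly when r_{XY}^2 + r_{YZ}^2 > 1 (taking both given correlations as positive).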

Actually calculating the heritability of firstbornness turns out to be a nontrivial math problem. For example, while it is obvious that having few siblings is correlated with being firstborn, it's not obvious to me exactly what that correlation coefficient should be, nor how to calculate it from first principles. When I don't know how to solve a problem from first principles, my first instinct is to simulate it, so I wrote a short script to calculate the Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n. It turns out that the correlation decreases as n gets larger (from ~0.58 for n=2 to ~0.31 for n=50), which fact probably has an obvious-in-retrospect intuitive explanation which I am somehow having trouble articulating explicitly ...

Ultimately, however, other priorities prevent me from continuing this line of inquiry at the present moment.
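For anyone who wants to pick this up, a guess at what the described script might look like. This is a sketch only: the sampling scheme (drawing a family size uniformly from 1 to n and then a birth position uniformly within that family) is an assumption, though it roughly reproduces the figures quoted above.

    import random
    import numpy as np

    def sibling_firstborn_correlation(n, samples=200_000):
        """Draw respondents whose family size is uniform on 1..n and whose birth
        position is uniform within the family; return the Pearson correlation
        between number of siblings and the indicator 'is not a firstborn'."""
        siblings, not_firstborn = [], []
        for _ in range(samples):
            size = random.randint(1, n)          # family size
            position = random.randint(1, size)   # birth order within the family
            siblings.append(size - 1)
            not_firstborn.append(1 if position > 1 else 0)
        return np.corrcoef(siblings, not_firstborn)[0, 1]

    print(sibling_firstborn_correlation(2))   # roughly 0.58
    print(sibling_firstborn_correlation(50))  # roughly 0.3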

Comment author: MatthewBaker 05 December 2011 04:22:05PM *  1 point [-]

Ditto :) but I intend to reproduce eventually in maximum useful volume.

Comment author: Armok_GoB 04 December 2011 08:17:56PM 2 points [-]

This made my trust in the community and my judgement of its average quality go down a LOT, and my estimate of my own value to the community, SIAI, and the world in general go up a LOT.

Comment author: Emile 04 December 2011 08:27:53PM *  11 points [-]

Which parts, specifically?

(it didn't have an effect like that on me, I didn't see that many surprising things)

Comment author: Armok_GoB 04 December 2011 11:45:58PM 3 points [-]

I expected almost everyone to agree with Eliezer on most important things, to have been here for a long time, to have read all the sequences, to spend lots of time here... In short, to be like the top posters seem to (and even with them the halo effect might be involved), except with lower IQ and/or writing skill.

Comment author: [deleted] 05 December 2011 03:15:19AM 3 points [-]

I expected almost everyone to agree with Eliezer on most important things

Why? Don't you encounter enough contrarians on LW?

Comment author: gwern 05 December 2011 04:33:45AM *  15 points [-]

You may think you encounter a lot of contrarians on LW, but I disagree - we're all sheep.

But seriously, look at that MWI poll result. How many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? ('Collapse, MWI, uh...' - much less could discuss why they dislike pilot waves or whatever.) I doubt many fewer could do so than plumped for MWI - because Eliezer is such a fan...

Comment author: [deleted] 05 December 2011 06:10:39AM 2 points [-]

Heh. The original draft of my comment above included just this example.

To be explicit, I don't believe that anyone with little prior knowledge about QM should update toward MWI by any significant amount after reading the QM sequence.

Comment author: ArisKatsaris 05 December 2011 03:06:17PM *  10 points [-]

I disagree. I updated significantly in favour of MWI just because the QM sequence helped me introspect and perceive that much of my prior prejudice against MWI consisted of irrational biases such as "I don't think I would like it if MWI was true. Plus I find it a worn-out trope in science fiction. Also it feels like we live in a single world." or misapplications of rational ideas like "Wouldn't Occam's razor favor a single world?"

I still don't know much of the mathematics underpinning QM. I updated in favour of MWI simply by demolishing faulty arguments I had against it.

Comment author: [deleted] 05 December 2011 03:46:21PM 2 points [-]

I updated in favour of MWI simply by demolishing faulty arguments I had against it.

It seems like doing this would only restore you to a non-informative prior, which still doesn't cohere with the survey result. What positive evidence is there in the QM sequence for MWI?

Comment author: ArisKatsaris 05 December 2011 04:02:41PM *  0 points [-]

It seems like doing this would only restore you to a non-informative prior,

I still had in my mind the arguments in favour of many-worlds, like "lots of scientists seem to take it seriously", and the basic argument that favors ever-increasing the size of reality: the more reality there is out there for intelligence to evolve in, the greater the likelihood of intelligence evolving.

What positive evidence is there in the QM sequence for MWI?

Well, it mentions some things like "it's deterministic and local, like all other laws of physics seem to be". Does that count?

Comment author: prase 05 December 2011 06:53:21PM 0 points [-]

Its determinism is of a very peculiar kind, not like the determinism that other laws of physics seem to have.

Comment author: Luke_A_Somers 05 December 2011 04:31:08PM 3 points [-]

The positive evidence for MWI is that it's already there inside quantum mechanics until you change quantum mechanics in some specific way to get rid of it!

Comment author: kilobug 05 December 2011 04:36:23PM 2 points [-]

MWI, as beautiful as it is, won't fully convince me until it can explain the Born probability - other interpretations don't do it any better, so it's not a point "against" MWI, but it's still an additional rule you need to make the "jump" between QM and what we actually see. As long as you need that additional rule, I have a deep feeling we haven't reached the bottom.

Comment author: Armok_GoB 05 December 2011 02:52:10PM *  13 points [-]

I know I am a sheep and hero worshipper, and then the typical mind fallacy happened.

Comment author: XiXiDu 05 December 2011 09:46:13AM *  25 points [-]

This made my trust in the community and my judgement of its average quality go down a LOT...

I expected almost everyone to agree with Eliezer on most important things...

Alicorn (top-poster) doesn't agree with Eliezer about ethics. PhilGoetz (top-poster) doesn't agree with Eliezer. Wei_Dai (top-poster) doesn't agree with Eliezer on AI issues. wedrifid (top-poster) doesn't agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.

I am pretty sure Yvain doesn't agree with Eliezer on quite a few things too (too lazy to look it up now).

Generally there are a lot of top-notch people who don't agree with Eliezer. Robin Hanson for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez or Katja Grace who has been a visiting fellow.

But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

Comment author: wallowinmaya 05 December 2011 12:26:38PM 6 points [-]

Holden Karnofsky has read all of the Sequences?

Comment author: XiXiDu 05 December 2011 06:39:35PM *  12 points [-]

Holden Karnofsky has read all of the Sequences?

I wrote him an email to make sure. Here is his reply:

I've read a lot of the sequences. Probably the bulk of them. Possibly all of them. I've also looked pretty actively for SIAI-related content directly addressing the concerns I've outlined (including speaking to different people connected with SIAI).

Comment author: Armok_GoB 05 December 2011 02:49:18PM 11 points [-]

I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples... I'm not sure what's going on, but I'm sure that whatever it is it's my fault and extremely unflattering to my ability as a rationalist.

How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I'm too lazy to form my own opinions! :p

Comment author: NancyLebovitz 05 December 2011 04:07:30PM 7 points [-]

I just wait, especially considering that which interpretation of QM is correct doesn't have urgent practical consequences.

Comment author: MatthewBaker 05 December 2011 04:28:00PM 0 points [-]

We just learned that neutrinos might be accelerated faster than light in certain circumstances. While this result doesn't give me too much pause, it certainly made me think about the possible practical consequences of successfully understanding quantum mechanics.

Comment author: NancyLebovitz 05 December 2011 04:32:30PM 0 points [-]

Fair enough. A deeper understanding of quantum mechanics would probably have huge practical consequences.

It isn't obvious to me that figuring out whether the MWI is right is an especially good way to improve understanding of QM. My impression from LW is that MWI is important here for looking at ethical consequences.

Comment author: MatthewBaker 05 December 2011 04:34:58PM *  0 points [-]

I share that impression :) Plus it's very fun to think about Everett branches and acausal trade when I pretend we would have a chance against a truly Strong AI in a box.

Comment author: beoShaffer 05 December 2011 08:04:27PM 5 points [-]

take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

IIRC Peter Thiel can't give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.

Comment author: Kaj_Sotala 05 December 2011 10:33:19AM *  19 points [-]

I expected almost everyone to agree with Eliezer on most important things

That would have made my trust in the community go down a lot. Echo chambers rarely produce good results.

Comment author: komponisto 05 December 2011 11:02:26AM 5 points [-]

Surely it depends on which questions are meant by "important things".

Comment author: Kaj_Sotala 05 December 2011 12:41:53PM 4 points [-]

Granted.

Comment author: Armok_GoB 05 December 2011 02:41:20PM 1 point [-]

The most salient one would be religion.

Comment author: Nick_Roy 05 December 2011 02:50:11PM 1 point [-]

What surprised you about the survey's results regarding religion?

Comment author: Armok_GoB 05 December 2011 03:55:37PM 1 point [-]

That there are theists around?

Comment author: Nick_Roy 05 December 2011 04:13:07PM 5 points [-]

Okay, but only 3.5%. I wonder how many are newbies who haven't read many of the sequences yet, and I wonder how many are simulists.

Comment author: thomblake 05 December 2011 05:02:45PM 4 points [-]

Since you seem to have a sense of the community, your surprise surprises me. Will_Newsome's contrarian defense of theism springs to mind immediately, and I know we have several people who are theists or were when they joined LW.

Also, many people could have answered the survey who are new here.

Comment author: TheOtherDave 05 December 2011 05:18:40PM 7 points [-]

It's also fairly unlikely that all the theists and quasitheists on LW have outed themselves as such.
Nor is there any particular reason they should.

Comment author: selylindi 05 December 2011 07:53:11PM 2 points [-]

Demographically, there is one huge cluster of Less Wrongers: 389 (42%) straight white (including Hispanics) atheist males (including FTM) under 48 who are in STEM. I don't actually know if that characterizes Eliezer.

It's slightly comforting to me to know that a majority of LWers are outside that cluster in one way or another.

Comment author: J_Taylor 04 December 2011 08:27:29PM *  23 points [-]

The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)

??

Comment author: Sophronius 04 December 2011 08:53:41PM 1 point [-]

Yea, I noticed that too. They are so close together that I wrote it off as noise, though. Otherwise, it can be explained by religious people being irrational and unwilling to place god in the same category as ghosts and other "low status" beliefs. That doesn't indicate irrationality on the part of the rest of less wrong.

Comment author: DanielLC 04 December 2011 10:48:05PM 3 points [-]

They are so close together that I wrote it off as noise, though.

That would work if they were separate surveys, but in order to get that on one survey, individual people would have to give a higher probability to God than to the supernatural.

Comment author: Sophronius 04 December 2011 11:23:07PM 3 points [-]

True, but this could be the result of a handful of people giving a crazy answer (noise). Not really indicative of less wrong as a whole. I imagine most less wrongers gave negligible probabilities for both, allowing a few religious people to skew the results.

Comment author: DanielLC 05 December 2011 02:28:28AM *  1 point [-]

I was thinking you meant statistical error.

Do you mean trolls, or people who don't understand the question?

Comment author: Sophronius 05 December 2011 11:52:18AM 2 points [-]

Neither, I meant people who don't understand that the probability of a god should be less than the probability of something supernatural existing. Add in religious certainty and you get a handful of people giving answers like P(god) = 99% and P(supernatural) = 50%, which can easily skew the results if the rest of Less Wrong gives probabilities like 1% and 2% respectively. Given what Yvain wrote in the OP though, I think there's also plenty of evidence of trolls upsetting the results somewhat at points.

Of course, it would make much more sense to ask Yvain for more data on how people answered this question rather than speculate on this matter :p

Comment author: Unnamed 04 December 2011 09:25:19PM *  21 points [-]

P(Supernatural) What is the probability that supernatural events, defined as those involving ontologically basic mental entities, have occurred since the beginning of the universe?

P(God) What is the probability that there is a god, defined as a supernatural (see above) intelligent entity who created the universe?

So deism (God creating the universe but not being involved in the universe once it began) could make p(God) > p(Supernatural).

Looking at the the data by individual instead of in aggregate, 82 people have p(God) > p(Supernatural); 223 have p(Supernatural) > p(God).

Comment author: J_Taylor 04 December 2011 09:31:04PM 7 points [-]

Given this, the numbers no longer seem anomalous. Thank you.

Comment author: CharlesR 05 December 2011 07:52:53AM 0 points [-]

Except that the question specified "God" as an ontologically basic mental entity.

Comment author: MixedNuts 05 December 2011 07:54:49AM 7 points [-]

So they believe that God created the universe, but has ceased to exist since.

We have 82 Nietzscheans.

Comment author: J_Taylor 04 December 2011 08:35:47PM 1 point [-]

I have no idea if this is universal. (Probably not.) However, in my area, using the term "blacks" in certain social circles is not considered proper vocabulary.

I don't have any huge problem with using the term. However, using it may be bad signalling and leaves Lesswrong vulnerable to pattern-matching.

Comment author: Jack 04 December 2011 09:01:58PM 3 points [-]

What is your area?

Comment author: J_Taylor 04 December 2011 09:10:09PM 2 points [-]

Southern United States.

Comment author: Jack 04 December 2011 09:17:58PM 12 points [-]

The plural can look weird but as long as it doesn't come after a definite article, it's the standard term and I've never met anyone who was offended by it. The usual politically correct substitute, African-American, is offensive in an international context.

Comment author: J_Taylor 04 December 2011 09:28:51PM 2 points [-]

I have never met any black person who was offended by it. I have met some white people who will take you less seriously if you use the term.

However, if it is the standard term then it is the standard term. I certainly would not replace it with African-American.

Comment author: fubarobfusco 05 December 2011 04:41:09AM *  11 points [-]

Moreover, there are plenty of black people in the world who are not African-American.

There's an infamous video from a few years back in which an American interviewer makes this mistake when talking to an Olympic athlete of British nationality and African ancestry. It becomes increasingly clear that the interviewer is merely doing a mental substitution of "African-American" for "black" without actually thinking about what the former term means ...

Comment author: J_Taylor 05 December 2011 04:49:53AM 2 points [-]

I do not use "African-American" to refer to non-Americans.

Comment author: [deleted] 05 December 2011 07:00:57PM *  7 points [-]

I even feel weird calling Obama an African-American (though I still do it, because he self-identifies as one). In my mental lexicon it usually specifically refers to descendants of the African slaves taken to the Americas a long time ago, whereas Obama's parents are a White American of English ancestry and a Kenyan who hadn't been to the US until college.

Comment author: anonymous259 05 December 2011 07:39:46PM 13 points [-]

Ironically, Obama is exactly the kind of person to whom that term should refer, if it means anything at all. Descendants of African slaves taken to the Americas a long time ago should have another term, such as "American blacks".

Despite his lack of membership in it, Obama self-identifies with the latter group for obvious political reasons; after all, "children of foreign exchange students" is not an important constituency.

Comment author: wedrifid 05 December 2011 09:38:47AM 4 points [-]

Moreover, there are plenty of black people in the world who are not African-American.

Come to think of it, we could put the emphasis on either of the terms.

Comment author: [deleted] 05 December 2011 05:03:26AM 5 points [-]

For what it's worth, I'm also from the southern US, and I also have the impression that "blacks" is slightly cringey and "black people" is preferred.

Comment author: Yvain 04 December 2011 09:15:54PM 10 points [-]

What would you prefer? "Blacks" is the way I've seen it used in medical and psychological journal articles.

Comment author: J_Taylor 04 December 2011 09:23:06PM 5 points [-]

Journals use "blacks"? I had no idea it was used in technical writing. In some of my social circles, it just happens to be considered, at best, grandma-talk.

Generally, within these circles, "black people" is used.

However, I have no real preference regarding this matter.

Comment author: wedrifid 05 December 2011 09:36:13AM 1 point [-]

What would you prefer? "Blacks" is the way I've seen it used in medical and psychological journal articles.

Seriously? That seems a little cavalier of them. The medical and psychological influence of race has less to do with skin color and a lot more to do with genetic population. That makes the term ambiguous to the point of uselessness. Unless "blacks" is assumed to mean, say, just those of African ancestry. In which case they could be writing "African".

Comment author: Jack 04 December 2011 08:39:02PM 43 points [-]

People who believed in high existential risk were ... more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Aliens existing but not yet colonizing multiple systems or broadcasting heavily is the response consistent with the belief that a Great Filter lies in front of us.

Comment author: Unnamed 04 December 2011 08:48:25PM 9 points [-]

It looks like about 6% of respondents gave their answers in decimal probabilities instead of percentages. 108 of the 930 people in the data file didn't have any answers over 1 for any of the probability questions, and 52 of those did have some answers (the other 56 left them all blank), which suggests that those 52 people were using decimals (and that's 6% of the 874 who answered at least one of the questions). So to get more accurate estimates of the means for the probability questions, you should either multiply those respondents' answers by 100, exclude those respondents when calculating the means, or multiply the means that you got by 1.06.

=IF(MAX(X2:AH2)<1.00001,1,0) is the Excel formula I used to find those 108 people (entered in row 2, then copied and pasted to the rest of the rows)
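The same heuristic, sketched in pandas rather than Excel, in case anyone wants to apply it to the published .xls file. The dataframe and column names below are hypothetical, and the cutoff of 1 mirrors the formula above:

    import pandas as pd

    # Hypothetical: df holds the survey responses and PROB_COLS lists the
    # probability-question columns (these column names are made up).
    PROB_COLS = ["p_manyworlds", "p_aliens", "p_god", "p_simulation"]

    def normalize_probabilities(df, cols=PROB_COLS):
        """Respondents whose answers never exceed 1 presumably used decimal
        probabilities; rescale their answers to percentages."""
        df = df.copy()
        answered = df[cols].notna().any(axis=1)
        used_decimals = answered & (df[cols].max(axis=1, skipna=True) <= 1.0)
        df.loc[used_decimals, cols] = df.loc[used_decimals, cols] * 100
        return df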

Comment author: ataftoti 04 December 2011 08:50:52PM 1 point [-]

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual

I'm curious as to how people interpreted this. Does the latter mean that one believes in the supernatural but without a god figure, e.g. Buddhism or New Age beliefs? This question looked confusing to me at first glance.

People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Why does it puzzle you?

Comment author: TheOtherDave 04 December 2011 09:37:09PM 2 points [-]

If I remember correctly, the terms were defined in the survey itself, such that "spiritual and atheist" was something like believing in ontologically basic mental entities but not believing in a God that met that description. I didn't find the question confusing, but I did find it only peripherally related to what most people mean by either term. That said, it is a standard LW unpacking of those terms.

Comment author: pedanterrific 05 December 2011 09:23:30AM 3 points [-]

People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Why does it puzzle you?

I assume because higher existential risk would seem to generalize to lower chances of aliens existing (because they had the same or similar existential risk as us).

Comment author: Dreaded_Anomaly 05 December 2011 01:29:01PM 1 point [-]

A more subtle interpretation, and one that I expect accounts for at least some of the people in this category, is that high existential risk makes it more likely that relatively nearby aliens exist but will never reach the point where they can contact us.

Comment author: Jayson_Virissimo 05 December 2011 11:21:53AM *  4 points [-]

I'm curious as to how people interpreted this. Does the latter mean that one believes in the supernatural but without a god figure, e.g. buddism, new age? This question looked confusing to me at first glance.

I would have expected the opposite given Yvain's definition of "supernatural". The existence of an agent (or agents) that created the universe seems much more likely than the existence of ontologically basic mental entities. After all, one man's lead software designer of the simulation is another man's god.

Comment author: kilobug 05 December 2011 12:36:01PM 3 points [-]

Here we reach a usual definition problem about "god". Is "god" just someone who created the universe, but with its own limits, or is he omnipotent, omniscient, eternal, and perfect, as in monotheistic religions? The lead software designer of the simulation would be the first, but very likely not the second. Probably best to just taboo the word "god" in that context.

Comment author: Vladimir_Nesov 04 December 2011 08:53:18PM *  21 points [-]

"less likely to believe in cryonics"

Rather, believe the probability of cryonics producing a favorable outcome to be less. This was a confusing question, because it wasn't specified whether it's total probability, since if it is, then probability of global catastrophe had to be taken into account, and, depending on your expectation about usefulness of frozen heads to FAI's value, probability of FAI as well (in addition to the usual failure-of-preservation risks). As a result, even though I'm almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as "doesn't believe in cryonics"?

(The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is a FAI, it's also unclear whether original individuals remain and it makes sense to count their individual lifespans.)

Comment author: steven0461 04 December 2011 09:32:47PM 0 points [-]

Do you think catastrophe is extremely probable, do you think frozen heads won't be useful to a Friendly AI's values, or is it a combination of both?

Comment author: Vladimir_Nesov 04 December 2011 10:30:36PM *  5 points [-]

Below is my attempt to re-do the calculations that led to that conclusion (this time, it's 4%).

  • FAI before WBE: 3%
  • Surviving to WBE: 60% (I assume cryonics revival feasible mostly only after WBE)
  • Given WBE, cryonics revival (actually happening for significant portion of cryonauts) before catastrophe or FAI: 10%
  • FAI given WBE (but before cryonics revival): 2%
  • Heads preserved long enough (given no catastrophe): 50%
  • Heads (equivalently, living humans) mattering/useful to FAI: less than 50%

In total: 6% for post-WBE revival potential and 4% for FAI revival potential; discounted by the 50% preservation probability and the 50% mattering-to-FAI probability, this gives roughly 4%.

(By "humans useful to FAI", I don't mean that specific people should be discarded, but that the difference to utility of the future between a case where a given human is initially present, and where they are lost, is significantly less than moral value of current human life, so that it might be better to keep them than not, but not that much better, for fungibility reasons.)

Comment author: steven0461 04 December 2011 10:53:11PM 0 points [-]

I'm not sure how to interpret the uploads-after-WBE-but-not-FAI scenario. Does that mean FAI never gets invented, possibly in a Hansonian world of eternally competing ems?

Comment author: Vladimir_Nesov 04 December 2011 11:13:31PM 1 point [-]

If you refer to "cryonics revival before catastrophe or FAI", I mean that catastrophe or FAI could happen (shortly) after, no-catastrophe-or-superintelligence seems very unlikely. I expect catastrophe very likely after WBE, also accounting for most of the probability of revival not happening after WBE. After WBE, greater tech argues for lower FAI-to-catastrophe ratio and better FAI theory argues otherwise.

Comment author: steven0461 04 December 2011 11:59:30PM 0 points [-]

So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?

Comment author: Vladimir_Nesov 05 December 2011 12:03:35AM *  2 points [-]

Yes. Still, if implemented as WBEs, they could live for significant subjective time, and then there's that 2% of FAI.

Comment author: steven0461 05 December 2011 12:10:55AM *  1 point [-]

In total, you're assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people, even most people "in the know". Do you have any thoughts on what is causing the difference?

Comment author: Vladimir_Nesov 05 December 2011 01:10:27AM *  1 point [-]

I expect that "no catastrophe" is almost the same as "eventually, FAI is built". I don't expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI is feasible too, but easier, and so more probable to come first in that case, but also possible when FAI is not yet feasible (theory isn't ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.

Comment author: wedrifid 05 December 2011 03:53:13AM *  0 points [-]

Heads (equivalently, living humans) mattering/useful to FAI: less than 50%.

For an evidently flexible definition of 'Friendly'. Along the lines of "Friendly to someone else perhaps but that guy's a jerk who literally wants me dead!"

Comment author: gwern 04 December 2011 09:35:41PM 13 points [-]

The other 72.3% of people who had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

I've long been interested in whether Eliezer's fanfiction is an effective strategy, since it's so attention-getting (when Eliezer popped up in The New Yorker recently, pretty much his whole blurb was a description of MoR).

Of the listed strategies, only 'blogs' was greater than MoR. The long tail is particularly worrisome to me: LW/OB have frequently been linked in or submitted to Reddit and Hacker News, but those two account for only 14 people? Admittedly, weak SEO in the sense of submitting links to social news sites is a lot less time-intensive than writing 1200-page Harry Potter fanfics, and Louie has been complaining about us not doing even that, but still, the numbers look to be in MoR's favor.

Comment author: Darmani 05 December 2011 03:39:07AM 5 points [-]

Keep in mind that many of these links were a long time ago. I came here from Overcoming Bias, but I came to Overcoming Bias from Hacker News.

Comment author: NancyLebovitz 05 December 2011 04:15:21PM 0 points [-]

I'm not sure why the long tail is worrisome. How can it be a bad thing for LW to be connected to people with a wide range of interests?

Comment author: gwern 05 December 2011 04:24:44PM *  3 points [-]

It's not a bad thing per se; it's bad that there is a long tail, or nothing but tail, despite scores (hundreds?) of posts over the years to two sites in particular that ought to be especially sympathetic to us. We shouldn't be seeing so few from Reddit and Hacker News!

Comment author: gwern 04 December 2011 09:39:40PM 6 points [-]

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

You have to admit, that's pretty awful. There's only a 20% difference, is that so?

Comment author: Tyrrell_McAllister 04 December 2011 10:41:26PM *  0 points [-]

There's only a 20% difference, is that so?

"20% difference" between what and what?

Comment author: gwern 04 December 2011 10:50:28PM 2 points [-]

The point being that if there is intelligent life elsewhere in the universe and it hasn't spread (in order to maintain the Great Silence), then the odds of our one galaxy, out of the millions or billions known, being the host ought to be drastically smaller, even if we try to appeal to reasons to think our galaxy special because of ourselves (e.g. panspermia).

Comment author: Oligopsony 05 December 2011 01:12:32AM 3 points [-]

Such a set of probabilities may be justified if you're very uncertain (as seems superficially reasonable) about the baseline probability of life arising in any given galaxy. So perhaps one might assign a ~40% chance that life is just incredibly likely, and almost every galaxy has multiple instances of biogenesis, and a ~40% chance that life is just so astronomically (har har har) improbable that the Earth houses the only example in the universe.

This is almost certainly much less reasonable once you start thinking about the Great Filter, unless you think the Filter is civilizations just happily chilling on their home planet or thereabouts for eons, but then not everybody's read or thought about the Filter.

Comment author: gwern 05 December 2011 04:31:38AM 1 point [-]

I was kind of hoping most LWers at least had heard of the Great Silence/Fermi controversy, though.

Comment author: wedrifid 05 December 2011 04:40:14AM *  0 points [-]

The bigger problem, to me, seems to be that both the numbers (galaxy and universe) are way too high. It seems like it should be more in the range of "meta-uncertainty + epsilon" for both answers. Maybe "epsilon * lots" for the universe one, but even that should be lower than the uncertainty component.

Comment author: NancyLebovitz 05 December 2011 04:17:14PM 0 points [-]

Maybe there should be a question or two about the Fermi paradox.

Comment author: Desrtopa 05 December 2011 02:29:48PM 1 point [-]

If the strong filter is propagation through space, then for rates people could plausibly assign to the occurrence of intelligent life, the probabilities could be nearly identical.

What are the odds that a randomly selected population of 10,000 has any left-handed people? What are the odds that an entire country does?
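
To illustrate with rough numbers (assuming left-handedness at about 10%; only the order of magnitude matters for the comparison):

    # Probability that at least one left-handed person exists in a group of n,
    # assuming each person is independently left-handed with probability p.
    p = 0.10  # rough rate of left-handedness; the exact value doesn't matter here

    def at_least_one(n, p=p):
        return 1 - (1 - p) ** n

    print(at_least_one(10_000))      # ~1.0 for a town of 10,000
    print(at_least_one(50_000_000))  # ~1.0 for a country of 50 million
    # Both are indistinguishable from certainty; this is the analogue of
    # "galaxy" vs. "universe" when life per site is sufficiently common.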

Comment author: Nornagest 05 December 2011 06:02:53PM *  1 point [-]

Ditto if the strong filter is technological civilization (which strikes me as unlikely, given the anthropological record, but it is one of the Drake terms). If there are ten thousand intelligent species in the galaxy but we're the only one advanced enough to be emitting on radio wavelengths, we'd never hear about any of the others.

Comment author: wedrifid 05 December 2011 04:09:05AM *  3 points [-]

You have to admit, that's pretty awful. There's only a 20% difference, is that so?

Fear not! The 28-percentage-point difference in the averages is meaningless. The difference I see in that quote is (90 - 30), which isn't nearly so bad - and the "1" is also rather telling. More importantly, by contrasting the averages with the medians and quartiles we can get something of a picture of what the data look like. Enough to make a guess as to how it would change if we cut the noise by sampling only, say, those with >= 200 reported karma.

(Note: I am at least as shocked by the current downvote of this comment as gwern is by his "20%", and for rather similar reasons.)

Comment author: SilasBarta 05 December 2011 07:13:27PM *  5 points [-]

Percentage point difference in belief probability isn't all that meaningful. 50% to 51% is a lot smaller confidence difference than 98% to 99%.

69.4% probability means 3.27 odds; 41.2% probability means 1.70 odds.

That means that, in the aggregate, survey takers find (3.27/1.70) = 1.924 -> 0.944 more bits of evidence for life somewhere in the universe, compared to somewhere in the galaxy.

Is that unreasonably big or unreasonably small?

EDIT: Oops, I can't convert properly. That should be 2.27 odds and 0.70 odds, an odds ratio of 3.24, or 1.70 more bits.
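
A quick check of the corrected figures (odds = p / (1 - p); bits = log2 of the odds ratio):

    from math import log2

    def odds(p):
        # Convert a probability to odds in favor.
        return p / (1 - p)

    p_universe = 0.694  # mean probability: intelligent life elsewhere in the Universe
    p_galaxy = 0.412    # mean probability: intelligent life elsewhere in our galaxy

    o_universe = odds(p_universe)  # ~2.27
    o_galaxy = odds(p_galaxy)      # ~0.70
    ratio = o_universe / o_galaxy  # ~3.24
    print(ratio, log2(ratio))      # ~3.24, ~1.70 bits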

Comment author: gwern 05 December 2011 07:27:38PM *  0 points [-]

I'm not comfortable with bit odds, especially in this context, so I dunno. How would you frame that in the opposite terms, for lack of existence?

Comment author: [deleted] 04 December 2011 09:41:23PM 9 points [-]

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140.

...

The average person was 37.6% sure their IQ would be above average - underconfident!

Maybe people were expecting the average IQ to turn out to be about the same as in the previous survey, and... (Well, I kind-of was, at least.)

Comment author: gwern 04 December 2011 09:48:27PM 16 points [-]

The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60....The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067.

So the 50% age is 25 and the 50% estimate is 2080? A 25-year-old has a life expectancy of, what, another 50 years? 2011 + 50 = 2061, or 19 years short of the Singularity!

Either people are rather optimistic about future life-extension (despite 'Someone now living will reach age 1000: 23.6'), or the Maes-Garreau Law may not be such a law.

Comment author: RomanDavis 05 December 2011 03:27:19AM *  5 points [-]

Or we have family histories that give us good reason to think we'll outlive the mean, even without drastic increases in the pace of technology. That would describe me. Even without that just living to 25 increases your life expectancy by quite a bit as all those really low numbers play heck with an average.

Or we're overconfident in our life expectancy because of some cognitive bias.

Comment author: gwern 05 December 2011 04:28:51AM 7 points [-]

Even without that just living to 25 increases your life expectancy by quite a bit as all those really low numbers play heck with an average.

I should come clean: I lied when I claimed to be guessing about the 50-years figure; before writing that, I actually consulted one of the usual actuarial tables, which specifies that a 25-year-old can expect only an average of 51.8 more years. (The number was not based on life expectancy from birth.)

Comment author: Desrtopa 05 December 2011 02:22:53PM 3 points [-]

The actuarial table is based on an extrapolation of 2007 mortality rates for the rest of the population's lives. That sounds like a pretty shaky premise.

Comment author: gwern 05 December 2011 04:51:05PM 7 points [-]

Why would you think that? Mortality rates have, in fact, gone up in the past few years for many subpopulations (e.g. some female demographics have seen their absolute life expectancy fall), and before that, decreases in old-adult mortality were tiny:

life extension from age 65 was increased only 6 years over the entire 20th century; from age 75 gains were only 4.2 years, from age 85 only 2.3 years and from age 100 a single year. From age 65 over the most recent 20 years, the gain has been about a year

(And doesn't that imply deceleration? The most recent 20 years are 1/5 of the century; under a linear trend they would account for 1/5 * 6 = 1.2 years of the gain, but the actual gain was only about a year.)

Which is a shakier premise, that trends will continue, or that SENS will be a wild success greater than, say, the War on Cancer?

Comment author: Desrtopa 05 December 2011 06:13:43PM 2 points [-]

I didn't say that lifespans would necessarily become greater in that period, but several decades is time for the rates to change quite a lot. And while public health has become worse in recent decades in a number of ways (obesity epidemic, lower rates of exercise), technologies have been developed which improve the prognoses for a lot of ailments (we may not have cured cancer yet, but many forms are much more treatable than they used to be).

If all the supposed medical discoveries I hear about on a regular basis were all they're cracked up to be, we would already have a generalized cure for cancer by now, and ageless mice if not ageless humans. But even if we assume no 'magic bullet' innovations in the meantime, the benefits of incrementally advancing technology are likely to outpace decreases in health, if only because the population can probably only get so much fatter and more out of shape than it already is before increased proliferation of superstimulus foods and sedentary activities stops making any difference.

Comment author: gwern 05 December 2011 06:50:45PM 2 points [-]

we may not have cured cancer yet, but many forms are much more treatable than they used to be

Which is already built into the quoted longevity increases. (See also the Gompertz curve.)

Comment author: Desrtopa 05 December 2011 06:58:02PM 2 points [-]

Right; my point is that SENS research, which is a fairly new field, doesn't have to be dramatically more successful than cancer research to produce tangible returns in human life expectancy, and that the deceleration in life-expectancy gains is most likely due to a negative health trend which is unlikely to endure over the entire interval.

Comment author: michaelsullivan 05 December 2011 07:28:42PM 3 points [-]

In "the latest possible date a prediction can come true and still remain in the lifetime of the person making it", I would interpret "lifetime" as the longest typical lifetime, rather than an actuarial average. So: we know lots of people who live to 95, so that seems like it's within our possible lifetime. I certainly could live to 95, even if it's less than a 50/50 shot.

One other bit: the average life expectancy is for the entire population, but the average life expectancy of white, college-educated persons earning (or expected to earn) a first- or second-quintile income is quite a bit higher, and a very high proportion of LWers fall into that demographic. I took a quick actuarial survey a few months back that suggested my life expectancy, given my family age/medical history, demographics, etc., was to reach 92 (I'm currently 43).

Comment author: mindspillage 04 December 2011 09:59:12PM 7 points [-]

Are there any significant differences in gender or age (or anything else notable) between the group who chose to keep their responses private and the rest of the respondents?

Comment author: Morendil 04 December 2011 10:42:35PM 7 points [-]

I am officially very surprised at how many that is. Also officially, poorly calibrated at both the 50% (no big deal) and the 90% (ouch, ouch, ouch) confidence levels.

Comment author: Yvain 04 December 2011 10:48:07PM 4 points [-]

You're okay. I asked the question about the number of responses then. When I asked the question, there were only 970 :)

Comment author: Morendil 04 December 2011 11:04:16PM 0 points [-]

Whew!

Comment author: Vladimir_Nesov 05 December 2011 04:27:05PM 2 points [-]

Are the questions for the 2009 survey available somewhere?

Comment author: steven0461 04 December 2011 10:50:30PM 8 points [-]

As with the last survey, it's amazing how casually many people assign probabilities like 1% and 99%. I can understand in a few cases, like the religion questions, and Fermi-based answers to the aliens in the galaxy question. But on the whole it looks like many survey takers are just failing the absolute basics: don't assign extreme probabilities without extreme justification.

Comment author: Eugine_Nier 05 December 2011 04:03:03AM 6 points [-]

On the other hand, conjunctive bias exists. It's not hard to string together enough conjunctions that the probability of the statement should be in an extreme range.

Comment author: steven0461 05 December 2011 04:21:49AM 4 points [-]

Does this describe any of the poll questions?

Comment author: AlexMennen 04 December 2011 11:21:32PM 14 points [-]

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

Suggestion: Show these questions in random order to half of people, and show only one of the questions to the other half, to get data on anchoring.

Comment author: Oligopsony 05 December 2011 12:54:46AM 14 points [-]

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Maybe, but the sort of fresh meat we get is not at all independent of the old guard, so an initial bias could easily reproduce itself.

Comment author: wedrifid 05 December 2011 03:36:10AM 12 points [-]

So we can only prove that 519 people post on Less Wrong.

Where by 'prove' we mean 'somebody implied that they did on an anonymous online survey'. ;)

Comment author: kilobug 05 December 2011 10:53:07AM 10 points [-]

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Comment author: XiXiDu 05 December 2011 11:24:26AM *  10 points [-]

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Nice idea! I am interested in such statistics.

Comment author: Yvain 05 December 2011 03:16:27PM 13 points [-]

You mean, as opposed to that kind of proof where we end up with a Bayesian probability of exactly one? :)

Comment author: wedrifid 05 December 2011 04:00:03AM 9 points [-]

These averages strike me as almost entirely useless! If only half of the people taking the survey are Less Wrong participants, then the extra noise will overwhelm any signal when the probabilities returned by the actual members are near either extreme. Averaging probabilities (as opposed to, say, log-odds) is dubious enough even when not throwing in a whole bunch of randoms!

(So thank you for providing the data!)
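
A toy illustration of the averaging point (made-up numbers, not the survey data): nine respondents at 99% and one at 50% average to about 94% on raw probabilities, but pooling in log-odds space gives a noticeably more extreme consensus.

    # Toy example: averaging raw probabilities vs. averaging in log-odds space.
    from math import log, exp

    def logit(p):
        return log(p / (1 - p))

    def sigmoid(x):
        return 1 / (1 + exp(-x))

    answers = [0.99] * 9 + [0.50]  # nine confident respondents, one at 50%

    mean_prob = sum(answers) / len(answers)                                # ~0.94
    mean_logodds = sigmoid(sum(logit(a) for a in answers) / len(answers))  # ~0.98
    print(mean_prob, mean_logodds)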

Comment author: MarkusRamikin 05 December 2011 10:29:43AM 3 points [-]

The other 72.3% of people who had to find Less Wrong the hard way.

Is it just me, or is there something not quite right about this as an English sentence?

Comment author: pedanterrific 05 December 2011 10:33:46AM 5 points [-]

Could be fixed by adding 'of'

Of the other 72.3% of people who had to find Less Wrong the hard way,

or removing 'who'

The other 72.3% of people had to find Less Wrong the hard way.

Comment author: MarkusRamikin 05 December 2011 10:38:52AM 1 point [-]

Right. For some reason the period instead of comma confused me much more than it should have.

Comment author: [deleted] 05 December 2011 06:48:15PM 1 point [-]

Yeah, which is ‘the hard way’ supposed to be? :-)

Comment author: Desrtopa 05 December 2011 02:12:01PM 11 points [-]

Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)

I'm rather shocked that the numbers on this are so low. It's higher than polls indicate as the degree of acceptance in America, but then, we're dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?

Comment author: kilobug 05 December 2011 02:24:59PM 15 points [-]

I'm also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn't say that 70.7% of LWers believe in "anthropogenic global warming"; it is an average of probabilities. If you look at the quartiles, even the 25% quartile is at p = 55%, meaning that fewer than 25% of LWers give a probability below one half.

It seems to indicate that almost all LWers believe it is true (assign p > 0.5), but many of them do so with low confidence, either because they didn't study the field enough (and therefore refuse to put too much strength in their belief) or because they consider the field too complicated, or not well enough understood, to warrant a very strong probability.

Comment author: Desrtopa 05 December 2011 02:36:23PM 2 points [-]

That's how I interpreted it in the first place; "believe in anthropogenic global warming" is a much more nebulous proposition anyway. But while anthropogenic global warming doesn't yet have the same degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.

Comment author: Oligopsony 05 December 2011 03:33:15PM 3 points [-]

What should astonish about zero familiarity with the data, beyond that there's a scientific consensus?

Comment author: Desrtopa 05 December 2011 05:42:19PM 4 points [-]

I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.

Comment author: thomblake 05 December 2011 04:51:26PM 6 points [-]

astonishingly low levels of familiarity with the data.

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

I think I answered somewhere around 70%; while I've read a bit about it, there are plenty of dissenters and the proposition was a bit vague.

The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.

Comment author: Desrtopa 05 December 2011 05:53:49PM 6 points [-]

It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue.

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large-scale changes in global climate is by no means simply a political question.

If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don't, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.

Comment author: amacfie 05 December 2011 02:21:33PM 13 points [-]

So people just got silly with the IQ field again.

Comment author: [deleted] 05 December 2011 04:36:16PM 3 points [-]

Anyone expecting otherwise was also being silly.

Comment author: Jack 05 December 2011 06:09:12PM 11 points [-]

I'd almost rather see SAT scores at this point.

Comment author: Nornagest 05 December 2011 06:29:51PM *  15 points [-]

That'd be problematic for people outside the US, unfortunately. I don't know the specifics of how most of the various non-US equivalents work, but I expect conversion to bring up issues; the British A-level exams, for example, have a coarse enough granularity that they'd probably taint the results purely on those grounds. Especially if the average IQ around here really is >= 140.

Comment author: Unnamed 05 December 2011 07:20:42PM *  37 points [-]

Strength of membership in the LW community was related to responses for most of the questions. There were three questions related to strength of membership (karma, sequence reading, and time in the community), and since they were all correlated with each other and showed similar patterns, I standardized them and averaged them together into a single measure. Then I checked whether this measure of strength of membership in the LW community was related to answers on each of the other questions, for the 822 respondents (described in this comment) who answered at least one of the probability questions and used percentages rather than decimals (since I didn't want to take the time to recode the answers which were given as decimals).

All effects described below have p < .01 (I also indicate when there is a nonsignificant trend with p < .2). On questions with categories I wasn't that rigorous: if there was a significant effect overall, I just eyeballed the differences and reported which categories have the clearest difference (and I skipped some of the background questions which had tons of different categories and are hard to interpret).
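
For anyone who wants to reproduce this kind of analysis on the published data, here is a rough sketch of the approach described above. The file name, column names, and the use of a Pearson correlation are placeholders and assumptions for illustration, not a description of the actual analysis.

    # Sketch only: column names and the choice of test are placeholders.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_2011.csv")  # hypothetical export of the survey data

    # Standardize the three strength-of-membership questions and average them.
    items = ["karma", "sequences_read", "time_in_community"]
    z = (df[items] - df[items].mean()) / df[items].std()
    df["membership"] = z.mean(axis=1)

    # Relate the composite measure to each probability question.
    for question in ["p_many_worlds", "p_aliens_universe", "p_global_warming"]:
        valid = df[["membership", question]].dropna()
        r, p = stats.pearsonr(valid["membership"], valid[question])
        print(f"{question}: r = {r:.2f}, p = {p:.3f}")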

Compared to those with a less strong membership in the LW community, those with a strong tie to the community are:

Background:

  • Gender - no difference
  • Age - no difference
  • Relationship Status - no difference
  • Sexual Orientation - no difference
  • Relationship Style - less likely to prefer monogamous, more likely to prefer polyamorous or to have no preference
  • Political Views - less likely to be socialist, more likely to be libertarian (but this is driven by the length of time in the community, which may reflect changing demographics - see my reply to this comment)
  • Religious Views - more likely to be atheist & not spiritual, especially less likely to be agnostic
  • Family Religion - no difference
  • Moral Views - more likely to be consequentialist
  • IQ - higher

Probabilities:

  • Many Worlds - higher
  • Aliens in the universe - lower (edited: I had mistakenly reversed the two aliens questions)
  • Aliens in our galaxy - trend towards lower (p=.04)
  • Supernatural - lower
  • God - lower
  • Religion - trend towards lower (p=.11, and this is statistically significant with a different analysis)
  • Cryonics - lower
  • Anti-Agathics - trend towards higher (p=.13) (this was the one question with a significant non-monotonic relationship: those with a moderately strong tie to the community had the highest probability estimate, while those with weak or strong ties had lower estimates)
  • Simulation - trend towards higher (p=.20)
  • Global Warming - higher
  • No Catastrophe - lower (i.e., think it is less likely that we will make it to 2100 without a catastrophe, i.e. think the chances of xrisk are higher)

Other Questions:

  • Singularity - sooner (this is statistically significant after truncating the outliers), and more likely to give an estimate rather than leave it blank
  • Type of XRisk - more likely to think that Unfriendly AI is the most likely XRisk
  • Cryonics Status - More likely to be signed up or to be considering it, less likely to be not planning to or to not have thought about it