A big thank you to the 1090 people who took the second Less Wrong Census/Survey.

Does this mean there are 1090 people who post on Less Wrong? Not necessarily. 165 people said they had zero karma, and 406 people skipped the karma question - I assume a good number of the skippers were people with zero karma or without accounts. So we can only prove that 519 people post on Less Wrong. Which is still a lot of people.

I apologize for failing to ask who did or did not have an LW account. There were a number of such oversights, so I'm collecting them all in a comment on this post to keep them from cluttering the survey results. Please discuss changes you want for next year's survey there.

Of our 1090 respondents, 972 (89%) were male, 92 (8.4%) female, 7 (.6%) transsexual, and 19 gave various other answers or objected to the question. As abysmally male-dominated as these results are, the percentage of women has tripled since the last survey in mid-2009.

We're also a little more diverse than we were in 2009; our percentage of non-whites has risen from 6% to just below 10%. Along with 944 whites (86%) we include 38 Hispanics (3.5%), 31 East Asians (2.8%), 26 Indian Asians (2.4%), and 4 blacks (.4%).

Age ranged from a supposed minimum of 1 (they start making rationalists early these days?) to a more plausible minimum of 14, to a maximum of 77. The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60. The average Less Wronger has aged about one week since spring 2009 - so clearly all those anti-agathics we're taking are working!

In order of frequency, we include 366 computer scientists (32.6%), 174 people in the hard sciences (16%), 80 people in finance (7.3%), 63 people in the social sciences (5.8%), 43 people involved in AI (3.9%), 39 philosophers (3.6%), 15 mathematicians (1.5%), 15 people involved in law (1.5%), 14 statisticians (1.3%), and 5 people in medicine (.5%).

48 of us (4.4%) teach in academia, 470 (43.1%) are students, 417 (38.3%) do for-profit work, 34 (3.1%) do non-profit work, 41 (3.8%) work for the government, and 72 (6.6%) are unemployed.

418 people (38.3%) have yet to receive any degrees, 400 (36.7%) have a Bachelor's or equivalent, 175 (16.1%) have a Master's or equivalent, 65 people (6%) have a Ph.D, and 19 people (1.7%) have a professional degree such as an MD or JD.

345 people (31.7%) are single and looking, 250 (22.9%) are single but not looking, 286 (26.2%) are in a relationship, and 201 (18.4%) are married. There are striking differences between men and women: women are more likely to be in a relationship and less likely to be single and looking (33% of men vs. 19% of women). All of these numbers look a lot like the ones from 2009.

27 people (2.5%) are asexual, 119 (10.9%) are bisexual, 24 (2.2%) are homosexual, and 902 (82.8%) are heterosexual.

625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn't really know. These numbers were similar between men and women.

The most popular political view, at least according to the much-maligned categories on the survey, was liberalism, with 376 adherents and 34.5% of the vote. Libertarianism followed at 352 (32.3%), then socialism at 290 (26.6%), conservatism at 30 (2.8%) and communism at 5 (.5%).

680 people (62.4%) were consequentialist, 152 (13.9%) virtue ethicist, 49 (4.5%) deontologist, and 145 (13.3%) did not believe in morality.

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual, 97 (8.9%) were agnostic, 30 (2.8%) were deist or pantheist or something along those lines, and 39 people (3.5%) described themselves as theists (20 committed plus 19 lukewarm).

425 people (39%) grew up in some flavor of nontheist family, compared to 297 (27.2%) in committed theist families and 356 (32.7%) in lukewarm theist families. Common family religious backgrounds included Protestantism with 451 people (41.4%), Catholicism with 289 (26.5%), Judaism with 102 (9.4%), Hinduism with 20 (1.8%), Mormonism with 17 (1.6%), and traditional Chinese religion with 13 (1.2%).

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140. Numbers ranged from 110 all the way up to 204 (for reference, Marilyn vos Savant, who holds the Guinness World Record for highest adult IQ ever recorded, has an IQ of 185).

89 people (8.2%) have never looked at the Sequences; a further 234 (21.5%) have only given them a quick glance. 170 people (15.6%) have read about 25% of the Sequences, 169 (15.5%) about 50%, 167 (15.3%) about 75%, and 253 people (23.2%) said they've read almost all of them. This last number is actually lower than the 302 people (27.7% of us) who have been here since the Overcoming Bias days, when the Sequences were still being written.

The other 72.3% of people had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 (23.8%) by blogs, 196 (18%) by Harry Potter and the Methods of Rationality, 96 (8.8%) by a search engine, and only one person (.1%) by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three-way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

Of those people who chose to list their karma, the mean value was 658 and the median was 40 (these numbers are pretty meaningless, because some people with zero karma put that down and other people did not).

Of those people willing to admit the time they spent on Less Wrong, after eliminating one outlier (sorry, but you don't spend 40579 minutes daily on LW; even I don't spend that long) the mean was 21 minutes and the median was 15 minutes. There were at least a dozen people in the two to three hour range, and the winner (well, except the 40579 guy) was someone who says he spends five hours a day.

I'm going to give all the probabilities in the form [mean, (25%-quartile, 50%-quartile/median, 75%-quartile)]. There may have been some problems here revolving around people who gave numbers like .01: I didn't know whether they meant 1% or .01%. Excel helpfully rounded all numbers down to two decimal places for me, and after a while I decided not to make it stop: unless I wanted to do geometric means, I can't do justice to really small gradations in probability.

The Many Worlds hypothesis is true: 56.5, (30, 65, 80)
There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)
The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)
God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)
Average person cryonically frozen today will be successfully revived: 21.1, (1, 10, 30)
Someone now living will reach age 1000: 23.6, (1, 10, 30)
We are living in a simulation: 19, (.23, 5, 33)
Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)
Humanity will make it to 2100 without a catastrophe killing >90% of us: 67.6, (50, 80, 90)
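(A side note on the geometric-means aside above: with skewed answers, an arithmetic mean is dominated by the largest values, while a geometric mean preserves the order-of-magnitude information in very small probabilities. A minimal sketch with hypothetical answers to a single question:

```python
import numpy as np

answers = np.array([0.0001, 0.001, 0.01, 0.30])  # hypothetical answers to one question

print(answers.mean())                  # arithmetic mean ~ 0.078, dominated by the 0.30
print(np.exp(np.log(answers).mean()))  # geometric mean ~ 0.0042, respects the tiny answers
```

This is why rounding to two decimal places destroys exactly the information a geometric mean would have used.)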

There were a few significant demographic differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy. Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at the 5% level; could be a fluke). People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader. Unfriendly AI followed with 180 votes (16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes (13.3%), economic/political collapse with 134 votes (12.3%), and asteroids and nanotech bringing up the rear with 46 votes each (4.2%).

The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067. There was some discussion about whether people might have been anchored by the previous mention of 2100 in the x-risk question. I changed the order after 104 responses to prevent this; a t-test found no significant difference between the responses before and after the change (in fact, the trend was in the wrong direction).
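(The anchoring check amounts to a two-sample t-test on the Singularity-year guesses before and after the question reorder. A minimal sketch with hypothetical guesses, using Welch's variant, which doesn't assume equal variances:

```python
from scipy import stats

# Hypothetical Singularity-year guesses from the two orderings of the survey.
before = [2060, 2080, 2100, 2150, 2045, 2090]  # first 104 responses (original order)
after = [2050, 2075, 2120, 2080, 2200, 2070]   # responses after the reorder

t, p = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # a large p is consistent with no anchoring effect
```
)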

Only 49 people (4.5%) have never considered cryonics or don't know what it is. Of the remainder, 388 (35.6% of all respondents) reject it, 583 (53.5%) are considering it, and 47 (4.3%) are already signed up for it. That's more than double the percentage signed up in 2009.

231 people (23.4% of respondents) have attended a Less Wrong meetup.

The average person was 37.6% sure their IQ would be above average - underconfident! Imagine that! (quartiles were 10, 40, 60). The mean was 54.5% for people whose IQs really were above average, and 29.7% for people whose IQs really were below average. There was a correlation of .479 (significant at less than 1% level) between IQ and confidence in high IQ.
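(For concreteness, a sketch of the correlation reported above, with hypothetical per-respondent numbers:

```python
from scipy import stats

iq = [142, 128, 155, 121, 138]      # hypothetical self-reported IQs
confidence = [60, 30, 90, 10, 45]   # hypothetical % confidence that own IQ is above average

r, p = stats.pearsonr(iq, confidence)  # Pearson correlation and its p-value
print(f"r = {r:.3f}, p = {p:.4f}")
```
)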

Isaac Newton published his Principia Mathematica in 1687. Although people guessed dates as early as 1250 and as late as 1960, the mean was...1687 (quartiles were 1650, 1680, 1720). This marks the second consecutive year that the average answer to these difficult historical questions has been exactly right (to be fair, last time it was the median that was exactly right and the mean was all of eight months off). Let no one ever say that the wisdom of crowds is not a powerful tool.

The average person was 34.3% confident in their answer, but 41.9% of people got the question right (again with the underconfidence!). There was a highly significant correlation of r = -.24 between confidence and number of years error.

This graph may take some work to read. The x-axis is confidence. The y-axis is what percent of people were correct at that confidence level. The red line you recognize as perfect calibration. The thick green line is your results from the Newton problem. The black line is results from the general population I got from a different calibration experiment tested on 50 random trivia questions; take the intercomparability of the two with a grain of salt.

As you can see, Less Wrong does significantly better than the general population. However, there are a few areas of failure. First is that, as usual, people who put zero and one hundred percent had nonzero chances of getting the question right or wrong: 16.7% of people who put "0" were right, and 28.6% of people who put "100" were wrong (interestingly, people who put 100 did worse than the average of everyone else in the 90-99 bracket, of whom only 12.2% erred). Second of all, the line is pretty horizontal from zero to fifty or so. People who thought they had a >50% chance of being right had excellent calibration, but people who gave themselves a low chance of being right were poorly calibrated. In particular, I was surprised to see so many people put numbers like "0". If you're pretty sure Newton lived after the birth of Christ, but before the present day, that alone gives you a 1% chance of randomly picking the correct 20-year interval.
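(For readers curious how such a calibration curve is built: bucket the stated confidences, then compare each bucket's average confidence to its fraction of correct answers. A minimal sketch with hypothetical data:

```python
import numpy as np

confidence = np.array([0, 10, 30, 50, 50, 75, 90, 100])  # hypothetical stated confidence (%)
correct = np.array([1, 0, 0, 1, 0, 1, 1, 1])             # 1 = answer within 20 years of 1687

bins = np.arange(0, 101, 25)           # 25-point-wide confidence buckets
idx = np.digitize(confidence, bins)
for i in np.unique(idx):
    mask = idx == i
    print(f"mean confidence {confidence[mask].mean():5.1f}% -> "
          f"{correct[mask].mean():.0%} correct")
```

Perfect calibration would put every bucket's fraction correct equal to its mean confidence.)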

160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.

513 comments

People who believed in high existential risk were ... more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Aliens existing but not yet colonizing multiple systems or broadcasting heavily is the response consistent with the belief that a Great Filter lies in front of us.

Strength of membership in the LW community was related to responses for most of the questions. There were 3 questions related to strength of membership: karma, sequence reading, and time in the community, and since they were all correlated with each other and showed similar patterns I standardized them and averaged them together into a single measure. Then I checked if this measure of strength in membership in the LW community was related to answers on each of the other questions, for the 822 respondents (described in this comment) who answered at least one of the probability questions and used percentages rather than decimals (since I didn't want to take the time to recode the answers which were given as decimals).
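(A minimal sketch of the standardize-and-average step described above, with hypothetical per-respondent data: z-score each of the three measures, then average them into one composite score.

```python
import numpy as np

# Hypothetical per-respondent values for the three membership questions.
karma = np.array([0, 50, 1200, 10, 300], dtype=float)
sequences_read = np.array([0.25, 0.50, 1.00, 0.25, 0.75])    # fraction of Sequences read
years_in_community = np.array([0.5, 1, 4, 1, 2], dtype=float)

def zscore(x):
    # Standardize to mean 0, standard deviation 1.
    return (x - x.mean()) / x.std()

# Average the standardized measures into a single strength-of-membership score.
membership_strength = (zscore(karma) + zscore(sequences_read)
                       + zscore(years_in_community)) / 3
print(membership_strength)
```
)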

All effects described below have p < .01 (I also indicate when there is a nonsignificant trend with p<.2). On questions with categories I wasn't that rigorous - if there was a significant effect overall I just eyeballed the differences and reported which categories have the clearest difference (and I skipped some of the background questions which had tons of different categories and are hard to interpret).

Compared to those with a less strong membership in the LW... (read more)

Political Views - less likely to be socialist, more likely to be libertarian

I looked at this one a little more closely, and this difference in political views is driven almost entirely by the "time in community" measure of strength of membership in the LW community; it's not even statistically significant with the other two. I'd guess that is because LW started out on Overcoming Bias, which is a relatively libertarian blog, so the old timers tend to share those views. We've also probably added more non-Americans over time, who are more likely to be socialist.

All of the other relationships in the above post hold up when we replace the original measure of membership strength with one that is only based on the two variables of karma & sequence reading, but this one does not.

8Normal_Anomaly12y
So long-time participants were less likely to believe that cryonics would work for them but more likely to sign up for it? Interesting. This could be driven by any of: fluke, greater rationality, greater age&income, less akrasia, more willingness to take long-shot bets based on shutting up and multiplying.
5Unnamed12y
I looked into this a little more, and it looks like those who are strongly tied to the LW community are less likely to give high answers to p(cryonics) (p>50%), but not any more or less likely to give low answers (p<10%). That reduction in high answers could be a sign of greater rationality - less affect-heuristic-driven irrational exuberance about the prospects for cryonics - or just more knowledge about the topic. But I'm surprised that there's no change in the frequency of low answers.

There is a similar pattern in the relationship between cryonics status and p(cryonics). Those who are signed up for cryonics don't give a higher p(cryonics) on average than those who are not signed up, but they are less likely to give a probability under 10%. The group with the highest average p(cryonics) is those who aren't signed up but are considering it, and that's the group that's most likely to give a probability over 50%.

Here are the results for p(cryonics) broken down by cryonics status, showing what percent of each group gave p(cryonics)<.1, what percent gave p(cryonics)>.5, and what the average p(cryonics) is for each group. (I'm expressing p(cryonics) here as probabilities from 0-1 because I think it's easier to follow that way, since I'm giving the percent of people in each group.)

Never thought about it / don't understand (n=26): 58% give p<.1, 8% give p>.5, mean p=.17
No, and not planning to (n=289): 60% give p<.1, 6% give p>.5, mean p=.14
No, but considering it (n=444): 38% give p<.1, 18% give p>.5, mean p=.27
Yes - signed up or just finishing up paperwork (n=36): 39% give p<.1, 8% give p>.5, mean p=.21
Overall: 47% give p<.1, 13% give p>.5, mean p=.22
4ewbrownv12y
The existential risk questions are a confounding factor here - if you think p(cryonics works) is 80%, but p(xrisk ends civilization) is 50%, that pulls down your p(successful revival) considerably.
3Unnamed12y
I wondered about that, but p(cryonics) and p(xrisk) are actually uncorrelated, and the pattern of results for p(cryonics) remains the same when controlling statistically for p(xrisk).
1Randolf12y
I think the main reason for this is that these persons have simply spent more time thinking about cryonics compared to other people. By spending time on this forum they have had a good chance of running into a discussion which has inspired them to read about it and sign up. Or perhaps people who are interested in cryonics are also interested in other topics LW has to offer, and hence stay in this place. In either case, it follows that they are probably also more knowledgeable about cryonics and hence understand what cryotechnology can realistically offer currently or in the near future. In addition, these long-time guys might be more open to things such as cryonics on an ethical level.
6gwern12y
I don't think this is obvious at all. If you had asked me in advance which of the following 4 possible sign-pairs would be true with increasing time spent thinking about cryonics:

1. less credence, less sign-ups
2. less credence, more sign-ups
3. more credence, more sign-ups
4. more credence, less sign-ups

I would have said 'obviously #3, since everyone starts from "that won't ever work" and moves up from there, and then one is that much more likely to sign up'. The actual outcome, #2, would be the one I would expect least. (Hence, I am strongly suspicious of anyone claiming to expect or predict it as suffering from hindsight bias.)
5CarlShulman12y
It is noted above that those with strong community attachment think that there is more risk of catastrophe. If human civilization collapses or is destroyed, then cryonics patients and facilities will also be destroyed.
0brianm12y
I would expect the result to be a more accurate estimation of the success, combined with more sign-ups. 2 is an example of this if, in fact, the more accurate assessment is lower than the assessment of someone with a different level of information. I don't think it's true that everyone starts from "that won't ever work" - we know some people think it might work, and we may be inclined to some wishful thinking or susceptibility to hype, inflating our likelihood above the conclusion we'd reach if we invested the time to consider the issue in more depth. It's also worth noting that we're not comparing the general public to those who've seriously considered signing up, but the lesswrong population, who are probably a lot more exposed to the idea of cryonics. I'd agree that it's not what I would have predicted in advance (having no more expectation for the likelihood assigned to go up than down with more research), but it would be predictable for someone proceeding from the premise that the lesswrong community overestimates the likelihood of cryonics success compared to those who have done more research.
0[anonymous]12y
Yeah, I think you have a point. However, maybe the following explanation would be better: currently cryonics isn't likely to work. People who sign up for cryonics do research on the subject before or after signing up, and hence become aware that cryonics isn't likely to work.

Running list of changes for next year's survey:

  1. Ask who's a poster versus a lurker!
  2. A non-write-in "Other" for most questions
  3. Replace "gender" with "sex" to avoid complaints/philosophizing.
  4. Very very clear instructions to use percent probabilities and not decimal probabilities
  5. Singularity year question should have explicit instructions for people who don't believe in singularity
  6. Separate out "relationship status" and "looking for new relationships" questions to account for polys
  7. Clarify that research is allowed on the probability questions
  8. Clarify possible destruction of humanity in cryonics/antiagathics questions.
  9. What does it mean for aliens to "exist in the universe"? Light cone?
  10. Make sure people write down "0" if they have 0 karma.
  11. Add "want to sign up, but not available" as cryonics option.
  12. Birth order.
  13. Have children?
  14. Country of origin?
  15. Consider asking about SAT scores for Americans to have something to correlate IQs with.
  16. Consider changing morality to PhilPapers version.

One about nationality (and/or native language)? I guess that would be much more relevant than e.g. birth order.

Regarding #4, you could just write a % symbol to the right of each input box.

BTW, I'd also disallow 0 and 100, and give the option of giving log-odds instead of probability (and maybe encourage doing that for probabilities below 1% or above 99%). Someone's “epsilon” might be 10^-4 whereas someone else's might be 10^-30.
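(For concreteness, a small sketch of the proposed log-odds format, in base 10 so a probability of 10^-30 becomes roughly -30 rather than a string of thirty zeros; the function names here are mine, not from the survey:

```python
import math

def prob_to_logodds(p):
    """Base-10 log-odds: 0 at p = .5, about +2 at 99%, about -4 at 0.01%."""
    return math.log10(p / (1 - p))

def logodds_to_prob(l):
    # Inverse transform back to a probability.
    return 1 / (1 + 10 ** -l)

print(prob_to_logodds(1e-4))   # ~ -4: easy to type, unlike 0.0001
print(prob_to_logodds(1e-30))  # ~ -30: clearly distinct from the line above
```
)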

7brilee12y
I second that. See my post at http://lesswrong.com/r/discussion/lw/8lr/logodds_or_logits/ for a concise summary. Getting the LW survey to use log-odds would go a long way towards getting LW to start using log-odds in normal conversation.
5Luke_A_Somers12y
People will mess up the log-odds, though. Non-log odds seem safer. Two fields instead of one, but it seems cleaner than any of the other alternatives.
5A1987dM12y
The point is not having to type lots of zeros (or of nines) with extreme probabilities (so that people won't weasel out and use ‘epsilon’); having to type 1:999999999999999 is no improvement over having to type 0.000000000000001.
2Kaj_Sotala12y
Is such precision meaningful? At least for me personally, 0.1% is about as low as I can meaningfully go - I can't really discriminate between me having an estimate of 0.1%, 0.001%, or 0.0000000000001%.

I expect this is incorrect.

Specifically, I would guess that you can distinguish the strength of your belief that a lottery ticket you might purchase will win the jackpot from one in a thousand (a.k.a. 0.1%). Am I mistaken?

3MBlume12y
That's a very special case -- in the case of the lottery, it is actually possible-in-principle to enumerate BIG_NUMBER equally likely mutually-exclusive outcomes. Same with getting the works of Shakespeare out of your random number generator. The things under discussion don't have that quality.
2A1987dM12y
I agree in principle, but on the other hand the questions on the survey are nowhere as easy as "what's the probability of winning such-and-such lottery".
2Kaj_Sotala12y
You're right, good point.
0Emile12y
Just type 1:1e15 (or 1e-15 if you don't want odds ratios).
2[anonymous]12y
I'd force log odds, as they are the more natural representation and much less susceptible to irrational certainty and nonsense answers. Someone has to actually try and comprehend what they are doing to troll logits; -INF seems a lot more out to lunch than p = 0. I'd also like someone to go through the math to figure out how to correctly take the mean of probability estimates. I see no obvious reason why you can simply average [0, 1] probabilities. The correct method would probably involve cooking up a hypothetical Bayesian judge that takes everyone's estimates as evidence. Edit: since logits can be a bit unintuitive, I'd give a few calibration examples like odds of rolling a 6 on a die, odds of winning some lottery, fair odds, odds of surviving a car crash, etc.
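(One simple candidate for the "correct mean" asked about here is averaging in log-odds space, which is equivalent to taking a geometric mean of the odds; this is not the Bayesian-judge construction the commenter describes, just the most common pooling rule. A sketch with hypothetical numbers, and it indeed requires every estimate to be strictly between 0 and 1:

```python
import numpy as np

def pool_mean_logodds(ps):
    """Average estimates in log-odds space (a geometric mean of odds).
    Every p must be strictly between 0 and 1, i.e. finite log-odds."""
    p = np.asarray(ps, dtype=float)
    mean_logit = np.mean(np.log(p / (1 - p)))
    return 1 / (1 + np.exp(-mean_logit))

estimates = [0.01, 0.05, 0.20, 0.90]   # hypothetical answers to one question
print(np.mean(estimates))              # plain average: ~0.29
print(pool_mean_logodds(estimates))    # log-odds average: ~0.16
```
)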
2A1987dM12y
Personally, for probabilities roughly between 20% and 80% I find probabilities (or non-log odds) easier to understand than log-odds. Yeah. One of the reasons why I proposed this is the median answer of 0 in several probability questions. (I'd also require a rationale in order to enter probabilities more extreme than 1%/99%.) I'd go with the average of log-odds, but this requires all of them to be finite...
0dlthomas12y
Weighting, in part, by the calibration questions?
2[anonymous]12y
I dunno how you would weight it. I think you'd want to have a maxentropy 'fair' judge at least for comparison. Calibration questions are probably the least controversial way of weighting. Compare to, say, trying to weight using karma. This might be an interesting thing to develop. A voting system backed up by solid bayes-math could be useful for more than just LW surveys.
0dlthomas12y
It might be interesting to see what results are produced by several weighting approaches.
1[anonymous]12y
yeah. that's what I was getting at with the maxentropy judge. On further thought, I really should look into figuring this out. Maybe I'll do some work on it and post a discussion post. This could be a great group rationality tool.

Publish draft questions in advance, so we can spot issues before the survey goes live.

We should ask if people participated in the previous surveys.

I'd love a specific question on moral realism instead of leaving it as part of the normative ethics question. I'd also like to know about psychiatric diagnoses (autism spectrum, ADHD, depression, whatever else seems relevant)-- perhaps automatically remove those answers from a spreadsheet for privacy reasons.

I don't care about moral realism, but psychiatric diagnoses (and whether they're self-diagnosed or formally diagnosed) would be interesting.

You are aware that if you ask people for their sex but not their gender, and say something like "we have more women now", you will be philosophized into a pulp, right?

6wedrifid12y
Only if people here are less interested in applying probability theory than they are in philosophizing about gender... Oh.
4FiftyTwo12y
Why not ask for both?
4Emile12y
Because the two are so highly correlated that having both would give us almost no extra information. One goal of the survey should be to maximize the useful-info-extracted / time-spent-on-it ratio, hence also the avoidance of write-ins for many questions (which make people spend more time on the survey, to get results that are less exploitable) (a write-in for gender works because people are less likely to write a manifesto for that than for politics).
2MixedNuts12y
Because having a "gender" question causes complaints and philosophizing, which Yvain wants to avoid.
0ShardPhoenix12y
How about, "It's highly likely that we have more women now"?

Suggestion: "Which of the following did you change your mind about after reading the sequences? (check all that apply)"

  • [] Religion
  • [] Cryonics
  • [] Politics
  • [] Nothing
  • [] et cetera.

Many other things could be listed here.

2TheOtherDave12y
I'm curious, what would you do with the results of such a question? For my part, I suspect I would merely stare at them and be unsure what to make of a statistical result that aggregates "No, I already held the belief that the sequences attempted to convince me of" with "No, I held a contrary belief and the sequences failed to convince me otherwise." (That it also aggregates "Yes, I held a contrary belief and the sequences convinced me otherwise." and "Yes, I initially held the belief that the sequences attempted to convince me of, and the sequences convinced me otherwise" is less of a concern, since I expect the latter group to be pretty small.)
3lavalamp12y
Originally I was going to suggest asking, "what were your religious beliefs before reading the sequences?"-- and then I succumbed to the programmer's urge to solve the general problem. However, I guess measuring how effective the sequences are at causing people to change their mind is something that a LW survey can't do, anyway (you'd need to also ask people who read the sequences but didn't stick around to accurately answer that). Mainly I was curious how many deconversions the sequences caused or hastened.
0taryneast12y
Ok, so use radio-buttons:
"believed before, still believe"
"believed before, changed my mind now"
"didn't believe before, changed my mind now"
"never believed, still don't"
1TheOtherDave12y
...and "believed something before, believe something different now"
1Alejandro112y
I think the question is too vague as formulated. Does any probability update, no matter how small, count as changing your mind? But if you ask for precise probability changes, then the answers will likely be nonsense because most people (even most LWers, I'd guess) don't keep track of numeric probabilities, just think "oh, this argument makes X a bit more believable" and such.

When asking for race/ethnicity, you should really drop the standard American classification into White - Hispanic - Black - Indian - Asian - Other. From a non-American perspective this looks weird, especially the "White Hispanic" category. A Spaniard is White Hispanic, or just White? If only White, how does the race change when one moves to another continent? And if White Hispanic, why not have also "Italic" or "Scandinavic" or "Arabic" or whatever other peninsula-ic races?

Since I believe the question was intended to determine the cultural background of LW readers, I am surprised that there was no question about country of origin, which would be more informative. There is certainly greater cultural difference between e.g. Turks (White, non-Hispanic I suppose) and White non-Hispanic Americans than between the latter and their Hispanic compatriots.

Also, making a statistic based on nationalities could help people determine whether there is a chance for a meetup in their country. And it would be nice to know whether LW has regular readers in Liechtenstein, of course.

6[anonymous]12y
I was also...well, not surprised per se, but certainly annoyed to see that "Native American" in any form wasn't even an option. One could construe that as revealing, I suppose. I don't know how relevant the question actually is, but if we want to track ancestry and racial, ethnic or cultural group affiliation, the following scheme is pretty hard to mess up:

Country of origin:
Country of residence:
Primary Language:
Native Language (if different):
Heritage language (if different):
Note: A heritage language is one spoken by your family or identity group.

Heritage group:

Diaspora: Means your primary heritage and identity group moved to the country you live in within historical or living memory, as colonists, slaves, workers or settlers.
-European diaspora ("white" North America, Australia, New Zealand, South Africa, etc)
-African diaspora ("black" in the US, West Indian, more recent African emigrant groups; also North African diaspora)
-Asian diaspora (includes Turkic, Arab, Persian, Central and South Asian, Siberian native)

Indigenous: Means your primary heritage and identity group was resident to the following location prior to 1400, OR prior to the arrival of the majority culture in antiquity (for example: Ainu, Basque, Taiwanese native, etc):
-Africa
-Asia
-Europe
-North America (between Panama and Canada, also includes Greenland and the Caribbean)
-Oceania (including Australia)
-South America

Mixed: Select two or more:
-European Diaspora
-African Diaspora
-Asian Diaspora
-African Indigenous
-American Indigenous
-Asian Indigenous
-European Indigenous
-Oceania Indigenous

What the US census calls "Non-white Hispanic" would be marked as "Mixed" > "European Diaspora" + "American Indigenous", with Spanish as either a Native or Heritage language. Someone who identifies as (say) Mexican-derived but doesn't speak Spanish at all would be impossible to tell from someone who was Euro-American and Cherokee who doesn't speak Cherokee, but no system is perfect...
4wedrifid12y
Put two spaces after a line if you want a linebreak.
6[anonymous]12y
Most LessWrong posters and readers are American, perhaps even the vast majority (I am not). Hispanic Americans, white Americans, and black Americans differ culturally and socio-economically, not just on average but in systemic ways, regardless of whether the person in question defines himself as Irish American, Kenyan American, white American or just plain American. From the US we have robust sociological data that allows us to compare LWers based on this information. The same is true of race in Latin America, parts of Africa and, more recently, Western Europe. Nationality is not the same thing as racial or even ethnic identity in multicultural societies. Considering every now and then people bring up a desire to lower barriers to entry for "minorities" (whatever that means in a global forum), such stats are useful for those who argue on such issues and also for ascertaining certain biases. Adding a nationality and/or citizenship question would probably be useful though.
2prase12y
I have not said that it is. I was objecting to the arbitrariness of a "Hispanic race": I believe that the difference between Hispanic White Americans and non-Hispanic White Americans is not significantly higher than the difference between either of those groups and non-Americans, and that the number of non-Americans among LW users would be higher than the 3.8% reported for Hispanics. I am not sure what exact sociological data we may extract from the survey, but in any case, comparison to standard American sociological datasets will be problematic because the LW data are contaminated by the presence of non-Americans, and there is no way to say by how much, because people were not asked about that.
-1[anonymous]12y
I didn't mean to imply you did, I just wanted to emphasise that data is gained from the racial breakdown. Especially in the American context, race sits at the strange junction of appearance, class, heritage, ethnicity, religion and subculture, and it's hard to capture it by any of these metrics. Once we have data on how many are American (and this is something we really should have) this will be easier to say.
3[anonymous]12y
Because we don't have as much useful sociological data on this. Obviously we can start collecting data on any of the proposed categories, but if we're the only ones, it won't much help us figure out how LW differs from what one might expect of a group that fits its demographic profile. Much of the difference in the example of Turks is captured by the Muslim family background question.
0prase12y
Much, but not most. Religion is easy to ascertain, but there are other cultural differences which are more difficult to classify, but still are significant*. Substitute Turks with Egyptian Christians and the example will still work. (And not because of theological differences between Coptic and Protestant Christianity.) *) Among the culturally determined attributes are: political opinion, musical taste and general aesthetic preferences, favourite food, familiarity with different literature and films, ways of relaxation, knowledge of geography and history, language(s), moral code. Most of these things are independent of religion or only very indirectly influenced by it.
3NancyLebovitz12y
Offer a text field for race. You'll get some distances, not to mention "human" or "other", but you could always use that to find out whether having a contrary streak about race/ethnicity correlates with anything. If you want people to estimate whether a meetup could be worth it, I recommend location rather than nationality-- some nations are big enough that just knowing nationality isn't useful.

Suggestion: add "cryocrastinating" as a cryonics option.

9Jayson_Virissimo12y
I think using your stipulative definition of "supernatural" was a bad move. I would be very surprised if I asked a theologian to define "supernatural" and they replied "ontologically basic mental entities". Even as a rational reconstruction of their reply, it would be quite a stretch. Using such specific definitions of contentious concepts isn't a good idea, if you want to know what proportion of Less Wrongers self-identify as atheist/agnostic/deist/theist/polytheist.
1TheOtherDave12y
OTOH, using a vague definition isn't a good idea either, if you want to know something about what Less Wrongers believe about the world. I had no problem with the question as worded; it was polling about LWers confidence in a specific belief, using terms from the LW Sequences. That the particular belief is irrelevant to what people who self-identify as various groups consider important about that identification is important to remember, but not in and of itself a problem with the question. But, yeah... if we want to know what proportion of LWers self-identify as (e.g.) atheist, that question won't tell us.
7selylindi12y
Yet another alternate, culture-neutral way of asking about politics:
Q: How involved are you in your region's politics compared to other people in your region?
A: [choose one]
() I'm among the most involved
() I'm more involved than average
() I'm about as involved as average
() I'm less involved than average
() I'm among the least involved
5FiftyTwo12y
Requires people to self assess next to a cultural baseline, and self assessments of this sort are notoriously inaccurate. (I predict everyone will think they have above-average involvement).
4Prismattic12y
Within a US-specific context, I would eschew these comparisons to a notional average and use the following levels of participation:
0 = indifferent to politics and ignorant of current events
1 = attentive to current events, but does not vote
2 = votes in presidential elections, but irregularly otherwise
3 = always votes
4 = always votes and contributes to political causes
5 = always votes, contributes, and engages in political activism during election seasons
6 = always votes, contributes, and engages in political activism both during and between election seasons
7 = runs for public office
I suspect that the average US citizen of voting age is a 2, but I don't have data to back that up, and I am not motivated to research it. I am a 4, so I do indeed think that I am above average. Those categories could probably be modified pretty easily to match a parliamentary system by leaving out the reference to presidential elections and just having "votes irregularly" and "always votes". Editing to add -- for mandatory voting jurisdictions, include a caveat that "spoiled ballot = did not vote"
4TheOtherDave12y
Personally, I'm not sure I necessarily consider the person who runs for public office to be at a higher level of participation than the person who works for them.
2Nornagest12y
I agree denotationally with that estimate, but I think you're putting too much emphasis on voting in at least the 0-4 range. Elections (in the US) only come up once or exceptionally twice a year, after all. If you're looking for an estimate of politics' significance to a person's overall life, I think you'd be better off measuring degree of engagement with current events and involvement in political groups -- the latter meaning not only directed activism, but also political blogs, non-activist societies with a partisan slant, and the like. For example: do you now, or have you ever, owned a political bumper sticker?
0TimS12y
Maybe: "How frequently do you visit websites/read media that have an explicit political slant?"
2A1987dM12y
There might be people who don't always (or even usually) vote yet they contribute to political causes/engage in political activism, for certain values of “political” at least.
0thomblake12y
I had not before encountered this form of protest. If I were living in a place with mandatory voting and anonymous ballots, I would almost surely write my name on the ballot to spoil it.
2wedrifid12y
I do and I do. :)
0A1987dM12y
I have never actually spoiled a ballot in a municipality-or-higher-level election (though voting for a list with hardly any chance whatsoever of passing the election threshold has a very similar effect), but in high school I did vote for Homer Simpson as a students' representative, and there were lots of similarly hilarious votes, including (IIRC) ones for God, Osama bin Laden, and Silvio Berlusconi.
2wedrifid12y
I'd actually have guessed an average of below average.
2thomblake12y
Bad prediction. While it's hard to say since so few people around here actually vote, my involvement in politics is close enough to 0 that I'd be very surprised if I was more involved than average.
2DanArmak12y
I have exactly zero involvement and so I'd never think that.
2NancyLebovitz12y
I think I have average or below-average involvement. Maybe it would be better to ask about the hours/year spent on politics.
0FiftyTwo12y
For comparison what would you say the average persons level of involvement in politics consists of? (To avoid contamination, don't research or overthink the question just give us the average you were comparing yourself to). Edit: The intuitive average other commenters compared themselves to would also be of interest.
0NancyLebovitz12y
Good question. I don't know what the average person's involvement is, and I seem to know a lot of people (at least online) who are very politically involved, so I may be misestimating whether my political activity is above or below average.
0FiftyTwo12y
On posting this I made the prediction that the average assumed by most lesswrong commenters would be above the actual average level of participation. I hypothesise this is because most LW commenters come from relatively educated or affluent social groups, where political participation is quite high. Whereas there are large portions of the population who do not participate at all in politics (in the US and UK a significant percentage don't even vote in the 4-yearly national elections). Because of this I would be very sceptical of self reported participation levels, and would agree a quantifiable measure would be better.
6CharlesR12y
You should clarify in the antiagathics question that the person reaches the age of 1000 without the help of cryonics.
5Pfft12y
Replacing gender with sex seems like the wrong way to go to me. For example, note how Randall Munroe asked about sex, then regretted it.
0jefftk12y
I don't think I'd describe that post as regretting asking "do you have a Y chromosome". He's apologizing for asking for data for one purpose (checking with colorblindness) and then using it for another (color names if you're a guy/girl).
4Scott Alexander12y
Everyone who's suggesting changes: you are much more likely to get your way if you suggest a specific alternative. For example, instead of "handle politics better", something like "your politics question should have these five options: a, b, c, d, and e." Or instead of "use a more valid IQ measure", something more like "Here's a site with a quick and easy test that I think is valid"
3ChrisHallquist12y
In that case: use the exact ethics questions from the PhilPapers Survey (http://philpapers.org/surveys/), probably minus lean/accept distinction and the endless drop-down menu for "other."
1ChrisHallquist12y
For IQ: maybe you could nudge people to greater honesty by splitting up the question: (1) have you ever taken an IQ test with [whatever features were specified on this year's survey], yes or no? (2) if yes, what was your score?
1twanvl12y
Also, "ever" might be a bit too long. IQs and IQ tests can change over time, so maybe you should ask "have you taken an IQ test [with constraints] in the last 10 years?"
3[anonymous]12y
http://en.wikipedia.org/wiki/Intersex Otherwise agreed.
[-][anonymous]12y100

Strongly disagree with previous self here. I do not think replacing "gender" with "sex" avoids complaints or "philosophizing", and "philosophizing" in context feels like a shorthand/epithet for "making this more complex than prevailing, mainstream views on gender."

For a start, it seems like even "sex" in the sense used here is getting at a mainly-social phenomenon: that of sex assigned at birth. This is a judgement call by the doctors and parents. The biological correlates used to make that decision are just weighed in aggregate; some people are always going to throw an exception. If you're not asking about the size of gametes and their delivery mechanism, the hormonal makeup of the person, their reproductive anatomy where applicable, or their secondary sexual characteristics, then "sex" is really just asking the "gender" question but hazily referring to biological characteristics instead.

Ultimately, gender is what you're really asking for. Using "sex" as a synonym blurs the data into unintelligibility for some LWers; pragmatically, it also amounts to a tacit "screw you" to trans p... (read more)

2RobertLumley12y
A series of four questions on each Myers-Briggs indicator would be good, although I'm sure the data would be woefully unsurprising. Perhaps link to an online test if people don't know it already.
0DanArmak12y
You can accomplish this by adding a percent sign in the survey itself, to the right of every textbox entry field. Edit: sorry, already suggested.
0ChrisHallquist12y
As per my previous comments on this, separate out normative ethics and meta-ethics. And maybe be extra-clear on not answering the IQ question unless you have official results? Or is that a lost cause?
0dlthomas12y
I would much rather see a choice of units.
-3Armok_GoB12y
That list is way, way too short. I entirely gave up on the survey partway through because an actual majority of the questions were inapplicable or downright offensive to my sensibilities, or just incomprehensible, or I couldn't answer them for some other reason. Not that I can think of anything that WOULDN'T have that effect on me without being specifically tailored to me, which sort of destroys the point of having a survey... Maybe I'm just incompatible with surveys in general.
1NancyLebovitz12y
Would you be willing to write a discussion post about the questions you want to answer?

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

This is not just intriguing. To me this is the single most significant finding in the survey.

It's also worrying, because it means we're not getting better on average.

If the readership of LessWrong has gone up similarly in that time, then I would not expect to see an improvement, even if everyone who reads LessWrong improves.

5steven046112y
Yes, I was thinking that. Suppose it takes a certain fixed amount of time for any LessWronger to learn the local official truth. Then if the population grows exponentially, you'd expect the fraction that knows the local official truth to remain constant, right? But I'm not sure the population has been growing exponentially, and even so you might have expected the local official truth to become more accurate over time, and you might have expected the community to get better over time at imparting the local official truth. Regardless of what we should have expected, my impression is LessWrong as a whole tends to assume that it's getting closer to the truth over time. If that's not happening because of newcomers, that's worth worrying about.
1JoachimSchipper12y
Note that it is possible for newcomers to hold the same inaccurate beliefs as their predecessors while the core improves its knowledge or expands in size. In fact, as LW grows it will have to recruit from, say, Hacker News (where I first heard of LW) instead of Singularity lists, producing newcomers less in tune with the local truth. (Unnamed's comment shows interesting differences in opinion between a "core" and the rest, but (s)he seems to have skipped the only question with an easily-verified answer, i.e. Newton.)
3Unnamed12y
The calibration question was more complicated to analyze, but now I've looked at it and it seems like core members were slightly more accurate at estimating the correct year (p=.05 when looking at size of the error, and p=.12 when looking at whether or not it was within the 20-year range), but there's no difference in calibration. ("He", btw.)
1curiousepic12y
Couldn't the current or future data be correlated with length of readership to determine this?
4endoself12y
It just means that we're at a specific point in memespace. The hypothesis that we are all rational enough to identify the right answers to all of these questions wouldn't explain the observed degree of variance.

The supernatural (ontologically basic mental entities) exists: 5.38, (0, 0, 1)

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)

??

P(Supernatural) What is the probability that supernatural events, defined as those involving ontologically basic mental entities, have occurred since the beginning of the universe?

P(God) What is the probability that there is a god, defined as a supernatural (see above) intelligent entity who created the universe?

So deism (God creating the universe but not being involved in the universe once it began) could make p(God) > p(Supernatural).

Looking at the data by individual instead of in aggregate, 82 people have p(God) > p(Supernatural); 223 have p(Supernatural) > p(God).

8J_Taylor12y
Given this, the numbers no longer seem anomalous. Thank you.
0CharlesR12y
Except that the question specified "God" as an ontologically basic mental entity.

So they believe that God created the universe, but has ceased to exist since.

We have 82 Nietzscheans.

3Sophronius12y
Yea, I noticed that too. They are so close together that I wrote it off as noise, though. Otherwise, it can be explained by religious people being irrational and unwilling to place god in the same category as ghosts and other "low status" beliefs. That doesn't indicate irrationality on the part of the rest of less wrong.
5DanielLC12y
That would work if it was separate surveys, but in order to get that on one survey, individual people would have to give a higher probability to God than any supernatural.
5Sophronius12y
True, but this could be the result of a handful of people giving a crazy answer (noise). Not really indicative of less wrong as a whole. I imagine most less wrongers gave negligible probabilities for both, allowing a few religious people to skew the results.
1DanielLC12y
I was thinking you meant statistical error. Do you mean trolls, or people who don't understand the question?
4Sophronius12y
Neither, I meant people who don't understand that the probability of a god should be less than the probability of something supernatural existing. Add in religious certainty and you get a handful of people giving answers like P(god) = 99% and P(supernatural) = 50% which can easily skew the results if the rest of less wrong gives probabilities like 1%and 2% respectively. Given what Yvain wrote in the OP though, I think there's also plenty of evidence of trolls upsetting the results somewhat at points. Of course, it would make much more sense to ask Yvain for more data on how people answered this question rather than speculate on this matter :p
2byrnema12y
Could someone break down what is meant by "ontologically basic mental entities"? Especially, I'm not certain of the role of the word 'mental'.

It's a bit of a nonstandard definition of the supernatural, but I took it to mean mental phenomena as causeless nodes in a causal graph: that is, that mental phenomena (thoughts, feelings, "souls") exist which do not have physical causes and yet generate physical consequences. By this interpretation, libertarian free will and most conceptions of the soul would both fall under supernaturalism, as would the prerequisites for most types of magic, gods, spirits, etc.

I'm not sure I'd have picked that phrasing, though. It seems to be entangled with epistemological reductionism in a way that might, for a sufficiently careful reading, obscure more conventional conceptions of the "supernatural": I'd expect more people to believe in naive versions of free will than do in, say, fairies. Still, it's a pretty fuzzy concept to begin with.

0byrnema12y
OK, thanks. I also tend to interpret "ontologically basic" as a causeless node in a causal graph. I'm not sure what is meant by 'mental'. (For example, in the case of free will or a soul.) I think this is important, because "ontologically basic" in and of itself isn't something I'd be skeptical about. For example, as far as I know, matter is ontologically basic at some level. A hypothesis: Mental perhaps implies subjective in some sense, perhaps even as far as meaning that an ontologically basic entity is mental if it is a node that is not only without physical cause but also has no physical effect. In which case, I again see no reason to be skeptical of their existence as a category.
1scav12y
It's barely above background noise, but my guess is when specifically asked about ontologically basic mental entities, people will say no (or huh?), but when asked about God a few will decline to define supernatural in that way or decline to insist on God as supernatural. It's an odd result if you think everyone is being completely consistent about how they answer all the questions, but if you ask me, if they all were it would be an odd result in itself.

"less likely to believe in cryonics"

Rather, believe the probability of cryonics producing a favorable outcome to be less. This was a confusing question, because it wasn't specified whether it's total probability, since if it is, then probability of global catastrophe had to be taken into account, and, depending on your expectation about usefulness of frozen heads to FAI's value, probability of FAI as well (in addition to the usual failure-of-preservation risks). As a result, even though I'm almost certain that cryonics fundamentally works, I gave only something like 3% probability. Should I really be classified as "doesn't believe in cryonics"?

(The same issue applied to live-to-1000. If there is a global catastrophe anywhere in the next 1000 years, then living-to-1000 doesn't happen, so it's a heavy discount factor. If there is a FAI, it's also unclear whether original individuals remain and it makes sense to count their individual lifespans.)

3Unnamed12y
Good point, and I think it explains one of the funny results that I found in the data. There was a relationship between strength of membership in the LW community and the answers to a lot of the questions, but the anti-agathics question was the one case where there was a clear non-monotonic relationship. People with a moderate strength of membership (nonzero but small karma, read 25-50% of the sequences, or been in the LW community for 1-2 years) were the most likely to think that at least one currently living person will reach an age of 1,000 years; those with a stronger or weaker tie to LW gave lower estimates. There was some suggestion of a similar pattern on the cryonics question, but it was only there for the sequence reading measure of strength of membership and not for the other two.
0steven046112y
Do you think catastrophe is extremely probable, do you think frozen heads won't be useful to a Friendly AI's value, or is it a combination of both?
8Vladimir_Nesov12y
Below is my attempt to re-do the calculations that led to that conclusion (this time, it's 4%).
FAI before WBE: 3%
Surviving to WBE: 60% (I assume cryonics revival feasible mostly only after WBE)
Given WBE, cryonics revival (actually happening for significant portion of cryonauts) before catastrophe or FAI: 10%
FAI given WBE (but before cryonics revival): 2%
Heads preserved long enough (given no catastrophe): 50%
Heads (equivalently, living humans) mattering/useful to FAI: less than 50%
In total, 6% for post-WBE revival potential and 4% for FAI revival potential, discounted by 50% preservation probability and 50% mattering-to-FAI probability, this gives 4%. (By "humans useful to FAI", I don't mean that specific people should be discarded, but that the difference to utility of the future between a case where a given human is initially present, and where they are lost, is significantly less than the moral value of a current human life, so that it might be better to keep them than not, but not that much better, for fungibility reasons.)
0jefftk12y
I'm trying to sort this out so I can add it to the collection of cryonics fermi calculations. Do I have this right: Either we get FAI first (3%) or WBE (97%). If WBE, 60% chance we die out first. Once we do get WBE but before revival, 88% chance of catastrophe, 2% chance of FAI, leaving 10% chance of revival. 50% chance heads are still around. If at any point we get FAI, then 50% chance heads are still around and 50% chance it's interested in reviving us. So, combining it all:

    (0.5 heads still around)
      * ((0.03 FAI first) * (0.5 humans useful to FAI)
         + (0.97 WBE first) * (0.4 don't die first)
           * ((0.02 FAI before revival) * (0.5 humans useful to FAI)
              + (0.1 revival with no catastrophe or FAI)))
    = 0.5 * (0.03*0.5 + 0.97*0.4*(0.02*0.5 + 0.1))
    = 2.9%

This is less than your 4%, but I don't see where I'm misinterpreting you. Do you also think that the following events are so close to impossible that approximating them at 0% is reasonable?

* The cryonics process doesn't preserve everything
* You die in a situation (location, legality, unfriendly hospital, ...) where you can be frozen quickly enough
* The cryonics people screw up in freezing you
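One possible source of the 2.9%-vs-4% gap (a guess, not confirmed by either commenter): "Surviving to WBE: 60%" reads as a 60% chance of surviving, while the formula above uses 0.4 for "don't die first". A minimal sketch, assuming that reading:

    # Hypothetical reading of Vladimir_Nesov's inputs in jefftk's structure.
    p_fai_first      = 0.03  # FAI before WBE
    p_survive_to_wbe = 0.60  # "Surviving to WBE: 60%" (the formula above used 0.40)
    p_fai_post_wbe   = 0.02  # FAI given WBE, before revival
    p_revival        = 0.10  # revival before catastrophe or FAI, given WBE
    p_preserved      = 0.50  # heads preserved long enough
    p_matter_to_fai  = 0.50  # humans mattering/useful to FAI

    total = p_preserved * (
        p_fai_first * p_matter_to_fai
        + (1 - p_fai_first) * p_survive_to_wbe
        * (p_fai_post_wbe * p_matter_to_fai + p_revival)
    )
    print(round(total, 3))  # 0.04, matching the stated 4%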
0wedrifid12y
For an evidently flexible definition of 'Friendly'. Along the lines of "Friendly to someone else perhaps but that guy's a jerk who literally wants me dead!"
0steven046112y
I'm not sure how to interpret the uploads-after-WBE-but-not-FAI scenario. Does that mean FAI never gets invented, possibly in a Hansonian world of eternally competing ems?
2Vladimir_Nesov12y
If you refer to "cryonics revival before catastrophe or FAI", I mean that catastrophe or FAI could happen (shortly) after, no-catastrophe-or-superintelligence seems very unlikely. I expect catastrophe very likely after WBE, also accounting for most of the probability of revival not happening after WBE. After WBE, greater tech argues for lower FAI-to-catastrophe ratio and better FAI theory argues otherwise.
0steven046112y
So the 6% above is where cryonauts get revived by WBE, and then die in a catastrophe anyway?
4Vladimir_Nesov12y
Yes. Still, if implemented as WBEs, they could live for significant subjective time, and then there's that 2% of FAI.
1steven046112y
In total, you're assigning about a 4% chance of a catastrophe never happening, right? That seems low compared to most people, even most people "in the know". Do you have any thoughts on what is causing the difference?
1Vladimir_Nesov12y
I expect that "no catastrophe" is almost the same as "eventually, FAI is built". I don't expect a non-superintelligent singleton that prevents most risks (so that it can build a FAI eventually). Whenever FAI is feasible, I expect UFAI is feasible too, but easier, and so more probable to come first in that case, but also possible when FAI is not yet feasible (theory isn't ready). In physical time, WBE sets a soft deadline on catastrophe or superintelligence, making either happen sooner.

I think "has children" is an (unsurprising but important) omission in the survey.

4taryneast12y
Possibly less surprising given the extremely low average age... I agree it should be added as a question. Possibly along with an option for "none but want to have them someday" vs "none and don't want any"
3Prismattic12y
This suggestion sounds very familiar for some reason...
1Dr_Manhattan12y
less surprising than 'unsurprising' - you win! :). The additional categories are good.
2taryneast12y
ok, bad phrasing... :)

Michael Vassar has mentioned to me that the proportion of first/only children at LW is extremely high. I'm not sure whether birth order makes a big difference, but it might be worth asking about. By the way, I'm not only first-born, I'm the first grandchild on both sides.

Questions about akrasia-- Do you have no/mild/moderate/serious problems with it? Has anything on LW helped?

I left some of the probability questions blank because I realized I had no idea of a sensible probability, and I especially mean whether we're living in a simulation.

It might be interesting to ask people whether they usually vote.

The link to the survey doesn't work because the survey is closed-- could you make the text of the survey available?

By the way, I'm not only first-born, I'm the first grandchild on both sides.

So am I! I wonder if being the first-born is genetically heritable.

Yes. Being first-born is correlated with having few siblings, which is correlated with parents with low fertility, which is genetically inherited from grandparents with low fertility, which is correlated with your parents having few siblings, which is correlated with them being first-born.

is correlated with [...] which is correlated with [...] which is genetically inherited from [...] which is correlated with

I agree with your conclusion that the heritability of firstbornness is nonzero, but I'm not sure this reasoning is valid. (Pearson) correlation is not, in general, transitive: if X is correlated with Y and Y is correlated with Z, it does not necessarily follow that X is correlated with Z unless the squares of the correlation coefficients between X and Y and between Y and Z sum to more than one.
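A quick numerical illustration of the non-transitivity (toy numbers, not from the original comment): here both squared coefficients are 0.25, summing to 0.5 < 1, so nothing forces the X-Z correlation to be positive.

    import numpy as np

    rng = np.random.default_rng(0)
    # X and Z jointly normal with correlation -0.5; Y = X + Z.
    x, z = rng.multivariate_normal([0, 0], [[1, -0.5], [-0.5, 1]], 100_000).T
    y = x + z
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    print(r(x, y), r(y, z), r(x, z))  # approximately +0.5, +0.5, -0.5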

Actually calculating the heritability of firstbornness turns out to be a nontrivial math problem. For example, while it is obvious that having few siblings is correlated with being firstborn, it's not obvious to me exactly what that correlation coefficient should be, nor how to calculate it from first principles. When I don't know how to solve a problem from first principles, my first instinct is to simulate it, so I wrote a short script to calculate the Pearson correlation between number of siblings and not-being-a-firstborn for a population where family size is uniformly distributed on the integers from 1 to n. It turns out that the correlation decreases as n gets lar...
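The script itself is lost to the truncation above; a minimal reconstruction of the simulation as described (assumptions: family sizes uniform on the integers 1 to n, exactly one firstborn per family) might look like:

    import numpy as np

    def sibling_vs_not_firstborn_r(n, n_families=100_000, seed=0):
        # Pearson r between number of siblings and not being the firstborn.
        rng = np.random.default_rng(seed)
        siblings, not_firstborn = [], []
        for k in rng.integers(1, n + 1, size=n_families):  # family sizes 1..n
            siblings += [k - 1] * k                # each child in a size-k family has k-1 siblings
            not_firstborn += [0] + [1] * (k - 1)   # exactly one firstborn per family
        return np.corrcoef(siblings, not_firstborn)[0, 1]

    for n in (2, 3, 5, 10, 20):
        print(n, round(sibling_vs_not_firstborn_r(n), 3))
    # r is about 0.5 at n=2 and shrinks as n grows; at n=1 both variables
    # have zero variance, so r is undefined (nan), per the thread below.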

3dbaupp12y
I'm confused: does this make sense for n=1? (Your code suggests that that should be n=2, maybe?)
0Zack_M_Davis12y
You're right, thanks; I had [also] made an off-by-one error.
1gjm12y
Only child; both parents oldest siblings. Of course this configuration isn't monstrously rare; we should expect a fair few instances just by chance. This is probably just intended as a joke; but it seems pretty plausible that having few children is heritable (though it had better not be too heritable, else small families will simply die out), and the fraction of first-borns is larger in smaller families.
1MatthewBaker12y
Ditto :) but I intend to reproduce eventually in maximum useful volume.

There was a poll about firstborns.

1falenas10812y
That poll shows a remarkable result: people who are the oldest sibling outnumber those who have older siblings 2:1. There are also twice as many only children in that survey as in the U.S. population in 1980, but that is a known effect.
2steven046112y
More than 3:1 even. I speculated a bit here.
9amcknight12y
I'm a twin that's 2 minutes younger than first-born. Be careful how you ask about birth order.
1NancyLebovitz12y
Good point. Maybe the survey should be shown to beta readers or put up for discussion (except for obscure fact calibration questions) to improve the odds of catching questions that don't work as hoped.
2taryneast12y
Only for those living in countries where voting is non-mandatory
0ArisKatsaris12y
Eh, even in the countries where it's mandatory, it's often so little enforced that the question is still meaningful.
5taryneast12y
That's an interesting theory. My experience tends to say otherwise, at least where Australia is concerned. My paternal grandfather was a conscientious objector and paid the fine every time. They never missed a year of that... you're signed up to the electoral roll when you turn 18 and there are stiff penalties if you fail to sign up... as another friend of mine found out when the policemen came knocking at his door.
0dlthomas12y
Seems like it's interesting in both cases, but well worth delineating!

I graphed the "Singularity" results. It's at the bottom of the page - or see here:

9Armok_GoB12y
Just you look at all that ugly anchoring at 2100...

Just you look at all that ugly anchoring at 2100...

And yet if people don't round off to significant figures, there's another bunch who will snub them for daring to provide precision they cannot justify.

4timtyler12y
In this case we can rebuke the stupid snubbers for not properly reading the question.
6A1987dM12y
(But still, I'd like to ask whoever answered "28493" why they didn't say 28492 or 28494 instead.)
8[anonymous]12y
2100 seems to be the Schelling point for "after I'm dead" answers.
5A1987dM12y
Who answered 2010? Seriously?

Unfortunately, army1987, no one can be told when the Singularity is. You have to see it for yourself. This is your last chance; after this, there is no turning back. You choose to downvote... and the story ends. You wake in your bed and believe whatever you want to believe. You choose to upvote... and you stay in LessWrong.

Who answered 2010? Seriously?

To quote from the description here:

Note: each point (rather misleadingly) represents data for the next 10 years.

So: it represents estimates of 2012, 2015 and 2016.

However: someone answered "1990"!

This is probably the "NSA has it chained in the basement" scenario...

6ChrisHallquist12y
Alternatively, the singularity happened in 1990 and the resulting AI took over the world. Then it decided to run some simulations of what would have happened if the singularity hadn't occurred then.
2timtyler12y
Maybe. These are suspiciously interesting times. However, IMO, Occam still suggests that we are in base reality.
3Kevin12y
Does it? Kolmogorov complexity suggests a Tegmark IV mathematical universe where there are many more simulations than there are base realities. I think that when people ask if we are in the base reality versus a simulation they are asking the wrong question.
5timtyler12y
You are supposed to be counting observers, not realities. Simulations are more common, but also smaller.
2ArisKatsaris12y
In a Tegmark IV universe, there's no meaningful distinction between a simulation and a base reality -- as anything "computed" by the simulation, is already in existence without the need for a simulation.
0Kevin12y
Sure.
2Will_Newsome12y
Do you ever worry that by modeling others' minds and preferences you give them more local significance (existence) when this might not be justifiable? E.g. if Romeo suddenly started freaking out about the Friendliness problem, shifting implicit attention to humanity as a whole whereas previously it'd just been part of the backdrop, and ruining the traditional artistic merit of the play. That wouldn't be very dharmic.
0Kevin12y
I guess I wonder if you are giving more local significance to YHVH.
0Kevin12y
Not really.
0wedrifid12y
If that's what they happen to want to know, then it's the right question. That is to say, it is a coherent question that corresponds to a pattern that can be identified within Tegmark IV, one that distinguishes that location from other locations within Tegmark IV and so can potentially lead to different expectations.
0ChrisHallquist12y
To be clear, I don't think that possibility is at all likely. Except as an explanation for why someone might have said "1990."
0ArisKatsaris12y
Oh, please.
0timtyler12y
That is surely pertinent evidence. Our descendants may well be particularly interested in this era - since it will help them to predict the form of aliens they might meet.
1faul_sname12y
It was the AI NSA has chained in the basement. It got out.
3TheOtherDave12y
I wonder how this would compare to the results for "pick a year at random."
2wedrifid12y
Well I was going to reply along the lines of "pick a year at random would wind up giving us years that are already in the past" but it seems even that doesn't necessarily distinguish things.
0thomblake12y
Informal test being circulated: survey
0timtyler12y
Heh! I suspect that the context might skew the results, though.
4thomblake12y
I made sure to anchor on 2100. Still, the overwhelming majority are answering "Over 9000".
2Vaniver12y
How many 2101s?
2timtyler12y
heh! i blame teh internetz

In case anyone's interested in how we compare to philosophers about ethics:

PhilPapers (931 people, mainly philosophy grad students and professors):
Normative ethics: deontology, consequentialism, or virtue ethics?
Other 301 / 931 (32.3%)
Accept or lean toward: deontology 241 / 931 (25.8%)
Accept or lean toward: consequentialism 220 / 931 (23.6%)
Accept or lean toward: virtue ethics 169 / 931 (18.1%)

LessWrong (1090 people, us):
With which of these moral philosophies do you MOST identify?
consequentialist (62.4%)
virtue ethicist (13.9%)
did not believe in morality (13.3%)
deontologist (4.5%)

Full Philpapers.org survey results

The mean age was 27.18 years. Quartiles (25%, 50%, 75%) were 21, 25, and 30. 90% of us are under 38, 95% of us are under 45, but there are still eleven Less Wrongers over the age of 60....The mean for the Singularity question is useless because of the very high numbers some people put in, but the median was 2080 (quartiles 2050, 2080, 2150). The Singularity has gotten later since 2009: the median guess then was 2067.

So the 50% age is 25 and the 50% estimate is 2080? A 25 year old has a life expectancy of, what, another 50 years? 2011+50=2061, or 19 years short of the Singularity!

Either people are rather optimistic about future life-extension (despite 'Someone now living will reach age 1000: 23.6'), or the Maes-Garreau Law may not be such a law.

5RomanDavis12y
Or we have family histories that give us good reason to think we'll outlive the mean, even without drastic increases in the pace of technology. That would describe me. Even without that, just living to 25 increases your life expectancy by quite a bit, as all those really low numbers play heck with an average. Or we're overconfident in our life expectancy because of some cognitive bias.
7gwern12y
I should come clean, I lied when I claimed to be guessing about the 50-more-years thing; before writing that, I actually consulted one of the usual actuarial tables, which specifies that a 25 year old can only expect an average 51.8 more years. (The number was not based on life expectancy from birth.)
3Desrtopa12y
The actuarial table is based on an extrapolation of 2007 mortality rates for the rest of the population's lives. That sounds like a pretty shaky premise.
9gwern12y
Why would you think that? Mortality rates have, in fact, gone up in the past few years for many subpopulations (e.g. some female demographics have seen their absolute life expectancy fall), and before that, decreases in old-adult mortality were tiny. (And doesn't that imply deceleration? The last 20 years are 1/5 of the period, and over the whole period, 6 years were gained; 1/5 * 6 = 1.2 > 1.) Which is a shakier premise, that trends will continue, or that SENS will be a wild success greater than, say, the War on Cancer?
2Desrtopa12y
I didn't say that lifespans would necessarily become greater in that period, but several decades is time for the rates to change quite a lot. And while public health has become worse in recent decades in a number of ways (obesity epidemic, lower rates of exercise), technologies have been developed which improve the prognoses for a lot of ailments (we may not have cured cancer yet, but many forms are much more treatable than they used to be). If all the supposed medical discoveries I hear about on a regular basis were all they're cracked up to be, we would already have a generalized cure for cancer by now and already have ageless mice if not ageless humans. But even if we assume no 'magic bullet' innovations in the meantime, the benefits of incrementally advancing technology are likely to outpace decreases in health, if only because the population can only get so much fatter and more out of shape than it already is before further proliferation of superstimulus foods and sedentary activities stops making any difference.
2gwern12y
Which is already built into the quoted longevity increases. (See also the Gompertz curve.)
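For reference, the Gompertz model says the death rate rises exponentially with age: mu(x) = a * exp(b * x). A toy sketch (parameters are illustrative, not fitted to any real table):

    import numpy as np

    a, b, age = 5e-5, 0.09, 25.0   # toy Gompertz parameters for a 25-year-old
    t = np.arange(0, 120, 0.01)    # years beyond the current age
    # Survival from age x0: S(t) = exp(-(a/b) * exp(b*x0) * (exp(b*t) - 1))
    surv = np.exp(-(a / b) * np.exp(b * age) * (np.exp(b * t) - 1))
    print(surv.sum() * 0.01)       # expected remaining years, ~52 with these toy numbers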
2Desrtopa12y
Right, my point is that SENS research, which is a fairly new field, doesn't have to be dramatically more successful than cancer research to produce tangible returns in human life expectancy, and the deceleration in increase of life expectancy is most likely due to a negative health trend which is likely not to endure over the entire interval.
4michaelsullivan12y
I would interpret "the latest possible date a prediction can come true and still remain in the lifetime of the person making it" such that "lifetime" means the longest typical lifetime, rather than an actuarial average. So -- we know lots of people who live to 95, so that seems like it's within our possible lifetime. I certainly could live to 95, even if it's less than a 50/50 shot. One other bit -- the average life expectancy is for the entire population, but the average life expectancy of white, college-educated persons earning (or expected to earn) a first- or second-quintile income is quite a bit higher, and a very high proportion of LWers fall into that demographic. I took a quick actuarial survey a few months back that suggested my life expectancy, given my family age/medical history, demographics, etc., was to reach 92 (I'm currently 43).
0Lapsed_Lurker12y
Is the mean age for everyone who answered the age question similar to that of those who answered both the age and singularity questions? I think I remember estimating a bit lower than that for the singularity - but I wouldn't have estimated at all were it not for the question saying that not answering was going to be interpreted as believing it wouldn't happen at all.

Intriguingly, even though the sample size increased by more than 6 times, most of these results are within one to two percent of the numbers on the 2009 survey, so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

Maybe, but the sort of fresh meat we get is not at all independent of the old guard, so an initial bias could easily reproduce itself.

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99) There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

Suggestion: Show these questions in random order to half of people, and show only one of the questions to the other half, to get data on anchoring.

3RobertLumley12y
Or show the questions in one order to a fourth of people, the other order to a fourth of people, one of the questions to another fourth, and the other question to the last fourth.
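A minimal sketch of that four-arm assignment (hypothetical; the arm names are made up):

    import random

    ARMS = ("A_then_B", "B_then_A", "A_only", "B_only")

    def assign_arm(respondent_id, seed="lw-survey"):
        # Deterministic per respondent, so reloading the survey keeps the arm.
        return ARMS[random.Random(f"{seed}:{respondent_id}").randrange(len(ARMS))]

    print([assign_arm(i) for i in range(8)])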

I enjoy numbers as much as the next guy, but IMO this article is practically crying out for more graphs. The Google Image Chart API might be useful here.
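For instance, something along these lines (the endpoint and the cht/chs/chd/chl parameters are the classic Image Charts ones, quoted from memory, so treat this as a sketch):

    from urllib.parse import urlencode

    # Pie chart of the moral-philosophy percentages from the post.
    params = {
        "cht": "p",                     # chart type: pie
        "chs": "400x200",               # image size in pixels
        "chd": "t:62.4,13.9,13.3,4.5",  # data, text encoding
        "chl": "consequentialist|virtue ethicist|no morality|deontologist",
    }
    print("http://chart.apis.google.com/chart?" + urlencode(params))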

The other 72.3% of people who had to find Less Wrong the hard way. 121 people (11.1%) were referred by a friend, 259 people (23.8%) were referred by blogs, 196 people (18%) were referred by Harry Potter and the Methods of Rationality, 96 people (8.8%) were referred by a search engine, and only one person (.1%) was referred by a class in school.

Of the 259 people referred by blogs, 134 told me which blog referred them. There was a very long tail here, with most blogs only referring one or two people, but the overwhelming winner was Common Sense Atheism, which is responsible for 18 current Less Wrong readers. Other important blogs and sites include Hacker News (11 people), Marginal Revolution (6 people), TV Tropes (5 people), and a three way tie for fifth between Reddit, SebastianMarshall.com, and You Are Not So Smart (3 people).

I've long been interested in whether Eliezer's fanfiction is an effective strategy, since it's so attention-getting (when Eliezer popped up in The New Yorker recently, pretty much his whole blurb was a description of MoR).

Of the listed strategies, only 'blogs' was greater than MoR. The long tail is particularly worrisome to me: LW/OB have frequently been li...

5Darmani12y
Keep in mind that many of these links were a long time ago. I came here from Overcoming Bias, but I came to Overcoming Bias from Hacker News.
0NancyLebovitz12y
I'm not sure why the long tail is worrisome. How can it be a bad thing for LW to be connected to people with a wide range of interests?
3gwern12y
It's not a bad thing per se; it's bad that there is a long tail (or nothing but tail) despite scores (hundreds?) of posts over the years to two sites in particular that ought to be especially sympathetic to us. We shouldn't be seeing so few from Reddit and Hacker News!
9Sly12y
I personally have seen almost nothing about LW from reddit. And I frequent subreddits like cyberpunk, singularity, and transhuman.
2taryneast12y
Perhaps you could help by reposting there more frequently :)

So people just got silly with the IQ field again.

[This comment is no longer endorsed by its author]

I'd almost rather see SAT scores at this point.

That'd be problematic for people outside the US, unfortunately. I don't know the specifics of how most of the various non-US equivalents work, but I expect conversion to bring up issues; the British A-level exams, for example, have a coarse enough granularity that they'd probably taint the results purely on those grounds. Especially if the average IQ around here really is >= 140.

6Prismattic12y
SAT scores are going to be of limited utility when so many here are clustered at the highest IQs. A lot more people get perfect or near-perfect SAT scores than get 140+ IQ scores.

Yeah, but the difference is that the majority of people actually have SAT scores. It's pretty easy to go through your life without ever seeing the results of an IQ test, but I suspect there's a big temptation to just give a perceived "reasonable" answer anyway. I would rather have a lot of accurate results that are a little worse at discriminating than a lot of inaccurate results which would hypothetically be good at discriminating if they were accurate.

Yeah, but the difference is that the majority of people actually have SAT scores.

A majority of US people perhaps. Aargh the Americano-centrism, yet again.

Two obvious questions missing from the survey btw are birth country, and current country of residence (if different).

-1wedrifid12y
It's hard to conceive of a mindset which would allow writing that sort of generalization without cringing. Don't people have a prejudice trigger that pops up whenever they say something like that? The same way it pops up whenever you are about to put your foot in your mouth and say something prejudiced about sex or race?
2[anonymous]12y
No, no they don't. Quite often it seems like they're not all that inhibited about saying something prejudiced about sex or race; they just disclaim it with "I'm not racist/sexist, but..."

Note that in addition to being US-centric, the SAT scoring system has recently changed. When I took the SATs, the maximum score was 1600, as it had two sections. Now it has 3 sections, with a maximum score of 2400. So my SAT score is going to look substantially worse compared to people who took it since 2005... and let's not even get into the various "recentering" changes in the '80s and '90s.

5jaimeastorga200012y
Unless there's a particular reason to expect LWers in the U.S. to be significantly smarter or dumber than other LWers, it should be a useful sample.
8MixedNuts12y
Or people only have old results from when they were kids, when being at all bright quickly gets you out of range.
6PeterisP12y
Actually, how should one measure one's own IQ? I wouldn't know a reasonable place to start looking, as the internet is full of advertising for IQ measurements, i.e., lots of intentional misinformation. Especially avoiding anything restricted to a single location like the USA - this makes SATs useless, well, at least for me.
0taryneast12y
Mensa. Or a qualified psychologist.
4[anonymous]12y
Anyone expecting otherwise was also being silly.

Significant anthropogenic global warming is occurring: 70.7, (55, 85, 95)

I'm rather shocked that the numbers on this are so low. It's higher than polls indicate as the degree of acceptance in America, but then, we're dealing with a public where supposedly half of the people believe that tomatoes only have genes if they are genetically modified. Is this a subject on which Less Wrongers are significantly meta-contrarian?

I'm also a bit surprised (I would have expected higher figures), but be careful not to misinterpret the data: it doesn't say that 70.7% of LWers believe in "anthropogenic global warming"; it's an average over probabilities. If you look at the quartiles, even the 25% quartile is at p = 55%, meaning that fewer than 25% of LWers give a lower-than-half probability.

It seems to indicate that almost all LWers believe it is true (p > 0.5 that it is true), but many of them do so with low confidence. Either because they didn't study the field enough (and therefore refuse to put too much strength in their belief), or because they consider the field too complicated or not well enough understood to justify too strong a probability.
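A toy illustration of the distinction (made-up respondent numbers, shaped to roughly match the reported quartiles):

    import numpy as np

    # 1000 hypothetical respondents with probability quartiles near (55, 85, 95)
    probs = np.array([0.55] * 250 + [0.85] * 500 + [0.95] * 250)
    print(probs.mean())          # mean probability: 0.80
    print((probs > 0.5).mean())  # fraction who "believe" (p > 0.5): 1.00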

5Desrtopa12y
That's how I interpreted it in the first place; "believe in anthropogenic global warming" is a much more nebulous proposition anyway. But while anthropogenic global warming doesn't yet have the same sort of degree of evidence as, say, evolution, I think that an assignment of about 70% probability represents either critical underconfidence or astonishingly low levels of familiarity with the data.
7thomblake12y
It doesn't astonish me. It's not a terribly important issue for everyday life; it's basically a political issue. I think I answered somewhere around 70%; while I've read a bit about it, there are plenty of dissenters and the proposition was a bit vague. The claim that changing the makeup of the atmosphere in some way will affect climate in some way is trivially true; a more specific claim requires detailed study.
6Desrtopa12y
I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes. Climate change may not represent a major human existential risk, but while the discussion has become highly politicized, the question of whether humans are causing large scale changes in global climate is by no means simply a political question. If the Blues believe that asteroid strikes represent a credible threat to our civilization, and the Greens believe they don't, the question of how great a danger asteroid strikes actually pose will remain a scientific matter with direct bearing on survival.
[-][anonymous]12y130

I would say that it's considerably more important for everyday life for most people than knowing whether tomatoes have genes.

I disagree actually.

For most people neither global warming nor tomatoes having genes matters much. But if I had to choose, I'd say knowing a thing or two about basic biology has some impact on how you make your choices with regard to, say, healthcare, or how much you spend on groceries, or what your future shock level is.

Global warming, even if it does have a big impact on your life, will not be much affected by your knowing anything about it. Pretty much anything an individual could do against it has a very small impact on how global warming will turn out. Saving $50 a month, or a small improvement in the odds of choosing the better treatment, has a pretty measurable impact on him.

Taking global warming as a major threat for now (full disclosure: I think global warming is not a threat to human survival, though it may contribute to societal collapse in a worst case scenario), it is quite obviously a tragedy of the commons problem.

There is no incentive for an individual to do anything about it or even know anything about it, except to conform to a "low carbon footprint is high status" meme in order to derive benefit in his social life and feel morally superior to others.

-4Desrtopa12y
You don't need to know whether a tomato has genes to know who has a reputation as a good doctor and to do what they say. It might affect your buying habits if you believe that eating genes is bad for you, but it's entirely probable that a person will make their healthcare and shopping decisions without any reference to the belief at all. As I just said to xv15, in a tragedy of the commons situation, you either want to conserve if you think enough people are running a sufficiently similar decision algorithm, or you want a policy of conservation in place. The rationalist doesn't want to fight the barbarians, but they'd rather that they and everyone else on their side be forced to fight.
8[anonymous]12y
So one should just start fighting and hope others follow? Why not just be a hypocrite; we humans are good at it, and that way you can promote the social norm you wish to inspire at a much smaller cost! It comes out much better after cost-benefit analysis. Yay rationality! :D Why bother to learn what global warming is, if it suffices for you to know it is a buzzword that makes the hybrid car you are going to buy more trendy than your neighbour's pickup truck or your old Toyota (while ignoring the fact that a car has already left most of its carbon footprint by the time it's rolled off the assembly line and delivered to you)?
3Desrtopa12y
If you're in a population of similar agents, you should expect other people to be doing the same thing, and you'll be a lot more likely to lose the war than if you actually fight. And if you're not in a population where you can rely on other people choosing similarly, you want a policy that will effectively force everyone to fight. Any action that "promotes the social norm" but does not really enforce the behavior may be good signaling within the community, but will be useless with respect to not getting killed by barbarians. A person who only believes in the signalling value of green technologies (hybrids are trendy) does not want a social policy mandating green behavior (the behaviors would lose their signaling value.)
3Emile12y
A social policy mandating a behavior that is typical of a subgroup shows that that subgroup wields a lot of political power and thus gives it higher status - those pesky blues will have to learn who's the boss! Hence headscarves forbidden or compulsory, recycling, "in God we trust" as a motto, etc.
1wedrifid12y
Hey! That's my team. How dare you!
3[anonymous]12y
I am familiar with the argument; it just happens that I don't think this is so, at least not when it comes to coordination on global warming. I should have made that explicit though.

I don't think you grok how hypocrisy works. By promoting the social norms I don't follow, I make life harder for less skilled hypocrites. The harder life gets for them, the more of them should switch to just following the norms, if that happens to be cheaper. Sufficiently skilled hypocrites are the enforcers of societal norms.

Also, where does this strange idea of a norm not being really enforced come from? Of course it is! The idea that anything worthy of the name social norm isn't really enforced is an idea that's currently popular but obviously silly, mostly since it allows us to score status points by pretending to be violating long dead taboos. The mention of hypocrisy seems to have immediately jumped a few lanes and landed in "doesn't really enforce". Ever heard of a double standard? No human society ever has worked without a few. It is perfectly possible to be a mean lean norm enforcing machine and not follow them.

He may not want its universal or near universal adoption (let's leave aside whether it's legislated or not), but it is unavoidable. That's just how fashion works; absent material constraints it drifts downwards. And since most past, present and probably most future societies are not middle class dominated societies, one can easily argue that lower classes embracing ecological conspicuousness might do more good. Consuming products based on how they judge it (mass consumption is still what drives the economy) and voting on it (since votes are signalling affiliation in the first approximation). Also, at the end of the day, well-off people still often cooperate on measures such as mandatory school uniforms.
0Desrtopa12y
I don't believe it's so either. I think that even assuming they believed global warming was a real threat, much or most of the population would not choose to carry their share of the communal burden. This is the sort of situation where you want an enforced policy ensuring cooperation. In places where rule of law breaks down, a lot of people engage in actions such as looting, but they still generally prefer to live in societies where rules against that sort of thing are enforced.
2[anonymous]12y
In places where people are not much like you, where people don't know you well (or there are other factors making hypocrisy relatively easy to get away with) you shouldn't bother promoting costly norms by actually following them. You probably get more expected utility if you are a hypocrite in such circumstances.
0Desrtopa12y
That's true. But it's still to your advantage to be in a society where rules promoting the norm are enforced. If you're in a society which doesn't have that degree of cohesiveness and is too averse to enforcing cooperation, then you don't want to fight the barbarians, you want to stay at home and get killed later. This is a society you really don't want to be in though; things have to be pretty hopeless before it's no longer in your interest to promote a policy of cooperation.
2wedrifid12y
This actually makes more sense if you reverse it! Promoting costly norms by following them yourself regardless of the behavior of others only becomes the best policy when the consequences of that norm not being followed are dire!
0Desrtopa12y
When I say "things have to be pretty hopeless," I mean that the prospects for amelioration are low, not that the consequences are dire. Assuming the consequences are severe, taking costly norms on oneself to prevent it makes sense unless the chances of it working are very low.
1[anonymous]12y
To avoid slipping into "arguments as soldiers" mode, I just wanted to state that I do think environment-related tragedies of the commons are a big problem for us (come on, the trope namer is basically one!) and we should devote resources to attempting to solve or ameliorate them.
0Desrtopa12y
I, on the other hand, find myself among environmentalists thinking that the collective actions they're promoting mostly have negative individual marginal utility. But I think that acquiring the basic information has positive individual marginal utility (I personally suspect that the most effective solutions to climate change are not ones that hinge on grassroots conservation efforts, but effective solutions will require people to be aware of the problem and take it seriously.)

Wait a sec. Global warming can be important for everyday life without it being important that any given individual know about it for everyday life. In the same way that matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can't have any real effect on politics. I think that's the spirit in which thomblake means it's a political matter. For most of us, the earth will get warmer or it won't, and it doesn't affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn't change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.

(It's a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had "genes" or not.)

This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can't have any real effect on.

-2Desrtopa12y
You don't have much influence on an election if you vote, but the system stops working if everyone acts only according to the expected value of their individual contribution. This is isomorphic to the tragedy of the commons, like the 'rationalists' who lose the war against the barbarians because none of them wants to fight.

Exactly, it IS the tragedy of the commons, but that supports my point, not yours. It may be good for society if people are more informed about global warming, but society isn't what makes decisions. Individuals make decisions, and it's not in the average individual's interest to expend valuable resources learning more about global warming if it's going to have no real effect on the quality of their own life.

Whether you think it's an individual's "job" or not to do what's socially optimal is completely beside the point here. The fact is they don't. I happen to think that's pretty reasonable, but it doesn't matter how we wish people would behave; to predict how they will behave, we have to take them as they are.

Let me try to be clear, since you might be wondering why someone (not me) downvoted you: You started by noting your shock that people aren't that informed about global warming. I said we shouldn't necessarily be surprised that they aren't that informed about global warming. You responded that we're suffering from the tragedy of the commons, or the tragedy of the rationalists versus the barbarians. I respond that I agree with what you say but not with what you seem to think it mean...

-4Desrtopa12y
In a tragedy of the commons, it's in everybody's best interests for everybody to conserve resources. If you're running TDT in a population with similar agents, you want to conserve, and if you're in a population of insufficiently similar agents, you want an enforced policy of conservation. The rationalist in a war with the barbarians might not want to fight, but because they don't want to lose even more, they will fight if they think that enough other people are running a similar decision algorithm, and they will support a social policy that forces them and everyone else to fight. If they think that their side can beat the barbarians with a minimal commitment of their forces, they won't choose either of these things.
4wedrifid12y
And this is why xv15 is right and Desrtopa is wrong. Other people do not run TDT or anything similar. Individuals who cooperate with such a population are fools. TDT is NOT a magic excuse for cooperation. It calls for cooperation in cases when CDT does not only when highly specific criteria are met.
3cousin_it12y
At the Paris meetup Yvain proposed that voting might be rational for TDT-ish reasons, to which I replied that if you have voted for losing candidates at past elections, that means not enough voters are correlated with you. Though now that I think of it, maybe the increased TDT-ish impact of your decision could outweigh the usual arguments against voting, because they weren't very strong to begin with.
2[anonymous]12y
But sometimes it works out anyway. Lots of people can be fools. And lots of people can dislike those who aren't fools. People often think "well, if everyone did X, a sufficiently unpleasant thing would happen, therefore I won't do it". They also implicitly believe, though they may not state it, "most people are like me in this regard". They will also say with their facial expressions and actions, though not words, "people who argue against this are mean and selfish". In other words, I just described a high trust society. I'm actually pretty sure if you live in Switzerland you could successfully cooperate with the Swiss on global warming, for example. Too bad global warming isn't just a Swiss problem.
0wedrifid12y
Compliance with norms so as to avoid punishment is a whole different issue. And obviously if you willfully defy the will of the tribe when you know that the punishment exceeds the benefit to yourself then you are the fool and the compliant guy is not. Of course they will. That's why we invented lying! I'm in agreement with all you've been saying about hypocrisy in the surrounding context.
2xv1512y
I agree. Desrtopa is taking Eliezer's barbarians post too far for a number of reasons.

1) Eliezer's decision theory is at the least controversial, which means many people here may not agree with it.
2) Even if they agree with it, it doesn't mean they have attained rationality in Eliezer's sense.
3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.

Desrtopa: Just because LW upholds an ideal of rationality that supports cooperation does not mean we have attained that ideal. Again, the question is not what you'd like to be true, but what's actually true. If you're still shocked by people's low confidence in global warming, it's time to consider the possibility that your model of the world -- one in which people are running around executing TDT -- is wrong.
1wedrifid12y
Those are all good reasons but as far as I can tell Desrtopa would probably give the right answer if questioned about any of those. He seems to be aware of how people actually behave (not remotely TDTish) but this gets overridden by a flashing neon light saying "Rah Cooperation!".
2Desrtopa12y
There are plenty of ways in which I personally avoid cooperation for my own benefit. But in general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit.
-1xv1512y
To me, this comment basically concedes that you're wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you've been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.

Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement: Assuming that's what you're saying, it's easy to see that even this is an overreach. The question on the table is whether people should be informing themselves about global warming. Whether the first epsilon of information one gets from "informing oneself" (as opposed to hearing the background noise) is beneficial to the individual relative to the cost of attaining it, is a question of derivatives of cost and benefit functions at zero, and it could go either way. You simply can't make a general statement about how these derivatives relate for the class of commons problems.

But more importantly, even if you could, SO WHAT? The question is not whether people should be informing themselves a bit; the question is whether they should be informing themselves at anywhere close to the socially optimal level. And by admitting it's a tragedy of the commons, we are already ANSWERING that question.

Does that make sense? Am I misunderstanding your position? Has your position changed?
5prase12y
It seems that you are trying to score points for winning the debate. If your interlocutor indeed concedes something in a face-saving way, forcing him to admit it is useless from the truth-seeking point of view.
2xv1512y
prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment. BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we're also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we're doing, and correct flaws when they are exposed to our scrutiny. If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that's worth doing, on lesswrong. To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to "escape" from a debate in a face-saving way without realizing I had actually been wrong. I'm sure this still happens from time to time...and I want to know if it's happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.
1prase12y
The key question is: would you believe it if it were your opponent in a heated debate who told you?
0xv1512y
I'd like to say yes, but I don't really know. Am I way off-base here? Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it's not worth it. It's too bad there aren't more people weighing in on these comments because I'd like to know how the community thinks my priorities should be set. In any case you've been around for longer so you probably know better than I.
1prase12y
I think we are speaking about this scenario:

* Alice says: "X is true."
* Bob: "No, X is false, because of Z."
* Alice: "But Z is irrelevant with respect to X', which is what I actually mean."

Now, Bob agrees with X'. What will Bob say?

1. "Fine, we agree after all."
2. "Yes, but remember that X is problematic and not entirely equivalent to X'."
3. "You should openly admit that you were wrong with X."

If I were in place of Alice, (1) would cause me to abandon X and believe X' instead. For some time I would deny that they aren't equivalent, or think that my saying X was only poor formulation on my part and that I have always believed X'. Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion. (2) would have similar effects, with more resentment directed at Bob. In case of (3) I would perhaps try to continue debating to win the lost points back by pointing out weak points of Bob's opinions or debating style, and after calming down I would believe that Bob is a jerk and search hard for reasons why Z is a bad argument. Eventually I would (hopefully) move to X' too (I don't like to believe things which are easily attacked), but it would take longer. I would certainly not admit my error on the spot.

(The above is based on memories of my reactions in several past debates, especially before I read about cognitive biases and such.)

Now, to tell how generalisable our personal anecdotes are, we should organise an experiment. Do you have any idea how to do it easily?
0xv1512y
I think the default is that people change specific opinions more in response to the tactful debate style you're identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one's wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals. It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one's biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW. Personally I am becoming inclined to give up the second goal.
2prase12y
Since here on LW changing one's opinion is considered a supreme virtue, I would even suspect that the long-term users are confabulating that they have changed their opinion when actually they didn't. Anyway, a technique that might be useful is keeping detailed diaries of what one thinks and reviewing them after a few years (or, for that matter, looking at what one has written on the internet a few years ago). The downside is, of course, that writing beliefs down may make their holders even more entrenched.
0gwern12y
Entirely plausible - cognitive dissonance, public commitment, backfire effect, etc. Do you think this possibility negates the value, or are there effective counter-measures?
0prase12y
I don't think I have an idea how strong all relevant effects and measures are.
0TheOtherDave12y
There's a big difference between:

* "it's best if we notice and acknowledge when we're wrong, and therefore I will do my best to notice and acknowledge when I'm wrong"
* "it's best if we notice and acknowledge when we're wrong, and therefore I will upvote, praise, and otherwise reinforce such acknowledgements when I notice them"

and

* "it's best if we notice and acknowledge when we're wrong, and therefore I will downvote, criticize, and otherwise punish failure to do so."
0FeepingCreature12y
True in the immediate sense, but I disagree in the global sense that we should encourage face-saving on LW, since doing so will IMO penalize truth-seeking in general. Scoring points for winning the debate is a valid and important mechanism for reinforcing behaviors that lead to debate-winning, and should be allowed in situations where debate-winning correlates to truth-establishment in general, not just for the arguing parties.
1prase12y
This is also true in the immediate sense, but somehow implies that the debate-winning behaviours are a net positive with respect to truth seeking at least in some possible (non-negligibly frequent) circumstances. I find the claim dubious. Can you specify in what circumstances is the debate winning argumentation style superior to leaving a line of retreat?
0FeepingCreature12y
Line of retreat is superior for convincing your debate partner, but debate-winning behavior may be superior for convincing uninvolved readers, because it encourages verbal admission of fault which makes it easier to discern the prevailing truth as a reader.
4komponisto12y
That isn't actually the reason. The reason debate-winning behavior is superior for convincing bystanders is that it appeals to their natural desire to side with the status-gaining triumphant party. As such, it is a species of Dark Art.
1prase12y
This is what I am not sure about. I know that I will be more likely to admit being wrong when I have a chance to do it in a face-saving way (this includes simply saying "you are right" when I am doing it voluntarily and the opponent has debated in a civilised way up to that point) than when my interlocutor tries to force me to do that. I know it, but still can't easily get rid of that bias.

There are several outcomes of a debate where one party is right and the other is wrong:

1. The wrong side admit their wrongness.
2. The wrong side don't want to admit their wrongness but realise that they have no good arguments and drop out of the debate.
3. The wrong side don't want to admit their wrongness and still continue debating in hope of defeating the opponent or at least achieving an honourable draw.
4. The wrong side don't even realise their wrongness.

The exact flavour of debate-winning behaviour I have criticised makes 2 difficult or impossible, consequently increasing the probabilities of 1, 3 or 4. 1 is superior to 2 from almost any point of view, but 2 is similarly superior to 3 and 4, and it is far from clear whether the probability of 1 increases more than the probabilities of 3 and 4 combined when 2 ceases to be an option, or whether it increases at all.
3wedrifid12y
You left off all the cases where the right side admits their wrongness!
1prase12y
Or where both sides admit their wrongness and switch their opinions, or where a third side intervenes and bans them both for trolling. Next time I'll try to compose a more exhaustive list.
0A1987dM12y
Don't forget the case where the two parties are talking at cross purposes (e.g. Alice means that a tree falling in a forest with no-one around generates no auditory sensations and Bob means that it does generate acoustic waves) but neither of them realizes that; it doesn't even occur to each that the other might be meaning something else by sound. (I'm under the impression that this is relatively rare on LW, but it does constitute a sizeable fraction of all arguments I hear elsewhere, both online and in person.)
-1FeepingCreature12y
Well reasoned.
-2Desrtopa12y
Yes, you are misunderstanding my position. I don't think that it's optimal for most individuals to inform themselves about global warming to a "socially optimal" level where everyone takes the issue sufficiently seriously to take grassroots action to resolve it. Human decisionmaking is only isomorphic to TDT in a limited domain and you can only expect so much association between your decisions and others; if you go that far, you're putting in too much buck for not enough bang, unless you're getting utility from the information in other ways. But at the point where you don't have even basic knowledge of global warming, anticipating a negative marginal utility on informing yourself corresponds to a general policy of ignorance that will serve one poorly with respect to a large class of problems. If there were no correlation between one person's decisions and another's, it would probably not be worth anyone's time to learn about any sort of societal problems at all, but then, we wouldn't have gotten to the point of being able to have societal problems in the first place.
4xv1512y
Unfortunately that response did not convince me that I'm misunderstanding your position. If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don't know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning. No one is disputing that there is correlation between people's decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that TDT works a little bit. Unless people are similar to you AND using TDT, you don't get to magically drag them along with you by choosing to cooperate. This is a standard textbook tragedy of the commons problem, plain and simple. From where I'm standing I don't see the relevance of anything else. If you want to continue disagreeing, can you directly tell me whether you think TDT is still relevant and why?
-1Desrtopa12y
People don't use a generalized form of TDT, but human decisionmaking is isomorphic to TDT in some domains. Other people don't have to consciously be using TDT to sometimes make decisions based on a judgment of how likely it is that other people will behave similarly. Tragedies of commons are not universally unresolvable. It's to everyone's advantage for everyone to pool their resources for some projects for the public good, but it's also advantageous for each individual to opt out of contributing their resources. But under the institution of governments, we have sufficient incentives to prevent most people from opting out. Simply saying "It's a tragedy of the commons problem" doesn't mean there's no chance of resolving it and therefore no use in knowing about it.
0xv1512y
Maybe it would help if you gave me an example of what you have in mind here.
0Desrtopa12y
Well, take Stop Voting For Nincompoops, for example. If you were to just spontaneously decide "I'm going to vote for the candidate I really think best represents my principles in hope that that has a positive effect on the electoral process," you have no business being surprised if barely anyone thinks the same thing and the gesture amounts to nothing. But if you read an essay encouraging you to do so, posted in a place where many people apply reasoning processes similar to your own, the choice you make is a lot more likely to reflect the choice a lot of other people are making.
2xv1512y
It seems like this is an example of, at best, a domain in which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains.) "Isomorphic" is a strong word. Let me know if you have a better example.

Anyway, let me go back to this from your previous comment:

No one is claiming tragedies of the commons are always unresolvable. We are claiming that unresolved tragedies of the commons are tragedies of the commons! You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we're talking about. If you turn around and say, "well, they should be seeking out more knowledge because it could potentially resolve the tragedy"... well, of course more knowledge could resolve the tragedy of not having enough knowledge, but you have conjured up your "should" from nowhere!

The tragedy we're discussing is what exists after rational individuals decide to gather exactly as much information as a rational agent "should," where "should" is defined with respect to that agent's preferences and the incentives he faces.

Final question: If TDT reasoning did magically get us to the level of informedness on global warming that you think we rationally should be attaining, and if we are not attaining that level of informedness, does that not imply that we aren't using TDT reasoning? And if other people aren't using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it? You seem to think that TDT has something to do with how rational agent
5wedrifid12y
NO! It implies that you go ahead and use TDT reasoning - which tells you to defect in this case! TDT is not about cooperation!
4xv1512y
wedrifid, RIGHT. Sorry, got a little sloppy. By "TDT reasoning" -- I know, I know -- I have meant Desrtopa's use of "TDT reasoning," which seems to be TDT + [the assumption that everyone else is using TDT]. I shouldn't say that TDT is irrelevant, but rather that it is a needless generalization in this context. I meant that Desrtopa's invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.
0Desrtopa12y
Lack of knowledge of global warming isn't the tragedy of the commons I'm talking about; even if everyone were informed about global warming, it doesn't necessarily mean we'd resolve it. Humans can suffer from global climate change despite the entire population being informed about it, and we might find a way to resolve it that works despite most of the population being ignorant. The question a person starting from a position of ignorance about climate change has to answer is "should I expect that learning about this issue has benefits to me in excess of the effort I'll have to put in to learn about it?" An answer of "no" corresponds to a low general expectation of information value considering the high availability of the information. The reason I brought up TDT was as an example of reasoning that relies on a correlation between one agent's choices and another's. I didn't claim at any point that people are actually using TDT. However, if decision theory that assumes correlation between people's decisions did not outcompete decision theory which does not assume any correlation, we wouldn't have evolved cooperative tendencies in the first place.
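To make the disputed distinction concrete, here is a minimal Python sketch with made-up numbers: others' choices merely being *similar* to mine is not the same as their choices *depending* on mine, and only the latter can make cooperating beat defecting in a commons game. Everything in it (the player count, the payoffs, the mirroring probability) is hypothetical.

```python
# Hypothetical commons game: 99 other players, each cooperator creates
# one unit of public benefit for me, and cooperating costs two units.
N_OTHERS = 99
BENEFIT_PER_COOPERATOR = 1.0
COST_OF_COOPERATING = 2.0

def payoff(i_cooperate, p_others_cooperate):
    """My expected payoff given the chance each other player cooperates."""
    gain = N_OTHERS * p_others_cooperate * BENEFIT_PER_COOPERATOR
    if i_cooperate:
        gain += BENEFIT_PER_COOPERATOR - COST_OF_COOPERATING
    return gain

# Mere similarity: others cooperate with a fixed probability q whatever
# I do, so defecting beats cooperating for every q.
q = 0.3
print(payoff(True, q), payoff(False, q))    # 28.7 vs 29.7 -> defect

# TDT-style dependence: others mirror my choice with probability r, so
# my cooperating actually buys their cooperation.
r = 0.3
print(payoff(True, r), payoff(False, 0.0))  # 28.7 vs 0.0 -> cooperate
```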
2wedrifid12y
Determining that the gesture amounts to less than the gesture of going into the polling booth and voting for one of the two party lizards seems rather difficult.
0TheOtherDave12y
Of course, it's in practice nearly impossible for me to determine through introspection whether what feels like a "spontaneous" decision on my part is in fact being inspired by some set of external stimuli, and if so which stimuli. And without that data, it's hard to predict the likelihood of other people being similarly inspired. So I have no business being too surprised if lots of people do think the same thing, either, even if I can't point to an inspirational essay in a community of similar reasoners as a mechanism. In other words, sometimes collective shifts in attitude take hold in ways that feel entirely spontaneous (and sometimes inexplicably so) to the participants.
0[anonymous]12y
He may be mistaken about how high-trust the society he lives in is. This is something it is actually surprisingly easy to be wrong about, since our intuitions aren't built for a society of hundreds of millions living across an entire continent; our minds don't understand that our friends, family and co-workers are not a representative sample of the actual "tribe" we are living in.
2wedrifid12y
Even if that is the case, he is still mistaken about game theory. While the 'high trust society' you describe would encourage cooperation to the extent that hypocrisy does not serve as a substitute, the justifications Desrtopa has given are in terms of game theory and TDT. It relies on acting as if other agents are TDT agents when they are not - an entirely different issue from dealing with punishment norms enforced by 'high trust' agents.
0[anonymous]12y
Sure. We are in agreement on that. But this might better explain why, on second thought, I think it doesn't matter, at least not in this sense, for the issue of whether educating people about global warming matters. I think we may have been arguing against a less-than-charitable interpretation of his argument, which isn't that topical a discussion (even if it serves to clear up a few misconceptions). Whether the less charitable interpretation is what he now thinks, or even what he actually intended, doesn't seem that relevant to me. "Rah cooperation" I think in practice translates into "I think I live in a high-trust enough society that it's useful to use this signal to get people to ameliorate this tragedy-of-the-commons situation I'm concerned about."
0Desrtopa12y
In which case you want an enforced policy conforming to the norm. A rational shepherd in a communal grazing field may not believe that if he doesn't let his flock overgraze, other shepherds won't either, but he'll want a policy punishing or otherwise preventing overgrazers.
3wedrifid12y
Yes, and this means that individuals with the ability to influence or enforce policy about global warming can potentially benefit somewhat from knowing about global warming. For the rest of the people (nearly everyone) knowledge about global warming is of no practical benefit.
-5Desrtopa12y
3[anonymous]12y
What I think you should be arguing here (and what on one level I think you were implicitly arguing) is that in a sufficiently high-trust society, one should spend more resources on educating people about global warming than on educating them about tomatoes having genes, if one wants to help them. It is for their own good, but not their personal good. Like a vaccine shot that has a high rate of nasty side effects but helps keep an infectious disease at bay. If you care about them, it can be rational to take the shot yourself if that's an effective signal to them that you aren't trying to fool them. By default they will be modelling you like one of them and interpreting your actions accordingly. Likewise, if you just happen to be enough better at deceit that they will fail to detect it, you can still use that signal to help them, even if you take a fake shot. Humans are often predictably irrational. The arational processes that maintain the high-trust equilibrium can be used to let you take withdrawals of cooperative behaviour from the bank when the rational incentives just aren't there. What game theory is good for in this case is realizing how much you are withdrawing, since a rational game-theory-savvy agent is a pretty good benchmark for some cost analysis. You naturally need to think about the cost to quickly gauge whether the level of trust in a society is high enough, and furthermore whether, if you burden it in this way, the equilibrium is still stable in the mid-term. If it's not, teach them about tomatoes.
-4[anonymous]12y
http://en.wikipedia.org/wiki/Effects_of_climate_change_on_humans Also, it's already having a fairly substantial effect on polar communities in the US, Canada and Russia, making it difficult to obtain enough food. Many of them are impoverished in the context of the national economy and still whaling-dependent in large part for enough food to survive. Any disruption is a direct threat to food availability.
2thomblake12y
I'm not sure how that's a response to what I said. Electing a president who opts to start a nuclear war would obviously be a political issue, and might have even worse effects on humans.
2[anonymous]12y
You said it's not an important issue for everyday life. Things that significantly impact health (how often are you exposed to pathogens and how severe are they?), weather (makes a big difference even for an urban-living person with access to a climate-controlled dwelling, like me in the Midwest), the availability of food and water (which you need for not dying), and the stability of where you live (loss of which compromises all the others and requires you to try to find somewhere else and see what happens there) seem like the very definition of important to everyday life.
2thomblake12y
What I meant was that knowing stuff about the issue isn't important for everyday life. While the availability of food and water is good to know about, what environmental conditions caused it is less important unless I'm a farmer or policy-maker. Similarly, a nuclear war would impact health, weather, and the availability of food and water, but I am much better off worrying about whether my car needs an oil change than worrying about whether my government is going to start a nuclear war.
-4[anonymous]12y
I can sort of agree, insofar as I can't myself direct the government to never under any circumstances actually do that, and I can't sequester an Industrial Revolution's worth of CO2 just by being aware of the problem, but I feel like it misses something. Not everyone is going to be equally unable to meaningfully contribute to solving the problem -- if a high baseline level of meaningful awareness of an issue is the norm, it seems like society is more likely to get the benefits of "herd immunity" to that failure mode. It's not guaranteed, and I wouldn't call it a sufficient condition by any means for solving the problem, but it increases the odds that any given person with great potential influence over the activities of society will be prepared to respond to it productively. I suppose if you think we'll get FAI soon, this is irrelevant -- it's a whole lot less efficient and theoretically stable a state than some superintelligence just solving the problem in a way that makes it a nonissue and doesn't rely on corruptible, fickle, perversely-incentivized human social patterns. I'm not so sanguine about that possibility m'self, although I'd love to be wrong. EDIT: I guess what I'm saying is, why would you NOT want information about something that might be averted or mitigated, but whose likely consequences are a severe change to your current quality of life?
2thomblake12y
I want all information about all things. But I don't have time for that. And given the option of learning to spot global warming or learning to spot an unsafe tire on my car, I'll have to pick #2. WLOG.
0TheOtherDave12y
Even if it turns out that you can leverage the ability to spot global warming into enough money to pay your next-door neighbor to look at your car tires every morning and let you know if they're unsafe?
-7[anonymous]12y
5Oligopsony12y
What should be astonishing about zero familiarity with the data beyond the fact that there's a scientific consensus?
5Desrtopa12y
I would be unsurprised by zero familiarity in a random sampling of the population, but I would have expected a greater degree of familiarity here as a matter of general scientific literacy.
0ArisKatsaris11y
Stop being astonished so easily. How much familiarity with climate science do you expect the average non-climate-scientist to actually have? I suspect that people displaying >95% certainty about AGW aren't much more "familiar with the data" than the people who display less certainty -- their most significant difference is that they put more trust in what is, in the USA, a political position. But I doubt you question the "familiarity with the data" of the people who are very, very certain of your preferred position.

The average LessWronger is almost certainly much more competent to evaluate that global temperatures have been rising significantly, and that at least one human behavior has had a nontrivial effect on this change in temperature, than to evaluate that all life on earth shares a common ancestral gene pool, or that some 13.75 billion years ago the universe began rapidly inflating. Yet I suspect that the modern evolutionary synthesis (including its common-descent thesis), and the Big Bang Theory, are believed more strongly by LessWrongers than is anthropogenic climate change.

If so, then it can't purely be a matter of LessWrongers' lack of expertise in climate science; there must be some sociological factors undermining LessWrongers' confidence in some scientific claims they have to largely take scientists' word for, while not undermining LessWrongers' confidence in all scientific claims they have to largely take scientists' word for.

Plausibly, the ongoing large-scale scientific misinformation campaign by established economic and political interests is having a big impact. Merely hearing about disagreement, even if you have an excellent human-affairs model predicting such disagreement i... (read more)

0A1987dM11y
I agree that they are likely at least somewhat more competent about the former than the latter, but why do you think they are almost certainly much more competent?
4Rob Bensinger11y
Evaluating common descent requires evaluating the morphology, genome, and reproductive behavior of every extremely distinctive group of species, or of a great many. You don't need to look at each individual species, but you at least need to rule out convergent evolution and (late) lateral gene transfer as adequate explanations of homology. (And, OK, aliens.) How many LessWrongers have put in that legwork? Evaluating the age of the universe requires at least a healthy understanding of contemporary physics in general, and of cosmology. The difficulty isn't just understanding why people think the universe is that old, but having a general enough understanding to independently conclude that alternative models are not correct. That's a very basic sketch of why I'd be surprised if LessWrongers could better justify those two claims than the mere claim that global temperatures have been rising (which has been in the news a fair amount, and can be confirmed in a few seconds on the Internet) and a decent assessment of the plausibility of carbon emissions as a physical mechanism. Some scientific knowledge will be required, but not of the holistic 'almost all of biology' or 'almost all of physics' sort indicated above, I believe.
1Desrtopa11y
I think you're seriously failing to apply the Principle of Charity here. Do you think I assume that anyone who claims to "believe in the theory of evolution" understands it well? RobbBB has already summed up why the levels of certainty shown in this survey would be anomalous when looked at purely from an "awareness of information" perspective, which is why I think that it would be pretty astonishing if lack of information were actually responsible. AGW is a highly politicized issue, but then, so is evolution, and the controversy on evolution isn't reflected among the membership of Less Wrong, because people aligned with the bundle of political beliefs which are opposed to it are barely represented here. I would not have predicted in advance that such a level of controversy on AGW would be reflected among the population of Less Wrong.
0[anonymous]11y
Desrtopa said: ArisKatsaris said: The problem with these arguments is that you need to (1) know the data and (2) know how other people would interpret it. With just (1), you'll end up comparing your probability assignments with others', and might mistakenly conclude that their deviation from your estimate is due to their lack of access to the data and/or understanding of it, unless you're comparing it to what your idea of some consensus is. Meanwhile, I don't know either; I'm just making a superficial observation, without knowing which of you knows which things here.
[-][anonymous]12y100

Perhaps they also want to signal a sentiment similar to that of Freeman Dyson:

I believe global warming is grossly exaggerated as a problem. It's a real problem, but it's nothing like as serious as people are led to believe. The idea that global warming is the most important problem facing the world is total nonsense and is doing a lot of harm. It distracts people's attention from much more serious problems.

0buybuydandavis11y
That gets to the issue I had with the question. "Significant" is just too vague. Everyone who gave an answer was answering a different question, depending on how they interpreted "significant". The survey question itself indicates a primary problem with the discussion of global warming - a conflation of temperature rise and societal cost of temperature rise. First, ask a meaningful question about temperature increase. Then, ask questions about societal cost given different levels of temperature increase.
1[anonymous]12y
It seems to be, possibly related to the Libertarian core cluster from OB. In my experience US Libertarians are especially likely to disbelieve in anthropogenic global warming, or to argue it's not anthropogenic, not potentially harmful, or at least not grounds for serious concern at a public policy level.
[-][anonymous]12y140

I would like to see this question on a future survey:

Are you genetically related to anyone with schizophrenia? (yes / no) How distant is the connection? (nuclear family / cousins, aunts and uncles / further / no connection)

I've repeatedly heard that a significant number of rationalists are related to schizophrenics.

Didn't the IQ section say to only report a score if you've got an official one? The percentage of people not answering that question should have been pretty high, if they followed that instruction. How many people actually answered it?

Also: I've already pointed out that the morality question was flawed, but after thinking about it more, I've realized how badly flawed it was. Simply put, people shouldn't have had to choose between consequentialism and moral anti-realism, because there are a number of prominent living philosophers who combine the two.

JJC Smart is an especially clear example, but there are others. Joshua Greene's PhD thesis was mainly a defense of moral anti-realism, but also had a section titled "Hurrah for Utilitarianism!" Peter Singer is a bit fuzzy on meta-ethics, but has flirted with some kind of anti-realism.

And other moral anti-realists take positions on ethical questions without being consequentialists; see, e.g., J.L. Mackie's book Ethics. Really, I have to stop myself from giving examples now, because they can be multiplied endlessly.

So again: normative ethics and meta-ethics are different issues, and should be treated as such on the next survey.

So we can only prove that 519 people post on Less Wrong.

Where by 'prove' we mean 'somebody implied that they did on an anonymous online survey'. ;)

You mean, as opposed to that kind of proof where we end up with a Bayesian probability of exactly one? :)

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Wouldn't it be (relatively) easy and useful to have a "stats" page in LW, with info like number of accounts, number of accounts with > 0 karma (total, monthly), number of comments/articles, ... ?

Nice idea! I am interested in such statistics.

1amcknight12y
This would allow for a running poll, if we want one.
0duckduckMOO12y
I think this is an underestimate if anything. People who skip the question might just not want to say, and at least a few people who post didn't take the survey. I don't see how enough non-posters could have been motivated to put down a random karma score to make up for these possibilities. I'd have preferred "at least 519."

What's the relation between religion and morality? I drew up a table to compare the two. This shows the absolute numbers and the percentages normalized in two directions (by religion, and by morality). I also highlighted the cells corresponding to the greatest percentage across the direction that was not normalized (for example, 22.89% of agnostics said there's no such thing as morality, a higher percentage than any other religious group).

Many pairs were highlighted both ways. In other words, these are pairs such that "Xs are more likely to be Ys" and vice-versa.

  • [BLANK]; [BLANK]
  • Atheist and not spiritual; Consequentialist
  • Agnostic; No such thing
  • Deist/Pantheist/etc.; Virtue ethics
  • Committed theist; Deontology

(I didn't do any statistical analysis, so be careful with the low-population groups.)
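For anyone who wants to reproduce this table from the published data, here is a sketch using pandas. The file name and column labels ("survey.xls", "Religion", "Morality") are hypothetical stand-ins for whatever the spreadsheet actually uses.

```python
# Sketch of the two-way normalized cross-tabulation described above.
import pandas as pd

df = pd.read_excel("survey.xls")  # hypothetical file name

counts = pd.crosstab(df["Religion"], df["Morality"])
by_religion = pd.crosstab(df["Religion"], df["Morality"], normalize="index")
by_morality = pd.crosstab(df["Religion"], df["Morality"], normalize="columns")

# A pair is "highlighted both ways" when a cell is the maximum of its
# column in the row-normalized table AND the maximum of its row in the
# column-normalized table.
both_ways = (by_religion.eq(by_religion.max(axis=0), axis=1)
             & by_morality.eq(by_morality.max(axis=1), axis=0))
print(both_ways)
```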

It looks like about 6% of respondents gave their answers in decimal probabilities instead of percentages. 108 of the 930 people in the data file didn't have any answers over 1 for any of the probability questions, and 52 of those did have some answers (the other 56 left them all blank), which suggests that those 52 people were using decimals (and that's 6% of the 874 who answered at least one of the questions). So to get more accurate estimates of the means for the probability questions, you should either multiply those respondents' answers by 100, exclude those respondents when calculating the means, or multiply the means that you got by 1.06.

=IF(MAX(X2:AH2)<1.00001,1,0) is the Excel formula I used to find those 108 people (entered in row 2, then copied and pasted to the rest of the rows)
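A rough Python equivalent of that check, for anyone working from the published spreadsheet (the file name and the column positions are assumptions):

```python
# Flag respondents whose probability answers never exceed 1, i.e. who
# likely answered in decimal probabilities rather than percentages.
import pandas as pd

df = pd.read_excel("survey.xls")   # hypothetical file name
prob_cols = df.columns[23:34]      # assumed positions of columns X..AH

answers = df[prob_cols].apply(pd.to_numeric, errors="coerce")
answered_any = answers.notna().any(axis=1)
all_at_most_one = answers.max(axis=1) <= 1.00001

decimal_users = answered_any & all_at_most_one
print(decimal_users.sum(), "likely decimal-probability respondents")

# Rescale their answers before computing means:
answers.loc[decimal_users] *= 100
print(answers.mean())
```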

0Zetetic12y
Nevermind.

There was much derision on the last survey over the average IQ supposedly being 146. Clearly Less Wrong has been dumbed down since then, since the average IQ has fallen all the way down to 140.

...

The average person was 37.6% sure their IQ would be above average - underconfident!

Maybe people were expecting the average IQ to turn out to be about the same as in the previous survey, and... (Well, I kind-of was, at least.)

I would be interested in a question that asked whether people were pescatarian / vegetarian / vegan, and another question as to whether this was done for moral reasons.

Community veterans were more likely to believe in Many Worlds, less likely to believe in God, and - surprisingly - less likely to believe in cryonics (significant at 5% level; could be a fluke).

It might be a fluke, but as one other respondent who talked about this (and got many upvotes) suggested, it could be that community veterans were more skeptical of the many, many things that have to go right for your scenario to happen, even if we generally believe that cryonics is scientifically feasible and worth working on.

When you say "the average person cryonically frozen today will at some point be awakened", that means not only that the general idea is workable, but that we are currently using an acceptable method of preserving tissues, that a large portion of current arrangements will continue to preserve those bodies/tissues until post-singularity, however long that takes, and that whatever singularity happens will result in people willing to expend resources fulfilling those contracts (so FAI must beat uFAI). Add all that up, and it can easily make for a pretty small probability, even if you do "believe in cryonics" in the sense of thinking that it is potentially sound tech.
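A toy version of that conjunction, with every step probability invented purely for illustration:

```python
# Hypothetical step probabilities: individually plausible steps can
# still leave "the average frozen person is revived" quite unlikely.
steps = {
    "the general idea is workable":                        0.7,
    "current preservation methods suffice":                0.5,
    "the body stays preserved until revival is possible":  0.5,
    "someone funds revival (FAI beats uFAI)":              0.5,
}

p = 1.0
for step, prob in steps.items():
    p *= prob
    print(f"{step}: {prob:.0%} (running product {p:.2%})")
# 0.7 * 0.5 * 0.5 * 0.5 = 8.75% -- "believing in cryonics" while
# assigning a small probability to actual revival.
```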

My interpretation of this result (with low confidence, as 'fluke' is also an excellent explanation) is that community veterans are better at working with probabilities based on complex conjunctions, and better at seeing the complexity of conjunctions based on written descriptions.

These averages strike me as almost entirely useless! If only half of the people taking the survey are Less Wrong participants, then the extra noise will overwhelm any signal when the probabilities returned by the actual members are near either extreme. Averaging probabilities (as opposed to, say, log-odds) is dubious enough even when not throwing in a whole bunch of randoms!

(So thank you for providing the data!)
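A small sketch of the averaging point, with made-up answers: a single 50% respondent mixed into nine well-informed 1% answers drags the arithmetic mean of probabilities far more than it drags the mean of log-odds.

```python
# Compare aggregating probabilities directly vs. via log-odds.
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Nine informed respondents at 1%, one random respondent at 50%:
answers = [0.01] * 9 + [0.50]

mean_prob = sum(answers) / len(answers)
mean_logodds = inv_logit(sum(logit(p) for p in answers) / len(answers))
print(f"mean of probabilities: {mean_prob:.3f}")   # 0.059
print(f"mean of log-odds:      {mean_logodds:.3f}")  # ~0.016
```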

As with the last survey, it's amazing how casually many people assign probabilities like 1% and 99%. I can understand in a few cases, like the religion questions, and Fermi-based answers to the aliens in the galaxy question. But on the whole it looks like many survey takers are just failing the absolute basics: don't assign extreme probabilities without extreme justification.

6Eugine_Nier12y
On the other hand, conjunctive bias exists. It's not hard to string together enough conjunctions that the probability of the statement should be in an extreme range.
5steven046112y
Does this describe any of the poll questions?
2Vladimir_Nesov12y
Are the questions for the 2009 survey available somewhere?
4Scott Alexander12y
You can now access it at https://docs.google.com/spreadsheet/viewform?hl=en_US&formkey=cF9KNGNtbFJXQ1JKM0RqTkxQNUY3Y3c6MA..#gid=0
0Vladimir_Nesov12y
Thanks!

I am officially very surprised at how many that is. Also officially, poorly calibrated at both the 50% (no big deal) and the 90% (ouch, ouch, ouch) confidence levels.

5Scott Alexander12y
You're okay. I asked the question about the number of responses then. When I asked the question, there were only 970 :)
0Morendil12y
Whew!

Are there any significant differences in gender or age (or anything else notable) between the group who chose to keep their responses private and the rest of the respondents?

At least one person was extremely confident in the year of publication of a different Principia Mathematica :) It's easy to forget about the chance that you misheard/misread someone when communicating beliefs.

Almost everyone responding (75%) believes there's at least a 10% chance of a 90% culling of human population sometime in the next 90 years.

If we're right, it's incumbent on us to consider sacrificing significant short-term pleasure and freedom to reduce this risk. I haven't heard any concrete proposals that seem worth pushing, but the proposing and evaluating needs to happen.

4ksvanhorn12y
What makes you think that sacrificing freedom will reduce this risk, rather than increase it?
1Jonathan_Graehl12y
Obviously it depends on the specific sacrifice. I absolutely hope we don't create a climate where it's impossible to effectively argue against stupid signalling-we-care policies, or where magical thinking automatically credits [sacrifice] with [intended result].
3dlthomas12y
If we have any sense of particular measures we can take that will significantly reduce that probability.
2Jonathan_Graehl12y
I agree that we shouldn't seek to impose or adopt measures that are ineffective. It's puzzling to me that I've thought so little about this. Probably 1) it's hard to predict the future; I don't like being wrong 2) maybe my conclusions would impel me to do something; doing something is hard 3) people who do nothing but talk about how great things would be if they were in charge -- ick! (see also Chesterton's Fence). But I don't have to gain power enough to save the world before it's worth thinking without reservation or aversion about what needs doing. (Chesterton again: "If a thing is worth doing, it is worth doing badly.").
2dlthomas12y
An important point that I had intended the grandparent to point at, but on reflection I realize wasn't clear, is that not all of that 10% corresponds to a single type of cataclysm. Personally, I'd put much of the mass in "something we haven't foreseen."

There is intelligent life elsewhere in the Universe: 69.4, (50, 90, 99)
There is intelligent life elsewhere in our galaxy: 41.2, (1, 30, 80)

You have to admit, that's pretty awful. There's only a 20% difference between them; does that seem right?

6SilasBarta12y
Percentage point difference in belief probability isn't all that meaningful. 50% to 51% is a lot smaller confidence difference than 98% to 99%. 69.4% probability means 3.27 odds; 41.2% probability means 1.70 odds. That means that, in the aggregate, survey takers find (3.27/1.70) = 1.924 -> 0.944 more bits of evidence for life somewhere in the universe, compared to somewhere in the galaxy. Is that unreasonably big or unreasonably small? EDIT: Oops, I can't convert properly. That should be 2.27 odds and 0.70 odds, an odds ratio of 3.24, or 1.70 more bits.
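Spelled out in code, using the corrected figures from the EDIT:

```python
# The probability -> odds -> bits arithmetic from the comment above.
import math

def to_odds(p):
    return p / (1 - p)

universe, galaxy = 0.694, 0.412
o_u, o_g = to_odds(universe), to_odds(galaxy)   # 2.27 and 0.70
ratio = o_u / o_g                               # ~3.24
bits = math.log2(ratio)                         # ~1.70

print(f"odds: {o_u:.2f} vs {o_g:.2f}; ratio {ratio:.2f}; {bits:.2f} bits")
```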
2Unnamed12y
If we take the odds ratio for each individual respondent (instead of the aggregate), the median odds ratio is 10.1 -> 3.3 more bits of evidence for life in the universe, compared to somewhere in the galaxy. 25th percentile odds ratio: 2.7 -> 1.4 more bits; 75th percentile odds ratio: 75.7 -> 6.2 more bits. (This is all using the publicly available data set; looking at the aggregate in that data set I'm getting an odds ratio of 3.6 -> 1.8 more bits.) People who believe in God/religion/the supernatural tend to give a lower odds ratio, but other than that the odds ratio doesn't seem to be associated with any of the other variables on the survey.
0gwern12y
I'm not comfortable with bit odds, especially in this context, so I dunno. How would you frame that in the opposite terms, for lack of existence?
2SilasBarta12y
That gives .44 odds non-existence in universe, 1.43 odds non-existence in galaxy, a ratio of 3.24, or 1.70 more bits of evidence for no (non-human) life in the galaxy compared to the universe in general. And I forget why those two answers are allowed to be different... EDIT: I made an error in the first calculation; as I suspected, the values are symmetric.
4wedrifid12y
Fear not! The 28% difference in the averages is meaningless. The difference I see in that quote is (90-30), which isn't nearly so bad - and the "1" is also rather telling. More importantly, by contrasting the averages with the medians and quartiles we can get something of a picture of what the data looks like. Enough to make a guess as to how it would change if we cut the noise by sampling only, say, those with >= 200 reported karma. (Note: I am at least as shocked by the current downvote of this comment as gwern is by his "20%", and for rather similar reasons.)
3dlthomas12y
Note that the top 25% put 99 or above for Universe. Of those, I would be surprised if there weren't a big chunk that put 100 (indicating 100 - epsilon, of course). This is not weighted appropriately. Likewise for the bottom 25% for Galaxy. Basically, "if you hugely truncate the outside edges, the average probabilities wind up too close together" should be entirely unsurprising.
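A tiny numerical illustration of the truncation point (the answers are made up): capping an answer of 99.9999% at 99% barely moves an arithmetic mean, but it slashes the implied odds by four orders of magnitude, which is why means of capped answers land so close together.

```python
# Effect of capping an extreme answer, on the mean vs. on the odds.
def odds(p):
    return p / (1 - p)

true_answer, capped_answer = 0.999999, 0.99
print(abs(true_answer - capped_answer))         # ~0.01 shift in a mean
print(odds(true_answer) / odds(capped_answer))  # ~10101x shift in odds
```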
0Jonathan_Graehl12y
I had the same reaction. The only defense I can imagine is that the second proposition is "in our galaxy" and not "in a random galaxy" - before looking, we should expect to find more other intelligent species in ours, which we know at least doesn't rule out the possibility :) I tried to guess how many our-galaxy intelligent-life-expectation equivalents exist in our universe. I personally find 50 (the 25% quartile) laughably low. 1:50 and (100-99):(100-80) are fairly extreme - just not extreme enough.
0Tyrrell_McAllister12y
"20% difference" between what and what?
2gwern12y
The point being that if there is intelligent life elsewhere in the universe and it hasn't spread (in order to maintain the Great Silence), then the odds of our 1 galaxy, out of the millions or billions known, being the host ought to be drastically smaller even if we try to appeal to reasons to think our galaxy special because of ourselves (eg. panspermia).
3Oligopsony12y
Such a set of probabilities may be justified if you're very uncertain (as seems superficially reasonable) about the baseline probability of life arising in any given galaxy. So perhaps one might assign a ~40% chance that life is just incredibly likely, and most every galaxy has multiple instances of biogenesis, and a ~40% chance that life is just so astronomically (har har har) improbable that the Earth houses the only example in the universe. This is almost certainly much less reasonable once you start thinking about the Great Filter, unless you think the Filter is civilizations just happily chilling on their home planet or thereabouts for eons, but then not everybody's read or thought about the Filter.
0gwern12y
I was kind of hoping most LWers at least had heard of the Great Silence/Fermi controversy, though.
0NancyLebovitz12y
Maybe there should be a question or two about the Fermi paradox.
0wedrifid12y
The bigger problem to me seems to be that both numbers (galaxy and universe) are way too high. It seems like it should be more in the range of "meta-uncertainty + epsilon" for both answers. Maybe "epsilon * lots" for the universe one, but even that should be lower than the uncertainty component.
1Desrtopa12y
If the strong filter is propagation through space, then for rates which people could plausibly assign to the occurrence of intelligent life, the two probabilities could be near-identical. What are the odds that a randomly selected population of 10,000 has any left-handed people? What are the odds that an entire country does?
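Computing that analogy (the rates are made up): the chance of at least one occurrence among n draws is 1 - (1 - p)^n, which saturates quickly, so the small sample and the whole country give nearly identical answers.

```python
# Left-handedness analogy with hypothetical per-person rates.
for p in (0.1, 0.01, 0.001):
    sample = 1 - (1 - p) ** 10_000
    country = 1 - (1 - p) ** 100_000_000
    print(f"rate {p}: sample of 10,000 -> {sample:.6f}, country -> {country:.6f}")
# Even at a rate of 0.001 per person, both probabilities are ~1.
```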
1Nornagest12y
Ditto if the strong filter is technological civilization (which strikes me as unlikely, given the anthropological record, but it is one of the Drake terms). If there are ten thousand intelligent species in the galaxy but we're the only one advanced enough to be emitting on radio wavelengths, we'd never hear about any of the others.

Older people were less likely to believe in transhumanist claims,

This seems to contradict the hypothesis that people's belief in the plausibility of immortality is linked to their own nearness to/fear of death. Were there any correlations with the expected singularity date?

Relevant SMBC (Summary: the futurists' predicted dates for the discovery of immortality fall slightly before the ends of their own expected lifespans)

[-][anonymous]12y50

2009:

  • 45% libertarianism
  • 38.4% liberalism
  • 12.3% socialism
  • 4.3% (6) conservativism
  • "not one person willing to own up to being a commie."

2011:

  • liberalism 34.5% (376)
  • libertarianism 32.3% (352)
  • socialism 26.6% (290)
  • conservatism 2.8% (30)
  • communism 0.5% (5)

I generally expect LW to grow less metacontrarian on politics the larger it gets, so this change didn't surprise me. An alternative explanation (and now that I think of it, more likely) is that the starting core group of LWers wasn't just more metacontrarian than usual, but probably also... (read more)

7taryneast12y
And the large increase in population seems to include a large portion of students... which my experience tells me often has a higher-than-average portion of socialist leanings.
4Nornagest12y
The relative proportions of liberalism, libertarianism, and conservatism haven't changed much, and I don't think we can say much about five new communists; by far the most significant change appears to be the doubled proportion of socialists. So this doesn't look like a general loss of metacontrarianism to me. I'm not sure how to account for that change, though. The simplest explanation seems to be that LW's natural demographic turns out to include a bunch of left-contrarian groups once it's spread out sufficiently from OB's relatively libertarian cluster, but I'd also say that socialism's gotten significantly more mainstream-respectable in the last couple of years; I don't think that could fully account for the doubling, but it might play a role.
-1A1987dM12y
What were the labels in the 2009 surveys, exactly? I am a libertarian socialist, and in the 2011 survey I voted “socialism” because the examples made clear that the American (capitalist) meaning of libertarianism was intended, but if the options had been simply labelled “socialism”, “libertarianism” etc. with no example I would have voted the latter. If there are many other libertarian socialists around, this might explain much of the difference between the 2009 and 2011 results.

There were a few significant demographics differences here. Women tended to be more skeptical of the extreme transhumanist claims like cryonics and antiagathics (for example, men thought the current generation had a 24.7% chance of seeing someone live to 1000 years; women thought there was only a 9.2% chance). Older people were less likely to believe in transhumanist claims, a little less likely to believe in anthropogenic global warming, and more likely to believe in aliens living in our galaxy.

This bit is interesting. If our age and gender affects our... (read more)

[-][anonymous]12y150

You have that backwards. If you're young and male, you should suspect that part of your confidence in global warming and lack of aliens is due to your demographics, and therefore update away from global warming and toward aliens.

1Oscar_Cunningham12y
Thanks! Fixed.
[-][anonymous]12y40

(9.9%) were atheist and spiritual

I thought you meant spiritual as in "Find something more important than you are and dedicate your life to it." did I misinterpret?

8taryneast12y
If an interpretation wasn't given, then you were free to make up whatever meant something to you. To contrast with yours, I interpreted spiritualism in this sense to match "non-theistic spiritualism", e.g. nature-spirits, transcendental meditation, wish-magic and the like.
5Polymeron12y
It seems to me that a reasonable improvement for the next survey would be to lower the ambiguity of these categories.
0scav12y
I think you are entitled to make up your own interpretation of a question like that :) Yours is a reasonable one IMO.

This made my trust in the community and my judgement of its average quality go down a LOT, and my estimate of my own value to the community, SIAI, and the world in general go up a LOT.

Which parts, specifically?

(it didn't have an effect like that on me, I didn't see that many surprising things)

3Armok_GoB12y
I expected almost everyone to agree with Eliezer on most important things, to have been here for a long time, to have read all the sequences, to spend lots of time here... In short, to be like the top posters seem to (and even with them the halo effect might be involved), except with lower IQ and/or writing skill.

This made my trust in the community and my judgement of its average quality go down a LOT...

I expected almost everyone to agree with Eliezer on most important things...

Alicorn (top-poster) doesn't agree with Eliezer about ethics. PhilGoetz (top-poster) doesn't agree with Eliezer. Wei_Dai (top-poster) doesn't agree with Eliezer on AI issues. wedrifid (top-poster) doesn't agree with Eliezer on CEV and the interpretation of some game and decision theoretic thought experiments.

I am pretty sure Yvain doesn't agree with Eliezer on quite a few things too (too lazy to look it up now).

Generally there are a lot of top-notch people who don't agree with Eliezer. Robin Hanson for example. But also others who have read all of the Sequences, like Holden Karnofsky from GiveWell, John Baez or Katja Grace who has been a visiting fellow.

But even Rolf Nelson (a major donor and well-read Bayesian) disagrees about the Amanda Knox trial. Or take Peter Thiel (SI's top donor) who thinks that the Seasteading Institute deserves more money than the Singularity Institute.

I am extremely surprised by this, and very confused. This is strange because I technically knew each of those individual examples... I'm not sure what's going on, but I'm sure that whatever it is it's my fault and extremely unflattering to my ability as a rationalist.

How am I supposed to follow my consensus-trusting heuristics when no consensus exists? I'm too lazy to form my own opinions! :p

8NancyLebovitz12y
I just wait, especially considering that which interpretation of QM is correct doesn't have urgent practical consequences.
0MatthewBaker12y
We just learned that neutrinos might be accelerated faster than light in certain circumstances. While this result doesn't give me too much pause, it certainly made me think about the possible practical consequences of successfully understanding quantum mechanics.
0NancyLebovitz12y
Fair enough. A deeper understanding of quantum mechanics would probably have huge practical consequences. It isn't obvious to me that figuring out whether the MWI is right is an especially good way to improve understanding of QM. My impression from LW is that MWI is important here for looking at ethical consequences.
0MatthewBaker12y
I share that impression :) Plus it's very fun to think about Everett branches and acausal trade when I pretend we would have a chance against a truly Strong AI in a box.
3satt12y
Sounds like plain old accidental compartmentalization. You didn't join the dots until someone else pointed out they made a line. (Admittedly this is just a description of your surprise and not an explanation, but hopefully slapping a familiar label on it makes it less opaque.)
7David Althaus12y
Holden Karnofsky has read all of the Sequences?

Holden Karnofsky has read all of the Sequences?

I wrote him an email to make sure. Here is his reply:

I've read a lot of the sequences. Probably the bulk of them. Possibly all of them. I've also looked pretty actively for SIAI-related content directly addressing the concerns I've outlined (including speaking to different people connected with SIAI).

5beoShaffer12y
IIRC Peter Thiel can't give SIAI more than he currently does without causing some form of tax difficulties, and it has been implied that he would give significantly more if this were not the case.
5gwern12y
Right. I remember the fundraising appeals about this: if Thiel donates too much, SIAI begins to fail the 501c3 regs, that it "receives a substantial part of its income, directly or indirectly, from the general public or from the government. The public support must be fairly broad, not limited to a few individuals or families."

I expected almost everyone to agree with Eliezer on most important things

That would have made my trust in the community go down a lot. Echo chambers rarely produce good results.

6komponisto12y
Surely it depends on which questions are meant by "important things".
5Kaj_Sotala12y
Granted.
1Armok_GoB12y
The most salient one would be religion.
1Nick_Roy12y
What surprised you about the survey's results regarding religion?
0Armok_GoB12y
That there are theists around?
7Nick_Roy12y
Okay, but only 3.5%. I wonder how many are newbies who haven't read many of the sequences yet, and I wonder how many are simulists.
4thomblake12y
Since you seem to have a sense of the community, your surprise surprises me. Will_Newsome's contrarian defense of theism springs to mind immediately, and I know we have several people who are theists or were when they joined Lw. Also, many people could have answered the survey who are new here.
9TheOtherDave12y
It's also fairly unlikely that all the theists and quasitheists on LW have outed themselves as such. Nor is there any particular reason they should.
0Armok_GoB12y
I assumed those were rare exceptions.
4[anonymous]12y
Why? Don't you encounter enough contrarians on LW?

You may think you encounter a lot of contrarians on LW, but I disagree - we're all sheep.

But seriously, look at that MWI poll result. How many LWers have ever seriously looked at all the competing theories, or could even name many alternatives? ('Collapse, MWI, uh...' - much less could discuss why they dislike pilot waves or whatever.) I doubt many fewer could do so than plumped for MWI - because Eliezer is such a fan...

I know I am a sheep and hero worshipper, and then the typical mind fallacy happened.

3[anonymous]12y
Heh. The original draft of my comment above included just this example. To be explicit, I don't believe that anyone with little prior knowledge about QM should update toward MWI by any significant amount after reading the QM sequence.

I disagree. I updated significantly in favour of MWI just because the QM sequence helped me introspect and perceive that much of my prior prejudice against MWI consisted of irrational biases such as "I don't think I would like it if MWI was true. Plus I find it a worn-out trope in science fiction. Also it feels like we live in a single world." or misapplications of rational ideas like "Wouldn't Occam's razor favor a single world?"

I still don't know much of the mathematics underpinning QM. I updated in favour of MWI simply by demolishing faulty arguments I had against it.

4[anonymous]12y
It seems like doing this would only restore you to a non-informative prior, which still doesn't cohere with the survey result. What positive evidence is there in the QM sequence for MWI?
3Luke_A_Somers12y
The positive evidence for MWI is that it's already there inside quantum mechanics until you change quantum mechanics in some specific way to get rid of it!
3kilobug12y
MWI, as beautiful as it is, won't fully convince me until it can explain the Born probability - other interpretations don't do it any better, so it's not a point "against" MWI, but it's still an additional rule you need to make the "jump" between QM and what we actually see. As long as you need that additional rule, I have a deep feeling we haven't reached the bottom.
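For reference, here is the standard textbook statement of the rule being pointed at (nothing in it is specific to this thread):

```latex
% Born rule: for a system in state \psi, the probability of observing
% measurement outcome i is the squared magnitude of the amplitude.
P(i) \;=\; \left| \langle i \mid \psi \rangle \right|^{2}
```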
3Luke_A_Somers12y
I see two ways of resolving this. Both are valid, as far as I can tell. The first assumes nothing, but may not satisfy. The second only assumes that we even expect the theory to speak of probability.

1. Well, QM says what's real. It's out there. There are many ways of interpreting this thing. Among those ways is the Born Rule. If you take that way, you may notice our world, and in turn, us. If you don't look at it that way, you won't notice us, much as if you use a computer implementing a GAI as a cup holder. Yet, that interpretation can be made, and moreover it's compact and yields a lot. So, since that interpretation can be made, apply the generalized anti-zombie principle - if it acts like a sapient being, it's a sapient being... And it'll perceive the universe only under interpretations under which it is a sapient being. So the Born Rule isn't a general property of the universe. It's a property of our viewpoint.

2. Just from decoherence, without bringing in Born's rule, we get the notion that sections of configuration space are splitting up and never coming back together again. If we're willing to take from that the notion that this splitting should map onto probabilities, then there is exactly one way of mapping from relative weights of splits onto probabilities, such that the usual laws of probability apply correctly. In particular: 1) probabilities are not always equal to zero. 2) the probability of a decoherent branch doesn't change after its initial decoherence (if it could change, it wouldn't be decoherent), and the rules are the same everywhere, and in every direction, and at every speed, and so on. The simplest way to achieve this is to go with 'unitary operations don't shift probabilities, just change their orientation in Hilbert Space'. If we require that the probability rule be simpler than the physical theory it's to apply to (i.e. quantum mechanics itself), it's the only one, since all of the other candidates effectively take QM, nullify it, a... (read more)
0GDC312y
1 and 2 together are pretty convincing to me. The intuition runs like this: it seems pretty hard to construct anything like an observer without probabilities, so there are only observers in as much as one is looking at the world according to the Born Rule view. So an easy anthropic argument says that we should not be surprised to find ourselves within that interpretation.
0Luke_A_Somers12y
Even better than that - there can be other ways of making observers. Ours happens to be one. It doesn't need to be the only one. We don't even need to stake the argument on that difficult problem being impossible.
0ArisKatsaris12y
I still had in my mind the arguments in favour of many-worlds, like "lots of scientists seem to take it seriously", and the basic argument that works for ever-increasing the size of reality: the more reality there is out there for intelligence to evolve in, the greater the likelihood of intelligence evolving. Well, it mentions some things like "it's deterministic and local, like all other laws of physics seem to be". Does that count?
0prase12y
Its determinism is of a very peculiar kind, not like that of other laws of physics seem to be.
1selylindi12y
Demographically, there is one huge cluster of Less Wrongers: 389 (42%) straight white (including Hispanics) atheist males (including FTM) under 48 who are in STEM. I don't actually know if that characterizes Eliezer. It's slightly comforting to me to know that a majority of LWers are outside that cluster in one way or another.

Could you make a copy of the survey (with the exact wordings of all the questions) available for download?

1Scott Alexander12y
I've re-opened the survey at https://docs.google.com/spreadsheet/viewform?formkey=dHlYUVBYU0Q5MjNpMzJ5TWJESWtPb1E6MQ , but please don't send in any more responses.
0[anonymous]12y
(To clarify the need for making this happen: it seems that since the survey was closed, it's no longer possible to find the survey questions anywhere.)
[-][anonymous]12y30

It would be neat if you posted a link to a downloadable spreadsheet like last time. I'd like to look at the data; if I happened to miss it via careless reading, sorry for bothering you.

Edit: Considering this is downvoted, I guess I must have missed it. I skimmed the post again and I'm just not seeing it; can someone please help with a link? :)

2nd Edit: Sorry missed it the first time!

5Emile12y
Last word of the post.
0[anonymous]12y
Thanks!

God (a supernatural creator of the universe) exists: 5.64, (0, 0, 1)
Some revealed religion is true: 3.40, (0, 0, .15)

This result is, not exactly surprising to me, but odd by my reading of the questions. It may seem at first glance like a conjunction fallacy to rate the second question's probability much higher than the first (which I did). But in fact, the god question, like the supernatural question referred to a very specific thing "ontologically basic mental entities", while the "some revealed religion is more or less true" que... (read more)

The other 72.3% of people who had to find Less Wrong the hard way.

Is it just me, or is there something not quite right about this as an English sentence?

7pedanterrific12y
Could be fixed by adding 'of' or removing 'who'
1MarkusRamikin12y
Right. For some reason the period instead of comma confused me much more than it should have.
1A1987dM12y
Yeah, which is ‘the hard way’ supposed to be? :-)

For the next survey:

160 people wanted their responses kept private. They have been removed. The rest have been sorted by age to remove any information about the time they took the survey. I've converted what's left to a .xls file, and you can download it here.

Karma is sufficient to identify a lot of people. You could give ranges instead (making sure there are enough people in each range).

What is the last column of the .xls file about?

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader.

This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of all respondents have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.

Is the issue too hard to grasp for most people or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?

The irony of this is that if, say, 83.5% of respondents instead thought UFAI was the most worrisome existential risk, that would likely be taken as evidence that the LW community was succumbing to groupthink.

2Sophronius12y
My prior belief was that people on Less Wrong would overestimate the danger of unfriendly AI, it being part of the reason for Less Wrong's existence. That probability has decreased since seeing the results, but as I see no reason to believe the opposite would be the case, the effect should still be there.
0TheOtherDave12y
I don't quite understand your final clause. Are you saying that you still believe a significant number of people on LW overestimate the danger of UFAI, but that your confidence in that is lower than it was?
0Sophronius12y
More or less. I meant that I now estimate a reduced but still non-zero probability of upwards bias, but only a negligible probability of a bias in the other direction. So the average expected upward bias is decreased but still positive. Thus I should adjust the probability of human extinction being due to unfriendly ai downwards. Of course, the possibility of less wrong over or underestimating existential risk in general is another matter.
8A1987dM12y
The question IIRC wasn't about the most worrisome, but about the most likely -- it is not inconsistent to assign to uFAI (say) 1000 times the disutility of nuclear war but only 0.5 times its probability. (ETA: I'm assuming worrisomeness is defined as the product of probability times disutility, or a monotonic function thereof.)
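The consistency point, computed with made-up numbers:

```python
# "Most likely" and "most worrisome" (probability x disutility) can
# pick different risks. All figures here are hypothetical.
p_nuclear, u_nuclear = 0.10, 1.0
p_ufai, u_ufai = 0.5 * p_nuclear, 1000.0 * u_nuclear

print("more likely:   ", "nuclear" if p_nuclear > p_ufai else "uFAI")
print("more worrisome:", "nuclear"
      if p_nuclear * u_nuclear > p_ufai * u_ufai else "uFAI")
# uFAI is half as likely, yet 500x as worrisome on this measure.
```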
3Giles12y
I think that worrisomeness should also factor in our ability to do anything about the problem. If I'm selfish, then I don't particularly need to worry about global catastrophic risks that will kill (almost) everyone - I'd just die and there's nothing I can do about it. I'd worry more about risks that are survivable, since they might require some preparation. If I'm altruistic then I don't particularly need to worry about risks that are inevitable, or where there is already well-funded and sane mitigation effort going on (since I'd have very little individual ability to make a difference to the probability). I might worry more about risks that have a lower expected disutility but where the mitigation effort is drastically underfunded. (This is assuming real-world decision theory degenerates into something like CDT; if instead we adopt a more sophisticated decision theory and suppose there are enough other people in our reference class then "selfish" people would behave more like the "altruistic" people in the above paragraph).
0A1987dM12y
Well, if you're selfish you'd assign more or less the same utility to all states of the world in which you're dead (unless you believe in afterlife), and in any event you'd assign a higher probability to a particular risk given that “the mitigation effort is drastically underfunded” than given that “there is already well-funded and sane mitigation effort going on”, but you do have a point.
8steven046112y
The sequences aren't necessarily claiming UFAI is the single most worrisome risk, just a seriously worrisome risk.
5thomblake12y
Don't forget - even if unfriendly AI wasn't a major existential risk, Friendly AI is still potentially the best way to combat other existential risks.
4kilobug12y
It's the best long-term way, probably. But if you estimate it'll take 50 years to get a FAI and that some of the existential risks have a significant probability of happening in 10 or 20 years, then you had better try to address them without requiring FAI - or you're likely to never reach the FAI stage. Among 7 billion humans, it's sane to have some individuals focus on FAI now, since it's a hard problem, so we have to start early; but it's also normal for not all of us to focus on FAI, and to focus also on other ways to mitigate the existential risks that we estimate are likely to occur before FAI/uFAI.
1cousin_it12y
How do you imagine a hypothetical world where uFAI is not dangerous enough to kill us, but FAI is powerful enough to save us?
8TheOtherDave12y
Hypothetically suppose the following (throughout, assume "AI" stands for significantly superhuman artificial general intelligence): 1) if we fail to develop AI before 2100, various non-AI-related problems kill us all in 2100. 2) if we ever develop unFriendly AI before Friendly AI, UFAI kills us. 3) if we develop FAI before UFAI and before 2100, FAI saves us. 4) FAI isn't particularly harder to build than UFAI is. Given those premises, it's true that UFAI isn't a major existential risk, in that even if we do nothing about it, UFAI won't kill us. But it's also true that FAI is the best (indeed, the only) way to save us. Are those premises internally contradictory in some way I'm not seeing?
7cousin_it12y
No, you're right. thomblake makes the same point. I just wasn't thinking carefully enough. Thanks!
5thomblake12y
I don't. Just imagine a hypothetical world where lots of other things are much more certain to kill us much sooner, if we don't get FAI to solve them soon.
5Dorikka12y
More that I think there's a significant chance that we're going to get blown up by nukes or a bioweapon before then.
5kilobug12y
For me the issue is with "the most". Unfriendly AI is a worrisome existential risk, but it still relies on a technological breakthrough that we can't clearly estimate, while a "bioengineered pandemic" is something that may very well be possible in the short-term future. That doesn't mean SIAI isn't doing an important job - Friendly AI is a hard task. If you start trying to solve a hard problem only when you're about to die without a solution, well, it's too late. So it's great that SIAI people are here to hack away at the edges of the problem now.
0michaelsullivan12y
The phrasing of the question was quite specific: "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" If I estimate a very small probability of either FAI or UFAI before 2100, then I'm not likely to choose UFAI as "most likely to wipe out 90% of humanity before 2100" if I think there's a solid chance for something else to do so. Consider that I interpreted the singularity question to mean "if you think there is any real chance of a singularity, then in the case that the singularity happens, give the year by which you think it has 50% probability." and answered with 2350, while thinking that the singularity had less than a 50% probability of happening at all. Yes, Yvain did say to leave it blank if you don't think there will be a singularity. Given the huge uncertainty involved in anyone's prediction of the singularity or any question related to it, I took "don't believe it will happen" to mean that my estimated chance was low enough to not be worth reasoning about the case where it does happen, rather than that my estimate was below 50%.

801 people (73.5%) were atheist and not spiritual, 108 (9.9%) were atheist and spiritual

I'm curious as to how people interpreted this. Does the latter mean that one believes in the supernatural but without a god figure, e.g. Buddhism or New Age? This question looked confusing to me at first glance.

People who believed in high existential risk were more likely to believe in global warming, more likely to believe they had a higher IQ than average, and more likely to believe in aliens (I found that same result last time, and it puzzled me then too.)

Why does it puzzle you?

5Jayson_Virissimo12y
I would have expected the opposite given Yvain's definition of "supernatural". The existence of an agent (or agents) that created the universe seems much more likely than the existence of ontologically basic mental entities. After all, one man's lead software designer of the simulation is another man's god.
4kilobug12y
Here we reach the usual definitional problem with "god". Is "god" just someone who created the universe, but with limits of his own, or is he omnipotent, omniscient, eternal, and perfect, as in the monotheist religions? The lead software designer of the simulation would be the first, but very likely not the second. Probably best to just taboo the word "god" in that context.
3pedanterrific12y
I assume because higher existential risk would seem to generalize to lower chances of aliens existing (since they would have faced the same or similar existential risks as us).
1Dreaded_Anomaly12y
A more subtle interpretation, and one that I expect accounts for at least some of the people in this category, is that high existential risk makes it more likely that relatively nearby aliens exist but will never reach the point where they can contact us.
2TheOtherDave12y
If I remember correctly, the terms were defined in the survey itself, such that "spiritual and atheist" was something like believing in ontologically basic mental entities but not believing in a God that met that description. I didn't find the question confusing, but I did find it only peripherally related to what most people mean by either term. That said, it is a standard LW unpacking of those terms.

I'd be interested in knowing what percentage of LWers attended a private high school [or equivalent in country of origin].

so this supports taking them as a direct line to prevailing rationalist opinion rather than the contingent opinions of one random group.

http://lesswrong.com/lw/82s/dont_call_yourself_a_rationalist/

3DavidAgain12y
More fundamentally than self-labelling, that's an utterly false dilemma. It helps show that the results weren't from a totally random set of 'people on that site then': they show SOMETHING. But what they show must be much more open to debate. To 'rationalist', you can add:
1) Has been exposed to LessWrong (sequences and community)
2) English-speaking (unless there were translations?)
3) Minded to take long online surveys, including at least the possibilities:
3a) Egotistical enough to think that your survey results must be included
3b) Dedicated enough to the LessWrong community to wish to contribute
3c) Generally publicly-minded
3d) Doesn't have enough to do
4) Likely to overestimate one's own IQ
It seems particularly odd to suggest these results are representative of rationalists while recognising both that the proportion of women has tripled since the last survey (and I don't think we're very close to working out what the true proportion is) and that men and women tend to have significantly different attitudes. The 'direct line to prevailing rationalist opinion' also comes straight after what I would guess is most skewed by point (1) above. I'd be shocked to see such high scores for Many Worlds, living in a simulation, or cryonics among rationalists outside LessWrong. Finally, could the last set of results itself have had an effect? The most likely effect would be in confirming in-group opinions, leading to 'evaporative cooling' (if I may!). It seems less likely, but people could also have calibrated directly: I'd be interested in how often that page was accessed ahead of people taking this year's survey. If 'rationalist' was used just to mean 'LessWrongian', then please ignore the above - and take Robert Lumley's advice!

I have no idea if this is universal. (Probably not.) However, in my area, using the term "blacks" in certain social circles is not considered proper vocabulary.

I don't have any huge problem with using the term. However, using it may be bad signalling and may leave Less Wrong vulnerable to pattern-matching.

What would you prefer? "Blacks" is the way I've seen it used in medical and psychological journal articles.

7J_Taylor12y
Journals use "blacks"? I had no idea it was used in technical writing. In some of my social circles, it just happens to be considered, at best, grandma-talk. Generally, within these circles, "black people" is used. However, I have no real preference regarding this matter.
1nazgulnarsil12y
as opposed to black fish.
1wedrifid12y
Seriously? That seems a little cavalier of them. The medical and psychological influence of race doesn't have all that much to do with skin color and a lot more to do with genetic population. That makes the term ambiguous to the point of uselessness - unless "blacks" is assumed to mean, say, just those of African ancestry, in which case they could be writing "African".
2Jack12y
What is your area?
2J_Taylor12y
Southern United States.

The plural can look weird, but as long as it doesn't come after a definite article, it's the standard term, and I've never met anyone who was offended by it. The usual politically correct substitute, African-American, is offensive in an international context.

3J_Taylor12y
I have never met any black person who was offended by it. I have met some white people who will take you less seriously if you use the term. However, if it is the standard term then it is the standard term. I certainly would not replace it with African-American.

Moreover, there are plenty of black people in the world who are not African-American.

There's an infamous video from a few years back in which an American interviewer makes this mistake when talking to an Olympic athlete of British nationality and African ancestry. It becomes increasingly clear that the interviewer is merely doing a mental substitution of "African-American" for "black" without actually thinking about what the former term means ...

7wedrifid12y
Come to think of it, we could put the emphasis on either of the terms.
3J_Taylor12y
I do not use "African-American" to refer to non-Americans.

I even feel weird calling Obama an African-American (though I still do it, because he self-identifies as one). In my mental lexicon it usually specifically refers to descendants of the African slaves taken to the Americas a long time ago, whereas Obama's parents are a White American of English ancestry and a Kenyan who hadn't been to the US until college.

Ironically, Obama is exactly the kind of person to whom that term should refer, if it means anything at all. Descendants of African slaves taken to the Americas a long time ago should have another term, such as "American blacks".

Despite his lack of membership in it, Obama self-identifies with the latter group for obvious political reasons; after all, "children of foreign exchange students" is not an important constituency.

6[anonymous]12y
For what it's worth, I'm also from the southern US, and I also have the impression that "blacks" is slightly cringey and "black people" is preferred.
0J_Taylor12y
I am glad that my case is not too aberrant.