TheOtherDave comments on 2011 Survey Results - Less Wrong

94 Post author: Yvain 05 December 2011 10:49AM


Comments (513)


Comment author: XiXiDu 04 December 2011 08:12:29PM 1 point

Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader.

This doesn't look very good from the point of view of the Singularity Institute. While 38.5% of respondents have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk.

Is the issue too hard to grasp for most people or has it so far been badly communicated by the Singularity Institute? Or is it simply the wisdom of crowds?

Comment author: TheOtherDave 04 December 2011 08:42:44PM 21 points

The irony of this is that if, say, 83.5% of respondents instead thought UFAI was the most worrisome existential risk, that would likely be taken as evidence that the LW community was succumbing to groupthink.

Comment author: Sophronius 04 December 2011 08:57:25PM 1 point

My prior belief was that people on Less Wrong would overestimate the danger of unfriendly AI, since that danger is part of the reason for Less Wrong's existence. That probability has decreased since seeing the results, but as I see no reason to believe the opposite bias would arise, the effect should still be there.

Comment author: TheOtherDave 04 December 2011 09:08:57PM 0 points

I don't quite understand your final clause. Are you saying that you still believe a significant number of people on LW overestimate the danger of UFAI, but that your confidence in that is lower than it was?

Comment author: Sophronius 04 December 2011 11:31:09PM *  -1 points

More or less. I meant that I now estimate a reduced but still non-zero probability of upward bias, but only a negligible probability of a bias in the other direction. So the average expected upward bias has decreased but is still positive. Thus I should adjust my probability of human extinction being due to unfriendly AI downwards. Of course, the possibility of Less Wrong over- or underestimating existential risk in general is another matter.
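The asymmetric-bias reasoning above can be sketched numerically. All probabilities and bias magnitudes below are hypothetical, chosen only to show the shape of the argument: if upward bias is merely less likely than before while downward bias stays negligible, the expected bias shrinks but remains positive, so the risk estimate should still be adjusted down.

```python
# Hypothetical sketch of Sophronius's update (numbers are illustrative only).

def expected_bias(p_up, bias_up, p_down, bias_down):
    """Expected upward bias in the community's UFAI risk estimate:
    a probability-weighted average of the two possible bias directions."""
    return p_up * bias_up + p_down * bias_down

# Prior: fairly likely the community overestimates UFAI risk.
prior = expected_bias(p_up=0.6, bias_up=0.10, p_down=0.05, bias_down=-0.10)

# After the survey: upward-bias probability reduced, downward still negligible.
posterior = expected_bias(p_up=0.3, bias_up=0.10, p_down=0.05, bias_down=-0.10)

# The expected bias is smaller than before, but still positive.
assert 0 < posterior < prior
print(round(prior, 3), round(posterior, 3))
```

Because only `p_up` changed while the negligible downward term stayed fixed, the expected bias cannot flip sign under these assumptions, which is the point of the comment.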