Gedusa comments on Existential Risk - Less Wrong

Post author: lukeprog 15 November 2011 02:23PM




Comment author: katydee 15 November 2011 11:55:37PM 7 points

SL0 people think "hacker" refers to a special type of dangerous criminal and don't know or have extremely confused ideas of what synthetic biology, nanotechnology, and artificial intelligence are.

Comment author: Gedusa 16 November 2011 12:11:47AM 2 points

Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks to SL0s in such a short space, perhaps without mentioning exotic technologies? And would doing so change their charitable behavior?

I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).

Comment author: katydee 16 November 2011 02:51:59AM 3 points

I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but the issue in question is that the risks most targeted by the "x-risks community" sit above that level: asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand in an abstract sense that x-risks are important, but they have a very limited understanding of what the specific risks the community targets actually involve.