katydee comments on Existential Risk - Less Wrong

Post author: lukeprog 15 November 2011 02:23PM


Comment author: Gedusa 15 November 2011 04:04:01PM 22 points

Whilst I really, really like the last picture, it seems a little odd to include it in the article.

Isn't this meant to be a hard-nosed introduction for people outside transhumanist/sci-fi circles? And doesn't the picture sort of work against that by being slightly sci-fi and weird?

Comment author: katydee 15 November 2011 09:36:33PM 6 points

Agreed, especially since it is presented with no explanation or context. If the aim was "here's a picture of what we might achieve," I would personally aim for more of a Shock Level 2 image rather than an SL3 one-- presuming, of course, that this is being written for someone around SL1 (which seems likely). That said, I might omit it altogether.

Comment author: Gedusa 15 November 2011 09:49:39PM 3 points

I thought this article was for SL0 people - that would give it the widest audience possible, which I thought was the point?

If it's aimed at SL0s, then we'd want to go for an SL1 image.

Comment author: katydee 15 November 2011 11:55:37PM 7 points

SL0 people think "hacker" refers to a special type of dangerous criminal, and they either don't know what synthetic biology, nanotechnology, and artificial intelligence are or have extremely confused ideas about them.

Comment author: Gedusa 16 November 2011 12:11:47AM 2 points

Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks to SL0s in such a short space, maybe without mentioning exotic technologies? And would they change their charitable behavior?

I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).

Comment author: katydee 16 November 2011 02:51:59AM 3 points

I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but I think the issue is that the risks most targeted by the "x-risks community" sit above that level-- asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand that x-risks are important in an abstract sense, but they have very limited understanding of what the specific risks the community is targeting actually involve.