tim comments on [LINK] Stephen Hawking warns of the dangers of AI - Less Wrong Discussion

10 Post author: Salemicus 02 December 2014 03:22PM

Comment author: tim 03 December 2014 02:23:29AM 7 points [-]

Just spitballing, but I would guess that this type of coverage is a net benefit, given the level of exposure and the subsequent curiosity generated by "holy crap, what is this thing that could spell doom for us all?" That is, it seems like singularitarian ideas need as much exposure as possible (any press is good press) and are a long way from having to worry about anti-AI picketers. Am I off here?

Comment author: MathiasZaman 03 December 2014 09:14:26AM 1 point [-]

I think you're correct. Ideas like AGI are mostly unknown to the general public, and anything that makes someone curious about that cluster of ideas is probably a good thing.

Comment author: John_Maxwell_IV 04 December 2014 02:50:18AM 0 points [-]

What's the causal pathway by which coverage like this improves things? If we want technical expertise or research funding, there seem to be more targeted channels. Broad coverage could be optimal if we want to make some kind of political move, though. What else?