3p1cd3m0n comments on How can I reduce existential risk from AI? - Less Wrong

Post author: lukeprog 13 November 2012 09:56PM




Comment author: 3p1cd3m0n 20 November 2014 03:31:22AM

Are there any decent arguments that working on developing safe AGI would increase existential risk? I've found none, but I'd like to know, because I'm considering AGI development as a career.

Edit: What about AI that's not AGI?

Comment author: ike 20 November 2014 02:39:48PM

Comment author: 3p1cd3m0n 21 November 2014 02:30:03AM

Thanks, that really helps. Do you know of any decent arguments that working on developing safe tool AI (or some other non-AGI AI) would increase existential risk?