3p1cd3m0n comments on How can I reduce existential risk from AI? - Less Wrong
Are there any decent arguments that working to develop safe AGI would increase existential risk? I've found none, but I'd like to know, because I'm considering AGI development as a career.
Edit: What about AI that's not AGI?
http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
Thanks, that really helps. Do you know of any decent arguments suggesting that working to develop safe tool AI (or some other non-AGI AI) would increase existential risk?