mkehrt comments on Other Existential Risks - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think this is a key point. While I think unFriendly AI could be a problem in an eventual future, other issues seem much more compelling.
As someone who has been a computer science grad student for four years, I'm baffled by claims about AI. While I do not do research in AI myself, I know plenty of people who do. No one is working on AGI in academia, and I think this is true in industry as well. To people who actually work on giving computers more human capabilities, AGI is an entirely science fictional goal. It's not even clear that researchers in CS think an AGI is a desirable goal. So, while I think it probable that AGIs will eventually exist, it's something that is distant.
Therefore, it seems that if one is interested in reducing existential risk, there are more important things to work on. Resource depletion, nuclear proliferation, and natural disasters like asteroids and supervolcanoes seem like much more useful targets.