mkehrt comments on Other Existential Risks - Less Wrong

32 Post author: multifoliaterose 17 August 2010 09:24PM




Comment author: mkehrt 19 August 2010 12:17:05AM  1 point

But I see no reason for assigning high probability to the notion that a runaway superhuman intelligence will be developed within such a short timescale. In the bloggingheads diavlog, Scott Aaronson challenges Eliezer on this point, and Eliezer offers some throwaway remarks which I do not find compelling. As far as I know, neither Eliezer nor anybody else at SIAI has provided a detailed explanation for why we should expect runaway superhuman intelligence on such a short timescale.

I think this is a key point. While I think unFriendly AI could be a problem in the eventual future, other issues seem much more compelling.

As someone who has been a computer science grad student for four years, I'm baffled by claims about AI. While I do not do research in AI, I know plenty of people who do. No one is working on AGI in academia, and I think this is true in industry as well. To people who actually work on giving computers more human capabilities, AGI is an entirely science fictional goal. It's not even clear that researchers in CS think an AGI is a desirable goal. So, while I think it probable that AGIs will eventually exist, it's something that is distant.

Therefore, it seems that, if one is interested in reducing existential risk, there are far more important things to work on. Resource depletion, nuclear proliferation, and natural disasters like asteroids and supervolcanoes seem like much more useful targets.