LucasSloan comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM




Comment author: LucasSloan 19 August 2010 05:09:59AM 3 points

It's hardly urgent, since AI researchers are nowhere near a runaway intelligence.

Sadly, there's no guarantee of that.

Comment author: Jonathan_Graehl 19 August 2010 08:21:44AM 2 points

Right, it's just that (in my opinion, and that of most other AI researchers[*]) it is overwhelmingly likely that we are in fact nowhere near that capability. Interestingly, though, I don't feel there's much difference between my probability of "AI good enough to run away, improving itself quickly past human level" in the next year and in the next 10 years: both are extremely close to 0, which is as specific as I can be at this point. That suggests I haven't really quantified my beliefs yet.

[*] I actually only work on natural language processing using fairly simple machine learning, i.e. not general AI.