Jonathan_Graehl comments on Existential Risk and Public Relations - Less Wrong

36 Post author: multifoliaterose 15 August 2010 07:16AM


Comment author: Jonathan_Graehl 16 August 2010 09:51:13PM 1 point [-]

Shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

Yes. It's hardly urgent, since AI researchers are nowhere near a runaway intelligence. But on the other hand, control of AI is going to be crucial and difficult eventually, and it would be good for researchers to be aware of it, if they aren't.

Comment author: LucasSloan 19 August 2010 05:09:59AM 3 points [-]

It's hardly urgent, since AI researchers are nowhere near a runaway intelligence.

Sadly, there's no guarantee of that.

Comment author: Jonathan_Graehl 19 August 2010 08:21:44AM *  2 points [-]

Right, it's just (in my opinion, and that of most other AI researchers[*]) overwhelmingly likely that we are in fact nowhere near that capability. It's interesting to me, though, that I don't feel there's much difference between my probability of "AI good enough to run away, improving itself quickly past human level" in the next year and in the next ten years: both are extremely close to 0, which is the most specific I can be at this point. That suggests I haven't really quantified my beliefs yet.

[*] I actually only work on natural language processing using really simple machine learning, i.e. not general AI.