ciphergoth comments on Open Thread, January 2011 - Less Wrong

Post author: ciphergoth 10 January 2011 11:14AM




Comment author: ciphergoth 10 January 2011 11:16:32AM 1 point

I was asked what the mainstream thinks of AI Risk. My understanding is that the only comment on the subject from "mainstream" AI research is a conference report that says something like "Some people think there might be a risk from powerful AI, but there isn't." This was discussed here on LW, but obviously searching for it given only that information is pretty much impossible, so help would be much appreciated - thanks!

Comment author: Vladimir_Nesov 10 January 2011 12:18:58PM 5 points

Hanson and then you posted a link to AAAI Panel on Long-term AI Futures (also discussed here).

From "Interim Report" (Aug 2009):

The group suggested outreach and communication to people and organizations about the low likelihood of the radical outcomes, sharing the rationale for the overall comfort of scientists in this realm, and for the need to educate people outside the AI research community about the promise of AI for enhancing the quality of human life in numerous ways, coupled with a re-focusing of attention on actionable, shorter-term challenges.

Comment author: ciphergoth 11 January 2011 10:43:43AM 2 points

Pursuing this further, I emailed focus group chair Professor David McAllester to ask if there had been any progress in "sharing the rationale". He replied:

The wording you mention in the report was supported by many people. However, I personally think the possibility of an AI chain reaction in the next few decades should not be dismissed. I am trying my very hardest to make it happen.

(I have his permission to share that.)

Comment author: timtyler 12 January 2011 03:29:21PM 1 point

AAAI ex-president Eric Horvitz seems ambivalent here:

Horvitz doubts that one of these virtual receptionists could ever lead to something that takes over the world. He says that's like expecting a kite to evolve into a 747 on its own.

So does that mean he thinks the singularity is ridiculous?

Mr. HORVITZ: Well, no. I think there's been a mix of views, and I have to say that I have mixed feelings myself.

Comment author: ciphergoth 10 January 2011 01:10:09PM 1 point

Impressive - how did you find this? I'm also impressed I managed to forget something I myself re-posted. Thanks!

Comment author: timtyler 10 January 2011 09:31:14PM 1 point

Why robots won't rule. See also the links here.

Comment author: timtyler 19 January 2011 03:32:57PM 0 points

Alon Halevy, a faculty member in the University of Washington's computer science department and an editor at the Journal of Artificial Intelligence Research, said he's not worried about friendliness.

"As a practical matter, I'm not concerned at all about AI being friendly or not," Halevy said. "The challenges we face are so enormous to even get to the point where we can call a system reasonably intelligent, that whether they are friendly or not will be an issue that is relatively easy to solve."

Comment author: timtyler 10 January 2011 09:26:43PM 0 points

"There's certainly a finite chance that the whole process will go wrong - and the robots will eat us." - Hans Moravec, here.