gjm comments on Google may be trying to take over the world - Less Wrong

22 [deleted] 27 January 2014 09:33AM

Comment author: gjm 27 January 2014 12:05:57PM 6 points [-]

Peter Norvig is at least in principle aware of some of the issues; see e.g. this article about the current edition of Russell & Norvig's AIMA (which mentions a few distinct ways in which AI could have very bad consequences and cites Yudkowsky and Omohundro).

I don't know what Google's attitude is to these things, but if it's bad then either they aren't listening to Peter Norvig or they have what they think are strong counterarguments, and in either case an outsider having a polite word is unlikely to make much difference.

Comment author: jamesf 28 January 2014 11:34:30PM 5 points [-]

Peter Norvig was a resident at Hacker School while I was there, and we had a brief discussion about existential risks from AI. He basically told me that he predicts AI won't surpass humans in intelligence by so much that we'll be unable to coerce it into not ruining everything. It was pretty surprising, if that is what he actually believes.

Comment author: XiXiDu 27 January 2014 01:15:21PM 5 points [-]

I don't know what Google's attitude is to these things, but if it's bad then either they aren't listening to Peter Norvig or they have what they think are strong counterargument...

My guess is that most people at Google who are working on AI take those risks somewhat seriously (i.e. less seriously than MIRI, but they still acknowledge them) but think that the best way to mitigate risks associated with AGI is to research AGI itself, because the problems are intertwined.