Eliezer_Yudkowsky comments on Do Earths with slower economic growth have a better chance at FAI? - Less Wrong

Post author: Eliezer_Yudkowsky 12 June 2013 07:54PM




Comment author: Eliezer_Yudkowsky 13 June 2013 02:32:36PM 3 points

If there were a sufficiently smart government with a sufficiently demonstrated track record of cluefulness, whose relevant officials seemed to genuinely get the idea of pro-humanity/pro-sentience/galactic-optimizing AI, the social ideals and technical impulse behind indirect normativity, and the fact that AI is incredibly dangerous, I would consider trusting them to be in charge of a Manhattan Project with thousands of researchers and enforced norms against information leakage, like government cryptography projects. This might not cure the required serial depth, but it would let FAI parallelize more without leaking info that could be used to rapidly construct UFAI. I usually regard this scenario as a political impossibility.

Things that result in fewer resources going into AI specifically would result in fewer UFAI resources without reducing overall economic growth, but it needs to be kept in mind that some such research occurs in financial firms pushing trading algorithms, and a lot more in Google, not just in places like universities.

Comment author: Benja 03 November 2013 08:54:35PM 0 points

> Things that result in fewer resources going into AI specifically would result in fewer UFAI resources without reducing overall economic growth, but it needs to be kept in mind that some such research occurs in financial firms pushing trading algorithms, and a lot more in Google, not just in places like universities.

To the extent that industry researchers publish less than academia (this seems particularly likely at financial firms, and to a lesser degree at Google), a hypothetical complete shutdown of academic AI research should reduce uFAI's parallelization advantage by two or more orders of magnitude, since the largest industrial uFAI teams are presumably much smaller than the entire academic AI research community. Even a partial reduction in academic AI funding should therefore translate fairly well into less parallel uFAI development.
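The orders-of-magnitude claim can be sanity-checked with a back-of-the-envelope calculation. The comment gives no actual counts, so the figures below are purely illustrative assumptions, chosen only to show how "much smaller industrial teams" translates into roughly two orders of magnitude:

```python
import math

# Purely illustrative assumptions -- the comment itself gives no numbers,
# only the claim that the largest industrial teams are "much smaller"
# than the entire academic AI research community.
academic_ai_researchers = 30_000   # assumed size of the academic AI community
largest_industrial_team = 200      # assumed size of the largest industrial team

# Ratio of parallel researchers available with vs. without academia.
ratio = academic_ai_researchers / largest_industrial_team
orders_of_magnitude = math.log10(ratio)

print(f"parallelization ratio: {ratio:.0f}x "
      f"(~{orders_of_magnitude:.1f} orders of magnitude)")
```

Under these assumed numbers the ratio comes out around 150x, i.e. a bit over two orders of magnitude; the qualitative conclusion is insensitive to the exact figures so long as academia outnumbers the largest private team by a factor of 100 or more.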