djcb comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong

8 Post author: ciphergoth 11 May 2012 07:16AM


Comment author: djcb 11 May 2012 05:05:05PM 1 point

This is a good argument, but it seems to assume that the first (F)AGI (in particular, a recursively self-improving one) is the direct product of human intelligence. I think a more realistic scenario is that any such AGI is the product of a number of generations of non-self-improving AIs -- machines that can be much better than humans at formal reasoning, finding proofs, and so on.

Does that avoid the risk of some runaway not-so-FAI? No, it doesn't -- but it reduces the chance. And in the meantime, there are many, many advances that could be made with a bunch of AIs that could reach, say, IQ 300 (as a figure of speech -- we need another unit for AI intelligence), even if only in a subdomain such as math/physics.