Viliam_Bur comments on Holden Karnofsky's Singularity Institute Objection 3 - Less Wrong

5 Post author: ciphergoth 11 May 2012 07:19AM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (8)

You are viewing a single comment's thread. Show more comments above.

Comment author: Viliam_Bur 13 May 2012 03:36:51PM *  2 points [-]

Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how?

If it tries to self-improve, and as a side effect turns the universe to computronium.

If it gains a general intelligence, and as a part of trying to provide better search results, it realizes that self-modification could bring much faster search results.

This whole idea of a harmless general intelligence is just imagining a general intelligence which is not general enough to be dangerous; which will be able to think generally, and yet somehow this ability will always reliably stop before thinking something that might end bad.

Comment author: XiXiDu 13 May 2012 05:45:12PM 1 point [-]

Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how?

If it tries to self-improve, and as a side effect turns the universe to computronium.

Thanks, I completely missed that. Explains a lot.