XiXiDu comments on Holden Karnofsky's Singularity Institute Objection 3 - Less Wrong

5 Post author: ciphergoth 11 May 2012 07:19AM




Comment author: XiXiDu 11 May 2012 11:29:58AM 0 points

Siri and Watson are both very domain-specific AIs, so evaluating their "intelligence" or "Friendliness" is relatively trivial - you just have to see if their outputs match the small subset of the programmer's utility function that corresponds to what the programmer designed them to do.

Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how? And why wouldn't you just skip that step?

More here.

Comment author: Viliam_Bur 13 May 2012 03:36:51PM 2 points

Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how?

If it tries to self-improve, and as a side effect turns the universe to computronium.

If it gains general intelligence and, as part of trying to provide better search results, realizes that self-modification could deliver those results much faster.

This whole idea of a harmless general intelligence is just imagining a general intelligence that is not general enough to be dangerous: one able to think generally, yet whose thinking somehow always reliably stops short of anything that might end badly.

Comment author: XiXiDu 13 May 2012 05:45:12PM 1 point

Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how?

If it tries to self-improve, and as a side effect turns the universe to computronium.

Thanks, I completely missed that. Explains a lot.

Comment author: siodine 11 May 2012 02:04:44PM 2 points

That reminds me of Project Pigeon, only with a weapon capable of destroying the planet, and we're the pigeon.

Comment author: Rain 11 May 2012 09:32:23PM 3 points

Assume you were to gradually transform Google Maps into a seed AI, at what point would it become an existential risk and how? And why wouldn't you just skip that step?

A very important part of Google Maps is Street View, which is created by cars driving around and taking pictures of everything. These could be viewed as 'arms' of the seed AI, along with its surveillance satellites, WiFi sniffing for more accurate geolocation, 3D modelling of buildings, and the recently introduced building-interior maps.

Which is to say, Super Google Maps could become a gigantic surveillance network, pervasively examining every corner of reality in order to stay as up to date as possible.

Comment author: nshepperd 14 May 2012 07:23:44AM 0 points

How does one do a gradual transformation on a discontinuous space such as the space of computer programs that are somehow related to navigation or general intelligence?