DaFranker comments on [Link] Values Spreading is Often More Important than Extinction Risk - Less Wrong

11 Post author: Pablo_Stafforini 07 April 2013 05:14AM




Comment author: DaFranker 08 April 2013 07:50:21PM

> BTW, I'm curious to hear more about the mechanics of your scenario. The AGI hacks itself onto every (Internet-connected) computer in the world. Then what?

Then the AI does precisely nothing except hide its presence and do the following:

Send one email to a certain nano-something research scientist whom the AI has identified as "easy to bribe into building stuff he doesn't know about in exchange for money". The AI hacks some money (or maybe even earns it "legitimately"), sends it to the scientist, then tells the scientist to follow a specific set of instructions for building a specific nanorobot.

The scientist builds the nanorobot. The nanorobot proceeds to slowly and invisibly multiply until it has reached 100% penetration of every single human-inhabited place on Earth. Then it synchronously begins a grey goo event in which every human is turned into piles of carbon and miscellaneous waste, and everything else required for humans (or other animals) to survive on Earth is transformed into raw materials more suitable for whatever the AI does next.

And I'm only scratching the surface of the most obvious ways an AI could cause an extinction event from the comfort of a few university networks, let alone from every single computer connected to the Internet.