JGWeissman comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

Post author: AnnaSalamon 19 May 2010 08:00AM 26 points


Comment author: JGWeissman 25 May 2010 07:32:10AM 0 points

I can see how a program well short of AGI could "crash" the internet: using preprogrammed behaviors to take over vulnerable computers, it could expand exponentially to fill the space of machines on the internet vulnerable to a given set of exploits, then run denial-of-service attacks against the secured critical servers that remain. But I would not even consider that an AI, and it would happen because its programmer pretty much intended it to happen. It is not an example of an AI getting out of control.
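The "expand exponentially until the vulnerable space is saturated" dynamic above can be illustrated with a harmless toy simulation (no exploit code, just random scanning against an abstract address space; all names and parameters here are hypothetical, chosen only to show the growth curve):

```python
import random

def simulate_spread(total_hosts=100_000, vulnerable_fraction=0.1,
                    scans_per_step=10, seed=0):
    """Toy model of worm-style propagation: each infected host probes
    scans_per_step random addresses per time step; a probe that lands on
    a not-yet-infected vulnerable host infects it. Returns the infected
    count after each step, from patient zero until saturation."""
    rng = random.Random(seed)
    vulnerable = int(total_hosts * vulnerable_fraction)
    infected = 1
    history = [infected]
    while infected < vulnerable:
        new = 0
        for _ in range(infected * scans_per_step):
            # Chance a random probe hits an uninfected vulnerable host.
            if rng.random() < (vulnerable - infected - new) / total_hosts:
                new += 1
        infected += new
        history.append(infected)
    return history
```

Running this shows the characteristic S-curve: roughly geometric growth at first, then a slowdown as probes increasingly hit already-infected hosts, which is why such a worm fills its vulnerable niche quickly but goes no further without new exploits.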

Comment author: Sniffnoy 25 May 2010 08:16:33AM 1 point

Of course, it's probably worth noting that a careless programmer has crashed the internet once before (the 1988 Morris worm), without anything like AI being involved, though admittedly that sort of thing probably wouldn't have the same effect today.

Comment author: Kevin 25 May 2010 09:55:23AM 0 points

It does work as an example of just how easy it would be for an AGI to crash the internet, or even just take it over.