orthonormal comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong
Not all AI is AGI; a non-self-improving intelligence might wreak some havoc (crash the Internet, etc.) without becoming a global existential threat.
I agree with your expectations in the case of a self-improving transhuman AGI.
I can see how a program well short of AGI could "crash" the internet: it could use preprogrammed exploits to take over vulnerable computers, spread exponentially until it filled the space of machines on the internet vulnerable to its particular set of exploits, and then run denial-of-service attacks against secured critical servers. But I would not even consider that an AI, and it would happen because its programmer pretty much intended it to happen. It is not an example of an AI getting out of control.
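To make the "spread exponentially until the vulnerable space is full" step concrete, here is a minimal sketch of that dynamic as a toy SI (susceptible-infected) epidemic model in Python. Every constant in it (host counts, scan rate) is an assumption invented for illustration; it models nothing about any real exploit or network.

```python
import random

# Toy SI model of worm-style spread through a fixed pool of
# vulnerable machines. All constants below are invented for
# illustration, not measurements of any real worm.

TOTAL_HOSTS = 1_000_000   # size of the scanned address space (assumed)
VULNERABLE = 50_000       # hosts exposing the exploited flaw (assumed)
SCANS_PER_TICK = 10       # probes each infected host fires per step (assumed)

infected = 10  # small initial foothold
for tick in range(1, 41):
    # Chance that one random probe lands on a still-uninfected
    # vulnerable host (ignores duplicate hits within a tick).
    p_hit = (VULNERABLE - infected) / TOTAL_HOSTS
    new = sum(
        random.random() < p_hit
        for _ in range(infected * SCANS_PER_TICK)
    )
    infected = min(infected + new, VULNERABLE)
    print(f"tick {tick:2d}: {infected:6d} infected")
    if infected == VULNERABLE:
        break
```

The printed counts grow roughly exponentially at first and then flatten as the pool of uninfected vulnerable hosts runs out, which is exactly the "fill the vulnerable space, then attack" saturation behavior described above.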
Of course, it's probably worth noting that this has happened once before: a careless programmer crashed the internet without anything like AI being involved (the Morris worm of 1988), though admittedly I don't think that sort of thing would have the same effect today.
It does work as an example of just how easy it would be for an AGI to crash the internet, or even just take it over.