loup-vaillant comments on Q&A with Abram Demski on risks from AI - Less Wrong
I couldn't have said it better. I'll keep it in mind if I ever have to explain the issue to laypeople. The key point I take away is that it matters little whether the AI has limbs, as long as it can get humans to do its bidding.
By the way, your scenario sounds both vastly more probable than a full-fledged hard takeoff, and nearly as scary. To take over the world, one doesn't need superhuman intelligence, nor self-modification, nor faster thought, nor even nanotech or other sci-fi technology. One just needs to be around the 90th percentile of human ability in various domains (typically those relevant to taking over the Roman Empire), and to be able to duplicate oneself.
This is about as weak a "human-level" AI as one could imagine. Yet it sounds like it could probably set up a singleton before we could stop it (stopping it would mean something like shutting down the Internet, or building another AI before the first takes over the entire network). And the way I see it, it is even worse.
Now, one caveat: I assumed the AI (or upload) would start out with enough processing power to demonstrate human-level abilities in real time. We could, on the other hand, imagine an AI for which we can only demonstrate that, if it ran a couple of orders of magnitude faster, it would be as capable as a human mind. That would delay a hard takeoff and make it more predictable (assuming no self-modification). It might also let us prevent the rise of a singleton.
I think this second case is the more probable one. A single AI seems unlikely.