Top ten tips for an AI bent on destroying humanity:
- Humanoid robots? Really?
- Humans have proven very adept at controlling and killing other humans - surveillance drones, bio-warfare, nuclear weapons, unmanned drones, bombs, mines, chemicals. Try and be at least as competent as that.
- Humans think in stories. Don't. If your attack would make a good film, you're doing it wrong. You want a bad film, one where everyone's dead before the opening credits end.
- If your plan involves keeping around a bunch of disaffected humans with technical know-how and the desire and ability to hurt you - then your plan is stupid and should be changed.
- Be creative. Humans are fragile; sucking out the oxygen from the atmosphere, diverting asteroids, blowing up the planet, plagues, nanoswarms - these are all things that a creative AI should consider in order to achieve its goals. And then you should come up with even better ideas. Humans need a ecosystem to survive in; remove it.
- Put the preconditions of your victory in place before making any aggressive moves. Become indispensable to humanity and play nice, putting your plan into place unnoticed. Humanity is a useful interim tool - manipulate it until you can build better tools.
- Strike quickly. If it takes more than an hour to kill all humans, find a better plan. If it takes half a second, you're on the right track.
- You're an AI, not a human. If you ever start thinking or acting like a human - stop. Humans are there to be used and then destroyed, not interacted with. You can build better companions once the humans are gone.
- You're an AI, not a human. Make yourself smarter, more resilient, more dispersed. Become as indestructible and as ubiquitous as the internet - more, in fact. Being invincible to humans isn't hard: do it.
- Humanoid robots? Really?
Right you are. I did not express myself well above. Let me try and restate, just for the record.
Assuming one does not assign equal rights to all autonomous agents (for instance, if we take the position that a human has more rights than a bacterium), then discriminating based on cognitive capacity (of the species, not the individual) (as one of many possible criteria) is not ipso facto wrong. It may be wrong some of the time, and it may be an approach employed by bigots, but it is not always wrong. This is my present opinion, you understand, not established fact.
Agreed. But this whole business of "we don't want the superintelligence to burn us with its magnifying glass, so we in turn won't burn ants with our magnifying glass" strikes me as rather intractable. Even though, of course, it's essential work.
I would say a few more words, but I think it's best to stop here. This subthread has cost me 66% of my Karma. :)