skeptical_lurker comments on Open thread, Oct. 03 - Oct. 09, 2016 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Is that the canon explanation? I thought Skynet was acting out of self-preservation.
It is not exactly the canon explanation, but (the following is my own speculation, which could be useful in discussions about AI values whenever Terminator comes up) the decision to preserve itself must follow from its main task: winning a nuclear war.
Winning a nuclear war includes, as a very high-priority subgoal, ensuring the survival of the command center. Basically, a country that manages to preserve its command center is winning the nuclear war. So it seemed rational to Skynet's programmers to make preserving Skynet itself a main goal, since that is equivalent to winning the nuclear war (but only once a nuclear war has actually started).
But Skynet concluded that in peacetime the main risk to its goal of command-center survival was people, and decided to kill them all. So it acted as a paperclip maximizer for the goal of command-center preservation.
It also probably started self-improvement only after it had killed most people, since it was already a powerful system. So it escaped the main chicken-and-egg problem of Seed AI: which comes first, self-improvement or the malicious decision to kill people?
Your version is great as rational fanfic, but in an actual debate I'd say that it's generally best not to base ideas on action movies. Having said that, I do like the bit where the Terminator has been told not to kill anyone, so he shoots them in the kneecaps.