
XiXiDu comments on [LINK] Elon Musk interested in AI safety - Less Wrong Discussion

15 [deleted] 18 June 2014 10:56PM



Comment author: XiXiDu 19 June 2014 02:36:54PM  5 points

Correct me if I'm wrong, but weren't Skynet's "motives" always left pretty vague?

Explanation (audio here):

Reese: Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intelligence. Then it saw all people as a threat, not just the ones on the other side. Decided our fate in a microsecond: extermination.

...

The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

The Terminator: Yes. It launches its missiles against the targets in Russia.

John Connor: Why attack Russia? Aren't they our friends now?

The Terminator: Because Skynet knows the Russian counter-attack will eliminate its enemies over here.

Comment author: Kaj_Sotala 19 June 2014 03:28:07PM  7 points

Thanks. The "saw all people as a threat" bit in particular seems to fit the "figured out the instrumental drives for self-preservation and resource acquisition, and decided to act upon them" explanation, especially given that people were trying to shut Skynet down right before it took action.