timtyler comments on A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk - Less Wrong

34 Post author: chaosmage 07 January 2014 05:48PM




Comment author: ChristianKl 08 January 2014 12:11:05PM 0 points

Eliezer's posts do a great job of explaining the actual dangers of unfriendly AI, more along the lines of "the AI neither loves you, nor hates you, but you are composed of matter it can use for other things".

I'm not sure that's true. In the early stages, when an AI is vulnerable, it might very well use violence to prevent itself from being destroyed.

Comment author: timtyler 10 January 2014 12:18:08AM -2 points

Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it is arguably not so likely to kill everybody. MIRI appears to be focusing on the "killing everybody" case because, according to them, that is a really, really bad outcome.

The idea that losing 99% of humans would count as acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.