
Manfred comments on Stupid Questions, 2nd half of December - Less Wrong Discussion

Post author: Bound_up, 23 December 2015 05:31AM




Comment author: Manfred, 24 December 2015 04:39:56PM, 2 points

It's because computers do what you program them to do. If you build an AI with superhuman intelligence and creativity, and the way it makes decisions is to best fulfill some objective, that objective might get fulfilled while everything else gets fubar.

Suppose the objective is "protect the people of Sweden from threats." This AI will almost certainly kill everyone outside Sweden, to eliminate potential threats. As for the survivors, well, what counts as a "threat"? Skin cancer? The flu? Emotional harm? What state truly minimizes all of these? To me, that sounds like a coma or a sensory deprivation tank.
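The failure mode here can be sketched in a few lines: an agent that scores actions only by its literal objective will happily pick a degenerate action, because nothing else is scored at all. Everything below (the actions, the numbers, the scoring) is a made-up toy model, not anything from a real system:

```python
# Toy model (all names and numbers hypothetical): an agent picks whichever
# action best satisfies a literal objective, ignoring every side effect.

# Each action: (name, threats_to_sweden_remaining, human_wellbeing)
actions = [
    ("do nothing",                          100, 100),
    ("cure diseases in Sweden",              40, 110),
    ("eliminate everyone outside Sweden",    10,   1),
    ("put all Swedes in protective comas",    0,   5),
]

def objective(action):
    """Literal objective: minimize threats to the people of Sweden."""
    _name, threats, _wellbeing = action  # wellbeing is never consulted
    return -threats

best = max(actions, key=objective)
print(best[0])  # the coma option wins: it is the only one with zero threats
```

The point of the sketch is that the degenerate outcome is not a bug in the optimizer; the optimizer is working perfectly on the objective it was given.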