ikrase comments on The Up-Goer Five Game: Explaining hard ideas with simple words - Less Wrong Discussion

29 Post author: RobbBB 05 September 2013 05:54AM

Comments (82)

Comment author: ikrase 07 September 2013 12:45:09AM 1 point [-]

*Complexity and Fragility of Value, my take:* When people talk about the things they want, they usually don't say very many things. But when you check what things people actually want, they want a whole lot of different things. People also sometimes don't realize that they want things because they have always had those things and never worried that they might lose them.

If we were to write a book of all the things people want so a computer could figure out ways to give people the things they want, the book would probably be long and hard to write. If there were some small problems in the book, the computer wouldn't be able to see the problems and would give people the wrong things. That would probably be very, very, very bad.

*Risks of Creative Super AIs:* If we make computers, they will never know to do anything that people didn't tell them to do. We can tell computers to try to figure things out for themselves, but even then we need to get them started on that. Computers will not know what people want unless people tell the computers exactly what they want. Very strong computers might get really stupid ideas about what they should do because they were wrong about what humans want. Also, very strong computers might do really bad things we don't want before we can turn them off.