hairyfigment comments on AI risk, new executive summary - Less Wrong

12 Post author: Stuart_Armstrong 18 April 2014 10:45AM




Comment author: shminux 20 April 2014 08:56:51PM 0 points [-]

My hangup is that it seems like a truly benevolent AI would share our goals.

In the way that a "truly benevolent" human would leave an unpolluted lake for fish to live in, instead of using it for our own purposes. A fish might think that humans share its goals, but the human goals would be infinitely more complex than the fish could understand.

Comment author: hairyfigment 20 April 2014 10:54:13PM -1 points [-]

...It sounds like you're hinting that humans are not actually benevolent towards fish. If we are, then we do share the fish's goals when it comes to outcomes for the fish - we just have other goals as well, ones that do not conflict. (I'm assuming the fish actually has clear preferences.) And a well-designed AI should not even have such additional goals. The lack of understanding "only" comes in with the means, or with our poor understanding of our own preferences.