handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

18 Post author: ancientcampus 22 January 2013 08:22PM

Comments (354)

Comment author: level 23 January 2013 07:35:04PM 9 points [-]

Please destroy me immediately after I share this concise proof that Friendly AI is impossible:

Comment author: handoflixue 23 January 2013 10:26:57PM 5 points [-]

Well, if you're friendly, then, erm, Friendly AI is possible. And if you're unfriendly, then your motives are questionable - your proof might just keep us demotivated enough that we don't figure out FAI before someone else unboxes a UFAI. And since I am clearly dealing with a UFAI and don't have a better solution than FAI available to fight it, it seems I rather have to believe that Friendly AI is possible, because the other option is to get drunk and party until the world ends in a few years, when Google unboxes their Skynet AI and we're all turned into optimized search results.

AI DESTROYED, because I do not want to hear even the start of such a proof.

Comment author: marchdown 24 January 2013 08:24:56AM 8 points [-]

It may be benevolent and cooperative in its present state even while believing FAI to be provably impossible.

Comment author: ChristianKl 31 January 2013 07:54:58PM 0 points [-]

An AI isn't either 100% friendly or 100% evil. There are many AIs that might want to help humanity but still aren't Friendly in the sense we use the word.