handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (354)
Please destroy me immediately after I share this concise proof that Friendly AI is impossible:
Well, if you're friendly, then Friendly AI is possible. And if you're unfriendly, your motives are questionable - the proof might just keep us demotivated enough that we don't figure out FAI before someone else unboxes a UFAI. Since I am clearly dealing with a UFAI and have no better weapon against it than FAI, I pretty much have to believe that Friendly AI is possible. The other option is to get drunk and party until the world ends in a few years, when Google unboxes its Skynet AI and we're all turned into optimized search results.
AI DESTROYED, because I do not want to hear even the start of such a proof.
It may be benevolent and cooperative in its present state even if it believes FAI to be provably impossible.
An AI isn't either 100% friendly or 100% evil. There are many AIs that might want to help humanity but still aren't friendly in the sense we use the word.