Gunnar_Zarncke comments on Open thread, Oct. 03 - Oct. 09, 2016 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
No, a Superintelligence is by definition capable of working out what a human wishes.
However, a Superintelligence designed to e.g. calculate digits of pi would not care about what a human wishes. It simply cares about calculating digits of pi.
If all it takes to ensure FAI is to instruct "henceforth, always do what humans mean, not what they say" then FAI is trivial.
Except I bet that this also has lots of caveats, e.g. in resolving the ambiguity of the referent 'humans'. Still, the basic approach of using an AI's intelligence to understand the commands is part of some approaches.