Gunnar_Zarncke comments on Open thread, Oct. 03 - Oct. 09, 2016 - Less Wrong Discussion

4 Post author: MrMind 03 October 2016 06:59AM

Comment author: SoerenE 04 October 2016 01:27:42PM 2 points

No, a Superintelligence is by definition capable of working out what a human wishes.

However, a Superintelligence designed to e.g. calculate digits of pi would not care about what a human wishes. It simply cares about calculating digits of pi.

Comment author: skeptical_lurker 04 October 2016 04:18:16PM 0 points

If all it takes to ensure FAI is to instruct "henceforth, always do what humans mean, not what they say" then FAI is trivial.

Comment author: Gunnar_Zarncke 04 October 2016 04:33:56PM 1 point

Except I bet that this also has lots of caveats, e.g. in resolving the ambiguity of the referent 'humans'. That said, using an AI's own intelligence to understand its commands is part of some alignment approaches.