imuli comments on AI caught by a module that counterfactually doesn't exist - Less Wrong Discussion

9 Post author: Stuart_Armstrong 17 November 2014 05:49PM

You are viewing a single comment's thread.

Comment author: imuli 17 November 2014 09:32:05PM * -1 points

> Well, a generic FAI would accomplish SG (by assumption - we may need to work a bit on this part).

Yes... I am skeptical that an FAI would accomplish that particular sub-goal, but it sounds like an acceptable use for (even non-rigorous!) encodings of human values.