hairyfigment comments on Stupid Questions May 2015 - Less Wrong Discussion

10 Post author: Gondolinian 01 May 2015 05:28PM

Comments (263)

Comment author: [deleted] 04 May 2015 09:42:28AM *  4 points

How can we have Friendly AI if we humans cannot even agree about our ethical values? This is a stupid question because this was probably the first problem solved - it's just so obvious - yet I cannot find it.

I have not finished the sequences yet, but they sound a bit optimistic to me - as if basically everybody were a modern utilitarian and the rest of the people just didn't count. To give you the really dumbest version of the question: what about religious folks? Is it just supposed to be a secular-values AI and they can go pound sand, or is some sort of agreement or compromise drawn up with them and then implemented? Is some sort of generally agreed Human Values system a prerequisite?

My issue here is that if we want to listen to everybody, then this will be a never-ending debate. And if you draw the line somewhere - e.g. include only people with reasonably utilitarian value systems - where exactly do you draw it, and on what grounds?

Comment author: hairyfigment 14 May 2015 05:47:24PM 0 points

As I told someone else, this pdf has preliminary discussion about how to resolve differences that persist under extrapolation.

The specific example of religious disagreements seems like a trivial problem to anyone who gets far enough to consider the question. Since there aren't any gods, the AI can ask what religious people would want if they accepted this fact. (This is roughly why I would oppose extrapolating only LW-ers rather than humanity as a whole.) But hey, maybe the question is more difficult than I think - we wouldn't specifically tell the AI to be an atheist if general rules of thinking did not suffice - or maybe this focus on surface claims hides some deeper disagreement that can't be so easily settled by probability.