
TheAncientGeek comments on Moral AI: Options - Less Wrong Discussion

Post author: Manfred 11 July 2015 09:46PM




Comment author: jacob_cannell 11 July 2015 10:01:36PM 3 points

Good summary. But concerning your final point:

For this approach like the others, it seems important to make the most progress toward learning human values in a way that doesn't require a very good model of the world.

I suspect this is impossible in principle, because human values are dependent on our models of the world.

The key is to develop methods that scale: values should become aligned as the world model approaches human-level capability.

Comment author: TheAncientGeek 12 July 2015 01:14:18PM 0 points

But then there is scope, apparently unexplored so far, for finding morally relevant subsets of value. You don't have to see everything through the lens of utilitarianism.