Stuart_Armstrong comments on Conservation of expected moral evidence, clarified - Less Wrong Discussion

11 Post author: Stuart_Armstrong 20 June 2014 10:28AM

Comment author: Stuart_Armstrong 23 June 2014 01:46:57PM 0 points

An interesting point, hinting that my approach to moral updating ( http://lesswrong.com/lw/jxa/proper_value_learning_through_indifference/ ) may be better than I supposed.

Comment author: Slider 25 June 2014 07:26:44AM 1 point

I was more getting at the fact that it narrows down the problem instead of generalising it. It reduces the responsibilities of the AI and widens those of humans. If you solved this problem, you would only get up to the level of the most virtuous human (which isn't exactly bad). Going beyond that would require ethics competency that would have to be added separately, since we are tying the AI's hands in this department.

Comment author: Stuart_Armstrong 25 June 2014 10:30:20AM 0 points

I take the point in practice, but there's no reason we couldn't design something that follows a path towards ultra-ethicshood while retaining the conservation property. For instance, if we could implement "as soon as you know your morals would change, change them", this would give us a good part of the "conservation" law.