PhilGoetz comments on Separate morality from free will - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I meant that we attribute morality to an agent. Suppose agent A1 makes a decision in environment E1 that I approve of morally, based on value set V. You can't come up with another environment E2 such that, if A1 were in environment E2 and made the same decision using the same mental steps and exactly the same mental representations, I would say A1's decision was immoral in E2 according to value set V.
You can easily come up with an environment E2 where the outcomes of A1's actions are bad. If you change the environment enough, you can come up with an E2 where A1's values consistently lead to bad outcomes, so that A1 "should" change its values (for some complicated and confusing value of "should"). But if we're judging the morality of A1's behavior according to a constant set of values, then properties of the environment that are unknown to A1 have no impact on our (or at least my) judgement of whether A1's decision was moral.
A simpler way of saying all this is: Information unknown to agent A has no impact on our judgement of whether A's actions are moral.
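The invariance being claimed can be sketched as a function signature: the judgment takes only the agent's decision process and the fixed value set as inputs, with no environment parameter at all. The following Python is a toy illustration of that structure only; all names (`judge_morality`, `V`, `a1_decision`) are hypothetical and not from the original comment.

```python
def judge_morality(decision_process, values):
    """Judgment depends only on the agent's internal decision process
    and the constant value set -- the environment is not an argument."""
    return values(decision_process)

# A toy value set V: approve of decisions made with honest intent.
V = lambda process: "moral" if process["intent"] == "honest" else "immoral"

# A1's decision, represented by its mental steps and representations.
a1_decision = {"intent": "honest", "steps": ["observe", "deliberate", "act"]}

# Whether A1 happens to be in E1 or E2 never enters the judgment,
# so the verdict is identical in both environments by construction.
judgment_in_E1 = judge_morality(a1_decision, V)
judgment_in_E2 = judge_morality(a1_decision, V)
assert judgment_in_E1 == judgment_in_E2 == "moral"
```

Because the environment is absent from the signature, the claim "information unknown to A has no impact on the judgment" holds trivially: there is no input through which it could have an impact.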