atucker comments on Are Deontological Moral Judgments Rationalizations? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't mean to imply that the kind of person who would kill the fat man would also kill for profit. The only observation that's necessary for my argument is that killing the fat man -- by which I mean actually doing so, not merely saying you'd do so -- indicates that the decision algorithms in your brain are sufficiently remote from the human standard that you can no longer be trusted to behave in normal, cooperative, and non-dangerous ways. (Which is then correctly perceived by others when they consider you scary.)
Now, to be more precise, there are actually two different issues there. The first is whether pushing the fat man is compatible with otherwise cooperative and benevolent behavior within the human mind-space. (I'd say even if it is, the latter is highly improbable given the former.) The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds. That's an extremely deep and complicated problem of game and decision theory, which is absolutely crucial for the future problems of artificial minds and human self-modification, but has little bearing on the contemporary problems of ideology, ethics, etc.
It seems like you can make similar arguments for virtue ethics and acausal trade.
If another agent is able to simulate you well, that helps them coordinate with you: they know what you will do without needing to communicate. When you can't form a good prediction of what other people will do, it takes far more computation to figure out how to get what you want, and whether it's compatible with them getting what they want.
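A toy sketch of this point (my own illustration, not from the original comment): in a coordination game with no communication, a partner can match a transparent agent perfectly just by simulating its publicly known rule, but against an opaque agent the best available move is guessing. The agent names and the meeting-place scenario are hypothetical.

```python
import random

# Four possible meeting spots; both agents want to end up at the same one.
OPTIONS = ["cafe", "library", "park", "station"]

def transparent_policy(options):
    """A publicly known, easily simulated rule: pick the alphabetically first option."""
    return min(options)

def coordinate_with_transparent(trials=1000):
    # The partner simulates the known rule and always matches it.
    hits = sum(transparent_policy(OPTIONS) == min(OPTIONS) for _ in range(trials))
    return hits / trials  # always 1.0

def coordinate_with_opaque(trials=1000, seed=0):
    # The opaque agent's choice looks random from outside, so the partner
    # can only guess uniformly; they match about 1/4 of the time.
    rng = random.Random(seed)
    hits = sum(rng.choice(OPTIONS) == rng.choice(OPTIONS) for _ in range(trials))
    return hits / trials
```

The gap between the two success rates (1.0 versus roughly 0.25) is the coordination value of being predictable that the comment is pointing at.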
By making yourself easily simulated, you open yourself up to ambient control; by not being easily simulated, you make yourself difficult to trust. Lawful Stupid seems to happen when you have too many rules enforced too inflexibly, and often (in literature) other characters can exploit that very easily.