Vladimir_Nesov comments on This Didn't Have To Happen - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm interested in a system that allows a John Stuart Mill and an Anton LaVey to peacefully coexist without attempting to judge who is more 'objectively' moral. I wish to be able to choose my own terminal values without having to perfectly align them with every other agent. Morality and ethics are then the minimal framework of agreed rules that allows us all to pursue our own ends without everyone 'defecting' (the prisoner's dilemma is too simple to be a really representative model, but it is a useful analogy).
The extent and nature of that minimal framework is an open question and is what I'm interested in establishing.
Think coordination. Two agents may coordinate their actions if doing so benefits both; in this sense, it's cooperation. Coordination doesn't include fighting over preferences; fighting over preferences just consists in the agents acting on the environment without coordination. But fighting should never happen, since the set of coordinated plans strictly contains the set of uncoordinated plans, and so it always contains a solution that is a Pareto improvement on the best uncoordinated one, that is, at least as good for both players as the best uncoordinated solution. Thus it's always useful to coordinate your actions with all other agents (and at that point you also need to divide the gains from coordination fairly between the sides; think Ultimatum game).
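As an illustrative sketch only (the specific game, the payoff numbers, and the choice of a maximin baseline for "uncoordinated" play are my assumptions, not part of the comment), a toy Stag Hunt shows the point: the space of coordinated joint plans contains every uncoordinated outcome, so searching it can only find something at least as good for both players.

```python
# Hypothetical payoff table for a 2-player Stag Hunt:
# each joint action maps to (payoff_A, payoff_B).
PAYOFFS = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def best_uncoordinated():
    """Assume each player, acting alone, plays the maximin action:
    the one maximizing their worst-case payoff."""
    actions = {a for pair in PAYOFFS for a in pair}

    def maximin_action(player):
        return max(actions, key=lambda a: min(
            PAYOFFS[(a, b) if player == 0 else (b, a)][player]
            for b in actions))

    joint = (maximin_action(0), maximin_action(1))
    return joint, PAYOFFS[joint]

def pareto_improvements(baseline):
    """Coordinated plans range over ALL joint actions, so this search
    space strictly contains the uncoordinated outcome itself."""
    return [(joint, p) for joint, p in PAYOFFS.items()
            if p[0] >= baseline[0] and p[1] >= baseline[1]
            and p != baseline]

joint, payoff = best_uncoordinated()
print(joint, payoff)                 # ('hare', 'hare') (3, 3)
print(pareto_improvements(payoff))   # [(('stag', 'stag'), (4, 4))]
```

Played independently, both players retreat to the safe 'hare' action; the coordinated joint plan ('stag', 'stag') Pareto-improves on that. How the surplus from coordinating gets split is exactly the Ultimatum-game-style bargaining question the comment ends on.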