
eli_sennesh comments on [LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality - Less Wrong Discussion

Post author: shminux, 19 June 2014 08:17PM (18 points)


Comments (46)


Comment author: [deleted] 20 June 2014 04:26:07PM, 2 points

Well yes, and attempting to group all actual or possible individuals into one tribe is a major mistake, one that I think should be given a name. As it turns out, the name I was already going to give it is at least partially in use: False Universalism.

Ethics ought to include some kind of reasoning for determining when a bit of universalism (some universalization of a maxim, in the Kantian or Timeless sense, or some value cohering, in the CEV sense) has become False Universalism. That way, groups or individuals who diverge from each other to the point of incompatibility can be handled as conflicting, rather than the ethical algorithm simply returning the answer that one side is Right and the other is Wrong, and that the Wrong shall be corrected until they follow the values of the Right.

Comment author: Leonhart 20 June 2014 10:40:42PM (edited), 2 points

"handled as conflicting"

"Handled as conflicting" seems to either mean "all-out war" or at best "temporary putting off of all-out war until we've used all the atoms on our side of the universe".

If the two sides shared your desire to be symmetrically peaceful with other sides whose only point of similarity with them was the desire to be symmetrically peaceful with other sides whose... then Universalism isn't false. That's its minimal case.

And if it does fail, it seems counterproductive for you to point that out to us, because while we're happily and deludedly trying to apply it, we're not genociding each other all over your lawn.

Comment author: [deleted] 21 June 2014 07:39:28PM (edited), 0 points

Sorry, when I said "False Universalism", I meant things like "one group wants to have kings, and another wants parliamentary democracy", or "one group wants chocolate, and the other wants vanilla". Common moral algorithms seem to simply assume that the majority wins: if the majority wants chocolate, everyone gets chocolate. Moral constructionism gets around this by saying that values may not be universal, but we can still come to game-theoretically sound agreements (even if they're only Timelessly sound, like Rawls' Theory of Justice) on how to handle the disagreements productively, thus wasting fewer resources on fighting each other when we could be spending them on Fun.

Basically, I think the correct moral algorithm is: use a constructionist algorithm to cluster people into groups who can then use realist universalisms internally.
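To make the two-stage proposal concrete, here is a toy sketch of the "cluster, then apply shared norms internally" idea. Everything in it is an illustrative assumption, not anything from the thread: agents are reduced to preference vectors, "compatibility" is just fractional agreement on issues above a threshold, and the clustering is a simple greedy pass. A real constructionist procedure would of course need a far richer notion of value compatibility.

```python
# Toy sketch: greedily cluster agents whose value profiles are mutually
# "compatible", so each cluster could, in principle, apply a shared
# (locally universal) moral framework internally. All names, profiles,
# and the compatibility measure are hypothetical illustrations.

def compatible(a, b, threshold=0.5):
    """Two agents count as 'compatible' if their preference vectors agree
    on more than `threshold` of the issues (a purely illustrative measure)."""
    agreements = sum(1 for x, y in zip(a, b) if x == y)
    return agreements / len(a) > threshold

def cluster_agents(profiles, threshold=0.5):
    """Greedy clustering: place each agent in the first cluster where it is
    compatible with every existing member; otherwise start a new cluster."""
    clusters = []
    for name, prefs in profiles.items():
        for cluster in clusters:
            if all(compatible(prefs, profiles[m], threshold) for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical agents: the first two mostly agree with each other, as do
# the last two, but the pairs disagree across the divide.
profiles = {
    "monarchist_1": (1, 1, 0, 1),
    "monarchist_2": (1, 1, 0, 0),
    "democrat_1":   (0, 0, 1, 0),
    "democrat_2":   (0, 0, 1, 1),
}

print(cluster_agents(profiles))
# → [['monarchist_1', 'monarchist_2'], ['democrat_1', 'democrat_2']]
```

On this picture, the inter-cluster relationship is then governed not by one side's values winning, but by the sort of game-theoretic agreement described above.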