Scott Aaronson suggests that ranking people by morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
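The quoted procedure is essentially power iteration. Below is a minimal sketch, assuming a non-negative "cooperation matrix" whose (i, j) entry counts how much person i cooperated with person j; the function name, the toy matrix, and the normalization choice are my own illustration, not anything from Scott's post.

```python
import numpy as np

def morality_credits(cooperation, iterations=100, tol=1e-10):
    """Power iteration on a cooperation matrix.

    cooperation[i, j] is how much person i cooperated with person j
    (assumed non-negative). Everyone starts with equal credits; each
    round, a person's new credits are the cooperation-weighted sum of
    the credits of the people they cooperated with. The fixed point is
    the principal eigenvector of the matrix.
    """
    n = cooperation.shape[0]
    credits = np.ones(n) / n          # equal "morality starting credits"
    for _ in range(iterations):
        new = cooperation @ credits   # gain more by cooperating with high-credit people
        new /= new.sum()              # renormalize so credits stay comparable
        if np.abs(new - credits).max() < tol:
            break
        credits = new
    return credits

# Toy example: persons 0-2 cooperate with each other; person 3 cooperates with no one.
coop = np.array([[0, 1, 1, 0],
                 [1, 0, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=float)
print(morality_credits(coop))   # persons 0-2 share the credits; person 3 gets ~0
```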
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
EDIT: I am guessing that after judicious application of this algorithm one would end up with something like the other Scott A's Archipelago: loosely connected components, each with its own definition of morality. UPDATE: He chimes in.
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.
Sorry, when I said "False Universalism", I meant things like "one group wants to have kings, and another wants parliamentary democracy", or "one group wants chocolate, and the other wants vanilla". Common moral algorithms seem to simply assume that the majority wins: if the majority wants chocolate, everyone gets chocolate. Moral constructionism gets around this by saying: values may not be universal, but we can come to game-theoretically sound agreements (even if they're only Timelessly sound, like Rawls' Theory of Justice) on how to handle the disagreements productively, so that fewer resources are wasted on fighting each other and more can be spent on Fun.
Basically, I think the correct moral algorithm is: use a constructionist algorithm to cluster people into groups, each of which can then use a realist universalism internally.
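As a rough sketch of that two-step picture: threshold the same kind of hypothetical cooperation matrix as above and take connected components as the "islands", then run the eigenvector ranking within each island. The thresholding rule and the helper name here are placeholders for whatever constructionist clustering algorithm one actually prefers.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def archipelago(cooperation, threshold=0.5):
    """Split people into islands that cooperate mostly among themselves.

    Edges below `threshold` are dropped, then connected components of the
    remaining graph are the islands. Each island can run its own
    principal-eigenvector ranking internally.
    """
    adjacency = csr_matrix((cooperation >= threshold).astype(float))
    n_islands, labels = connected_components(adjacency, directed=False)
    return [np.flatnonzero(labels == k) for k in range(n_islands)]

# Two groups that cooperate internally but not with each other.
coop = np.array([[0, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(archipelago(coop))   # [array([0, 1]), array([2, 3])]
```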