Scott Aaronson suggests that ranking people by morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
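To make the quoted procedure concrete, here is a minimal sketch in Python (the function name, the toy cooperation matrix, and the convergence details are mine, not Scott's): repeatedly redistributing the credits through the cooperation matrix is just power iteration, so the credits converge to the principal eigenvector he mentions.

```python
# Minimal sketch of the quoted scheme; names and details are my own.
# cooperation[a][b] > 0 means person A cooperated with person B, and
# A gains credit in proportion to the credit B already has. Iterating
# this redistribution is power iteration, which (for well-behaved
# matrices) converges to the principal eigenvector. Real PageRank adds
# a damping factor to guarantee convergence; omitted here for brevity.
import numpy as np

def eigenmorality(cooperation, iterations=1000, tol=1e-12):
    C = np.asarray(cooperation, dtype=float)
    n = C.shape[0]
    scores = np.full(n, 1.0 / n)   # equal "morality starting credits"
    for _ in range(iterations):
        new = C @ scores           # A gains credit from each B it cooperates with
        new /= new.sum()           # keep total credit constant
        if np.abs(new - scores).sum() < tol:
            break
        scores = new
    return scores

# Toy community: 0 and 1 cooperate with each other; 2 cooperates
# with both of them but receives no cooperation in return.
C = [[0, 1, 0],
     [1, 0, 0],
     [1, 1, 0]]
print(eigenmorality(C))  # roughly [0.25, 0.25, 0.5]
```

As the quote notes, the loop can be skipped entirely by computing the eigenvector with the largest eigenvalue directly, e.g. with `np.linalg.eig`.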
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
EDIT: I am guessing that after judicious application of this algorithm one would end up with something like the other Scott A's Archipelago: loosely connected components, each with its own definition of morality. UPDATE: He chimes in.
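One quick way to test that guess on a given cooperation matrix (a sketch; the cutoff value and function name are hypothetical choices of mine) is to drop the weak cooperation edges and count the graph components that remain:

```python
# Sketch of the "archipelago" guess; the 0.5 cutoff is arbitrary.
# Drop cooperation edges below the cutoff, then count the weakly
# connected components left over. Each component is one "island"
# with its own internal notion of who counts as moral.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def islands(cooperation, cutoff=0.5):
    C = np.asarray(cooperation, dtype=float)
    strong = csr_matrix(C >= cutoff)   # keep only strong cooperation
    return connected_components(strong, directed=True, connection='weak')

# Two pairs that cooperate internally but barely across the divide.
C = [[0, 1,   0, 0],
     [1, 0,   0, 0],
     [0, 0,   0, 1],
     [0, 0.1, 1, 0]]
n, labels = islands(C)
print(n, labels)  # 2 islands, labels [0, 0, 1, 1]
```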
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.
Well yes, and attempting to group all actual or possible individuals into one tribe is a major mistake, one that I think should be given a name. As it turns out, the name I was already going to give it is at least partially in use: False Universalism.
Ethics ought to include some way of determining when a bit of universalism (some universalization of a maxim, in the Kantian or Timeless sense, or some cohering of values, in the CEV sense) has become False Universalism. Then groups or individuals who diverge from each other to the point of incompatibility can be handled as conflicting, rather than the ethical algorithm simply returning that one side is Right, the other is Wrong, and the Wrong shall be corrected until they follow the values of the Right.
"Handled as conflicting" seems to either mean "all-out war" or at best "temporary putting off of all-out war until we've used all the atoms on our side of the universe".
If the two sides shared your desire to be symmetrically peaceful with other sides whose only point of similarity with them was the desire to be symmetrically peaceful with other sides whose... then Universalism wouldn't be false. That's its minimal case.
And if it does fail, it seems counterproductive for you to point that out to us, because while we're happily and deludedly trying to apply it, we're not genociding each other all over your lawn.