Scott suggests that ranking morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
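To make the quoted recipe concrete, here is a minimal sketch in Python/NumPy of the iterative credit update and the principal-eigenvector shortcut, run on a tiny made-up cooperation matrix. The function name morality_credits, the matrix values, and the normalization details are my own illustration of the general idea, not code from Scott's post.

```python
import numpy as np

def morality_credits(cooperation, iterations=1000, tol=1e-10):
    """Iteratively distribute "morality credits" over a cooperation matrix.

    cooperation[i, j] >= 0 says how much person i cooperates with person j.
    Everyone starts with equal credits; each round, a person's new credit is
    their cooperation with every other person, weighted by how many credits
    that person already has. The fixed point is proportional to the principal
    eigenvector of the matrix.
    """
    n = cooperation.shape[0]
    credits = np.full(n, 1.0 / n)                # equal starting credits
    for _ in range(iterations):
        new_credits = cooperation @ credits      # cooperate-with-the-moral update
        new_credits /= new_credits.sum()         # keep the total number of credits fixed
        if np.abs(new_credits - credits).max() < tol:
            return new_credits
        credits = new_credits
    return credits

# Made-up 3-person community: persons 0 and 1 cooperate heavily with each other,
# person 2 cooperates only a little with anyone.
C = np.array([[0.0, 3.0, 0.5],
              [3.0, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(morality_credits(C))

# The shortcut Scott mentions: read the same ranking off the principal eigenvector directly.
eigvals, eigvecs = np.linalg.eigh(C)             # C is symmetric in this toy example
principal = np.abs(eigvecs[:, np.argmax(eigvals)])
print(principal / principal.sum())
```

Both printouts give the same ranking, which is the point of the "shortcut": the iterative credit-passing converges to the principal eigenvector of the cooperation matrix.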
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
EDIT: My guess is that after judicious application of this algorithm one would end up with something like the other Scott A's Archipelago: loosely connected components, each with its own definition of morality. UPDATE: He chimes in.
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.
By the way, this is also related to the argument in "Well-Kept Gardens Die By Pacifism". When we design a system for moderating a web community, we are choosing between "order" and "chaos", not between "good" and "evil".
We can move the power to the moderator, to some inner circle of users, to the most active users, even to the users with the most sockpuppets, but we can't just move it to "good". We can choose which kind of people or which kind of behavior gets the most power, but we can't make that power magically disappear if they try to abuse it, because any rule designed to prevent abuse can also be abused. The values have to come from outside the voting system, from the humans who use it. So in the end, the only reasonable choice is to design the system to preserve the existing power, whatever it is, allowing change only when it is initiated by the currently existing power, because the only alternative is to let forces from outside the garden optimize for their values, again whatever they are, not only the "good" ones. And yes, if the web community had horrible values at the beginning, a proper moderating system will preserve them. That's not a bug; it's a side effect of a feature. (Luckily, on the web, you have the easy option of leaving the community.)
In this sense, we have to realize that the eigen-whatever system proposed in the article, if designed correctly (how to do this specifically is still open to discussion), would capture something like "the applause lights of the majority of the influential people". If the "majority of the influential people" are evil, or just plain stupid, the eigen-result can easily contain evil or stupidity. It almost certainly contains religion and other irrationality. At best, this system is a useful tool for seeing what the "majority of influential people" think morality is (as V_V said), which is itself a very nice result for a mathematical equation, but I wouldn't feel immoral for disagreeing with it on some specific points. It also misses the "extrapolated" part of CEV: if people's moral opinions are based on incorrect or confused beliefs, the result will contain morality based on incorrect beliefs, so it could recommend doing both X and Y even when X and Y contradict each other.