Scott Aaronson suggests that ranking people by morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
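A minimal sketch of that update rule in Python (the function name, matrix, and toy numbers are mine, not Scott's): power iteration on a small cooperation matrix, renormalizing each round until the credits converge.

```python
import numpy as np

def morality_credits(C, iters=100):
    # C[i, j] = 1 if person i cooperated with person j, else 0.
    n = C.shape[0]
    credits = np.ones(n) / n        # equal "morality starting credits"
    for _ in range(iters):
        credits = C @ credits       # A gains more by cooperating with high-credit B
        credits /= credits.sum()    # renormalize so credits stay comparable
    return credits

# Toy community: persons 0 and 1 cooperate with each other; person 2 with no one.
C = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)
print(morality_credits(C))          # -> [0.5, 0.5, 0.0]
```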
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
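In code, that "principal eigenvector computation" is a single eigendecomposition call; a sketch assuming a symmetric trust matrix T (my example values, not from the post):

```python
import numpy as np

# Illustrative symmetric "matrix of trust": T[i, j] = 1 if i and j trust each other.
T = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

vals, vecs = np.linalg.eigh(T)            # eigh, since T is symmetric
principal = vecs[:, np.argmax(vals)]      # eigenvector of the largest eigenvalue
scores = np.abs(principal) / np.abs(principal).sum()  # fix sign ambiguity, normalize
print(scores)                             # each person's share of trust-credit
```

Up to normalization, this is the equilibrium the iterative version above converges to (at least for a connected, non-degenerate cooperation graph).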
EDIT: I am guessing that after judicious application of this algorithm one would end up with something like the other Scott A's Archipelago: loosely connected components, each with its own definition of morality. UPDATE: He chimes in.
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.
From the comments:

Someone who thinks that morality (or even an important part of morality) is "co-operate with the good people, punish the bad" is immediately revealing the shallowness of their thinking (and incidentally, giving me a 90%+ chance of guessing their politics).
It should be "co-operate with good actions, punish bad actions." And you can get that unambiguously with a graph, provided you also (separately) have a ranker on action value. But you can't get anywhere without such a ranker, as the post adequately demonstrates.
This comment seems to miss the point: by looking at who cooperates with whom, the algorithm declares the "ranker on action value" to be whatever the mutually cooperating people actually do. That is a clever way around needing an independent action-ranking machine, one that everyone would somehow have to agree isn't just assuming what is moral in its premises rather than discovering it as a conclusion.
The way I wrote this comment, I ranked your action. How different is it if I say "you are wrong" and downvote you, and people then look at graphs of who downvoted whom?