If you run this analysis over a population that includes competing religions, or just plain competing tribes or nations, I think you will get eigenmodes that sort people into their affinity groups, and eigenvalues that essentially count how many people are in each group. So we find that supporting Team6 is more "moral" simply because Team6 has more members than any other team, and we conclude, in effect, that might makes right.
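A toy sketch of this claim, under the idealized assumption that people cooperate only within their own team (so the cooperation matrix is block-diagonal with all-ones blocks; the team sizes here are made up):

```python
import numpy as np

# Hypothetical world: three teams of sizes 5, 3, and 2 whose members
# cooperate only within their own team. The cooperation matrix is then
# block-diagonal, with an all-ones block per team.
sizes = [5, 3, 2]
N = sum(sizes)
coop = np.zeros((N, N))
start = 0
for n in sizes:
    coop[start:start + n, start:start + n] = 1.0
    start += n

# Eigenvalues of an all-ones n-by-n block are n (once) and 0 (n-1 times),
# so the leading eigenvalues of the whole matrix are just the team sizes.
eigvals, eigvecs = np.linalg.eigh(coop)
print(sorted(eigvals, reverse=True)[:3])  # -> [5.0, 3.0, 2.0] (up to rounding)

# The principal eigenvector is supported entirely on the largest team:
principal = eigvecs[:, -1]  # eigh returns eigenvalues in ascending order
print(np.abs(principal) > 1e-8)  # True only for the first 5 people
```

In this idealized case the eigenmodes literally are the teams, and the ranking of eigenvalues is literally a head count; real cooperation matrices with cross-team links would only blur this picture, not change its character.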
Evolutionarily speaking, I think our propensity for morality is designed to make us team players, or at least to make enough of us team players that the group reaps the benefits of cooperation. So if this proposal just identifies teams and counts their members, that doesn't make it wrong, but it would be important to point out that it is merely finding the affinity groups, not answering deep questions about whether incest is wrong or whether we should push fat people in front of trolleys.
Scott suggests that ranking morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
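The iterative update rule Scott describes is just power iteration, and the equilibrium it converges to is the principal eigenvector he mentions. A minimal sketch, with a small made-up cooperation matrix (entry [i, j] counting how often person i cooperated with person j):

```python
import numpy as np

# Hypothetical symmetric cooperation matrix for four people.
C = np.array([
    [0, 3, 2, 0],
    [3, 0, 4, 0],
    [2, 4, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def morality_credits(coop, iters=100):
    """Start everyone with equal credits, then repeatedly let each person
    earn credits from each cooperation partner in proportion to that
    partner's current credits -- i.e., power iteration on the matrix."""
    credits = np.ones(coop.shape[0])
    for _ in range(iters):
        credits = coop @ credits
        credits /= credits.sum()  # renormalize so the credits stay finite
    return credits

print(morality_credits(C))
```

Person 1 ends up with the most credits here (not person 2, despite an equal raw cooperation count) because their partners are themselves highly credited, which is exactly the PageRank-style recursion the proposal relies on.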
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
EDIT: I am guessing that, after judicious application of this algorithm, one would end up with something like the other Scott A's Archipelago: loosely connected components, each with its own definition of morality. UPDATE: He chimes in.
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.