Suppose that we, a group of rationalists, are trying to come to an agreement on which course of action to take. All of us have the same beliefs about the world, as Aumann's Agreement Theorem requires, but everyone has different preferences. What decision-making procedure should we use? We want our decision-making procedure to satisfy a number of common-sense criteria:
- Non-dictatorship: No one person should be able to dictate what we should do next. The decision-making process must take multiple people's preferences into account.
- Determinism: Given the same choices, and the same preferences, we should always make the same decisions.
- Pareto efficiency: If every member of the group prefers action A to action B, the group as a whole should also prefer A to B.
- Independence of irrelevant alternatives: If we, as a group, prefer A to B, and a new option C is introduced, then we should still prefer A to B, regardless of what we think about C.
Arrow's Theorem says, roughly, the following. Suppose we have a list of possible courses of action, {A, B, C, D, ..., Q}. Everyone gets a pencil and a piece of paper, and writes down their preferences as an ordering. E.g., if you thought that Q was the best, D was the second best, and A was the worst, you would write down Q > D > (...) > A. We then collect all the slips of paper and put them in a big bin. However, there is no way to take all these slips of paper and produce a set of preferences for the group as a whole, e.g., {H > E > I > C > ... > D > A}, that satisfies all of the above criteria.
Therefore (so the story goes), much woe be unto us, for there is no way we can make decisions that satisfies this entirely reasonable list of requirements. Except, this isn't actually true. Suppose that, instead of writing down a list of which actions are preferred to which other actions, each of us writes down a score next to each action, on a scale from 0 to 10. (For simplicity, we allow any real number, not just integers, so, say, sqrt(2) or 7/3 are perfectly valid scores.) We then add up the numbers that each person puts down for each action. Say, if one person puts down 5 for action A, and another person puts down 7, then A has a total score of 12. We then decree: we will, as a group, prefer any action with a higher total score to any action with a lower total score. This procedure (sketched in code after the list below) does satisfy all of the desired criteria:
- Non-dictatorship: Suppose there are two possible actions, A and B, and ten people in the group. Each person gives a score of zero to A and a ten to B, so B has a total score of 100, as compared to A's score of zero. If any one person changes their score for A to X, and their score for B to Y, then the total score for A is now X, and the score for B is now 90 + Y. Since we know that 0 <= X <= 10 and 0 <= Y <= 10, we can derive that 90 + Y >= 90 > 10 >= X, so B is still preferable to A. Therefore, no one person can switch the group's decision from B to A, so no one person is a dictator.
- Determinism: All we do is add and rank the scores, and adding and ranking are both deterministic operations, so we know this procedure is deterministic.
- Pareto efficiency: Suppose that there are N people in the group, and two possible actions, A and B. Every person prefers A to B, so person X will assign some score to B, B_X, and some score to A, A_X, where A_X > B_X. Let C_X = A_X - B_X; we know that C_X > 0. The total score for B is (B_1 + B_2 + B_3 + ... + B_N), and the total score for A is (A_1 + A_2 + A_3 + ... + A_N) = (B_1 + B_2 + B_3 + ... + B_N) + (C_1 + C_2 + C_3 + ... + C_N) > (B_1 + B_2 + B_3 + ... + B_N), so the group as a whole also prefers A to B.
- Independence of irrelevant alternatives: If we prefer A to B as a group, and we all assign the same scores to A and B when C is introduced, then the total score of A must still be higher than that of B, even after C is brought in. Hence, our decision between A and B does not depend on C.
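For concreteness, here is a minimal sketch of the scoring procedure in Python. The function name, the ballot format, and the example ballots are my own illustration (the example reuses the 5 + 7 = 12 total from above); it is a sketch of the idea, not code from the original post.

```python
# Minimal sketch of the score-based procedure described above.
# Each ballot maps every action to a score in [0, 10]; the group
# prefers whichever action has the higher total score.

def group_ranking(ballots):
    """Return the actions ordered from most to least preferred by the group.

    `ballots` is a list of dicts, one per person, mapping each action
    to a score between 0 and 10 (real-valued scores are allowed).
    """
    totals = {}
    for ballot in ballots:
        for action, score in ballot.items():
            if not (0 <= score <= 10):
                raise ValueError(f"score {score} for {action} is out of range")
            totals[action] = totals.get(action, 0) + score
    # Higher total score means more preferred by the group.
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical example: two voters scoring three actions.
ballots = [
    {"A": 5.0, "B": 2.0, "C": 10.0},
    {"A": 7.0, "B": 9.5, "C": 0.0},
]
print(group_ranking(ballots))  # ['A', 'B', 'C'] -- totals 12, 11.5, 10
```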
What happened here? Arrow's Theorem isn't wrong, in the usual sense; the proof is perfectly valid mathematically. However, the theorem only covers a specific subset of decision algorithms: those where everyone writes down a simple preference ordering, of the form {A > B > C > D ... > Q}. It doesn't cover all collective decision procedures.
I think what's happening here, in general, is that (a) there is a great deal of demand for theorems which show counterintuitive results, as these tend to be the most interesting, and (b) there are a large number of theorems which give counterintuitive results about things in the world of mathematics or physics. For instance, many mathematicians were surprised when Cantor showed that the set of rational numbers has the same cardinality as the set of integers.
Therefore, every time someone finds a theorem which appears to show something counterintuitive about real life, there is a tendency to stretch its scope: to say, wow, this theorem shows that thing X is actually impossible, when it usually just says "thing X is impossible if you use method Y, under conditions A, B, and C". And it's unlikely you'll get called on it, because people are used to theorems which show things like 0.999... = 1, and we apparently treat this as evidence that a theorem which (supposedly) says "you can't design a sane collective decision-making procedure" or "you can't do electromagnetic levitation" is more likely to be correct.
More examples of this type of fallacy:
- Earnshaw's theorem states that no static configuration of point charges or permanent magnets can be held in stable equilibrium by electrostatic and magnetostatic forces alone. Hence, you cannot put a whole bunch of magnets in a plate, put the plate over a table with some more magnets in it, and expect it to hover stably in place, regardless of the configuration of the magnets. This theorem was taken to mean that electromagnetic levitation of any sort was impossible, which caused a great deal of grief, as scientists at the time still thought that atoms were governed by classical electromagnetism rather than quantum mechanics, and so the theorem would appear to imply that no collection of atoms is stable. However, we now know that atoms stay stable by virtue of quantum mechanics (including the Pauli exclusion principle), not classical E&M. And, it turns out, it is possible to levitate things with magnets: you just have to keep the thing spinning, instead of leaving it in one place. Or you can use diamagnetism, but that's another discussion.
- Heisenberg's Student Confusion Principle, which does not say "we can't find out what the position and momentum of a particle is", has already been covered earlier, by our own Eliezer Yudkowsky.
- Conway's Free Will Theorem says, roughly, that if the experimenters' choices of what to measure are not determined by the prior history of the universe, then neither are the particles' responses: it's impossible to predict the outcomes of certain quantum experiments ahead of time. This is a pretty interesting fact about quantum physics. However, it is only a fact about quantum physics; it is not a fact about this horribly entangled thing that we call "free will". There are also numerous classical, fully deterministic computations whose output we can't predict ahead of time; e.g., write a program that searches for a twin prime pair larger than some enormous bound: will it halt after some finite number of steps? If we could predict the behaviour of all such computations ahead of time, we could solve the halting problem, which is impossible, and so there must be some that we can't predict. (A minimal sketch of such a computation appears after this list.)
- There are numerous laws of economics which appear to fall into this category; however, so far as I can tell, few of them have been mathematically formalized. I am not entirely sure whether this is a good thing or a bad thing: formalizing them would make them more precise, and clarify the conditions under which they apply, but it might also cause people to believe in a law "because it's been mathematically proven!", even well outside the domain it was originally intended for.
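As a footnote to the twin-prime example above, here is a minimal sketch (my own construction, not from the post) of such a deterministic computation: the program halts if and only if there is a twin prime pair above the given bound, so a general method for predicting whether it halts for every bound would settle the twin prime conjecture.

```python
# A fully deterministic computation whose halting behaviour we can't
# predict in general: it halts iff there is a twin prime pair (p, p + 2)
# with p > bound.

def is_prime(n):
    """Trial division; slow but obviously correct."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def find_twin_prime_above(bound):
    """Search (possibly forever) for the first twin prime pair above `bound`."""
    p = bound + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

print(find_twin_prime_above(100))  # (101, 103) -- but does it halt for *every* bound?
```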
The procedure you're proposing collapses into approval voting immediately.
Nobody has any reason to vote anything other than 10 (possibly minus epsilon) or 0 (possibly plus epsilon). If you like A slightly more than B, and estimate A will get 81 points while B will get 90 points, the optimal behaviour is to vote 10 for A (and everything better than A) and 0 for B (and everything worse than B); any other vote is at best no better for you. There is no scenario under which voting something other than 0 and 10 beats the extreme votes.
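For concreteness, here is a quick sketch of the 81-vs-90 scenario just described (the winner-selection code and ballot format are my own illustration): the honest ballot of 10 and 9 lets B win, while the extreme ballot of 10 and 0 tips the result to A, which this voter prefers.

```python
# The 81-vs-90 scenario above: you prefer A to B. The other voters'
# totals are fixed; only your ballot changes.

others = {"A": 81, "B": 90}

def winner(my_ballot):
    """Return the highest-scoring candidate once your ballot is added."""
    totals = {c: others[c] + my_ballot[c] for c in others}
    return max(totals, key=totals.get)

print(winner({"A": 10, "B": 9}))  # 'B' wins (91 vs 99) under the honest ballot
print(winner({"A": 10, "B": 0}))  # 'A' wins (91 vs 90) under the extreme ballot
```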
Approval voting is prone to tactical voting (this should be added to the requirements; Arrow's Theorem talks about preferences, as it assumes each vote is uniquely determined by them). You never need to order candidates in a way that reverses your preferences, but your approval threshold depends on how you think others will vote. If you think A > B > C, and think C is leading, you vote 10, 10, 0. If you think B is leading, you vote 10, 0, 0. It also fails determinism: the same preferences can yield different group decisions, depending on what the voters believe about each other.
If we give people a predefined allowance of points (e.g., the n alternatives must receive scores 1 to n, or each voter gets n points to distribute arbitrarily), the procedure fails independence of irrelevant alternatives immediately, and is prone to preference reversal under tactical voting, which is even worse (if A > B > C, and C is leading but B still has some chance, you vote 0, 10, 0, scoring A below B). A worked example of the independence failure is sketched below.
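Here is one way that independence failure shows up, with a hypothetical five-voter electorate of my own construction (the "scores 1 to n" rule is the one mentioned above): adding C flips the group's ranking of A and B even though no individual's A-versus-B preference changes.

```python
# "Scores 1 to n" rule: each voter's top choice gets n points, the next
# gets n - 1, and so on. Electorate: 3 voters rank A > B > C, 2 voters
# rank B > C > A.

def totals_under_fixed_scores(profiles, candidates):
    """Sum the 1..n scores implied by each ranking, weighted by voter count."""
    totals = {c: 0 for c in candidates}
    for ranking, count in profiles:
        for points, c in zip(range(len(candidates), 0, -1), ranking):
            totals[c] += points * count
    return totals

# With only A and B on the ballot, A beats B (8 to 7).
print(totals_under_fixed_scores([(["A", "B"], 3), (["B", "A"], 2)], ["A", "B"]))

# Add C: nobody's A-vs-B preference changes, yet B now beats A (12 to 11).
print(totals_under_fixed_scores([(["A", "B", "C"], 3), (["B", "C", "A"], 2)],
                                ["A", "B", "C"]))
```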