John Rawls suggests the thought experiment of an "original position" where people decide the political system of a society under a "veil of ignorance" by which they lose the knowledge of certain information about themselves. Rawls's veil of ignorance doesn't justify the kind of society he supports.
It seems to fail at every step individually:
- At best, the agreement of people in the OP provides a necessary but probably insufficient condition for justice, unless Rawls can refute every rival proposed condition of justice (rights, desert, and so on).
- And really, the conditions of the OP are actively contrary to good decision-making. For example, in the OP you don't know your own particular conception of the good, and you're assumed to be essentially self-interested.
- There's no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.
- There's no reason to think, specifically, that people would have the literally infinite risk aversion required to support the maximin principle.
- Even granting all of the above, the best social setup could easily be optimized for the long term (taking future people into account) in a way that makes it very different from (e.g. harsher on the poor living today than) the kind of egalitarian society I understand Rawls to support.
More concretely:
- (A) I imagine that if Aristotle were placed under a thin veil of ignorance, he would just say, "Well, if I turn out to be born a slave, then I will deserve it." It's unfair, and not very convincing, to claim that people would agree with a long list of your specific ideas if not for their personal circumstances.
- (B) If you won the lottery and I demanded that you sell your ticket to me for $100 on the grounds that you would, hypothetically, have agreed to this yesterday (before you knew it was a winner), you wouldn't have to comply; the hypothetical situation doesn't actually bind you in reality.
Another frame is that his argument involves a number of provisions that seem designed to dodge common counterarguments (utility monsters, utilitarianism, etc.) but are otherwise arbitrary.
Well, they're inimical to good self-interested personal decision-making, but why would that matter? Do you think justice and self-interested rationality are the same? If they're different, what's the problem? Rawls's theory wouldn't necessarily predict the behaviour of a self-interested agent, but it's not supposed to. It's a normative theory: it says how people should behave, not how they invariably do. If people have their own theories of ethics, well, those are theories too, and not necessarily correct ones. Mere disagreement between the front-of-the-veil and behind-the-veil versions of a person doesn't tell you much.
They might have a well-constructed case against him; he might have a well-constructed case against them.