John Rawls suggests the thought experiment of an "original position" (OP), in which people decide the political system of a society from behind a "veil of ignorance" that deprives them of certain information about themselves. Rawls's veil of ignorance doesn't justify the kind of society he supports.

The argument seems to fail at every step individually:

  1. At best, the support of people in the OP provides necessary but probably insufficient conditions for justice, unless Rawls refutes all the other proposed conditions (rights, desert, and so on).
  2. And really, the conditions of the OP are actively contrary to good decision-making. For example, in the OP you don't know your particular conception of the good (??) and you're assumed to be essentially self-interested...
  3. There's no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.
  4. There's no reason to think, specifically, that people would have the literally infinite risk aversion required to support the maximin principle (see the toy sketch just after this list).
  5. Even given everything, the best social setup could easily be optimized for the long-term (in consideration of future people) in a way that makes it very different (e.g. harsher for the poor living today) from the kind of egalitarian society I understand Rawls to support.
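
To make point 4 concrete, here is a minimal sketch (in Python, with invented societies and payoff numbers) of how maximin and ordinary expected-utility reasoning come apart behind the veil:

    # Toy comparison of decision rules behind the veil of ignorance.
    # The societies and payoff numbers are invented for illustration.

    # Each society assigns a utility to each social position.
    societies = {
        "egalitarian": [5, 5, 5, 5],
        "unequal_but_richer": [4, 9, 9, 9],
    }

    def maximin(payoffs):
        # Rawls-style rule: judge a society by its worst-off position.
        return min(payoffs)

    def expected_utility(payoffs):
        # Equal chance of occupying each position, so the score is the mean.
        return sum(payoffs) / len(payoffs)

    for rule in (maximin, expected_utility):
        best = max(societies, key=lambda name: rule(societies[name]))
        print(rule.__name__, "->", best)

    # maximin -> egalitarian                 (min 5 beats min 4)
    # expected_utility -> unequal_but_richer (mean 7.75 beats mean 5)

Maximin keeps preferring "egalitarian" no matter how large you make the 9s, which is the sense in which it encodes unbounded risk aversion.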

More concretely:

  • (A) I imagine that if Aristotle were under a thin veil of ignorance, he would just say, "Well, if I turn out to have been born a slave, then I will deserve it." It's unfair and not very convincing to claim that people would just agree with a long list of your specific ideas if not for their personal circumstances.
  • (B) If you won the lottery and I demanded that you sell your ticket to me for $100 on the grounds that you would, hypothetically, have agreed to do this yesterday (before you knew it was a winner), you don't have to comply; the hypothetical situation doesn't actually bear on reality in this way.

Another frame is that his argument involves a number of provisions that seem designed to dodge common counterarguments (utility monsters, utilitarianism, etc.) but are otherwise arbitrary.


9 comments

I think it just forces people to choose a policy which is best for the whole of society rather than just a subset of it (as people tend to choose policies which benefit whatever subset they're part of).

If you're an X kind of person, you might want human rights for all X. By applying the veil of ignorance, you'd have to argue "Human rights should extend to all groups, even those I now consider to be bad people" (i.e. for all X), which actually is how human rights currently work (and isn't that what makes them good?).

It's simply neutrality and equality under the law. The act of making a policy which is objective rather than subjective. It's essentially the opposite of assuming that the majority is always correct, letting them dominate and bully the minorities, and calling this process "fair" or "democracy".

It's easy for the majority to say "We're correct and whoever disagrees is a terrible person", or for a minority to say "We're being treated unfairly because the majority is evil". By not knowing which group you will belong to, you're forced to come up with a policy which considers a scope large enough to be a superset of both groups, for instance "We will decide what's correct through the scientific model, and let everyone have a voice".

I think it works well for what it does (creating a fair, universal set of rules). It's not perfect, but I don't think a more perfect method is possible in reality. Maybe the idea generalizes poorly, or maybe most people are incapable of applying the method? I'm not sure; I can't understand your arguments very well, so I'm just communicating my own intuition.

But (A) is possibly true, and (B) would be true only until the information is updated. Would I buy lottery tickets for $20 and sell them at $100 before knowing if they were winners? Of course; that's the superior strategy every time. Would I sell a winning lottery ticket for less than the prize? I would not; that's a losing strategy. I don't think this conflicts with the above intuition about fairness; it's a separate and somewhat unintuitive math problem in my eyes.
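
The asymmetry is just expected value before versus after the information arrives. A minimal sketch in Python; the ticket price, offer, jackpot, and odds are all hypothetical numbers:

    # Hypothetical numbers: $20 ticket, $100 offer, $1M jackpot, 1-in-100,000 odds.
    price, offer, jackpot, p_win = 20, 100, 1_000_000, 1e-5

    ev_ticket = p_win * jackpot      # 10.0: what the ticket is worth before the draw
    print(offer > ev_ticket)         # True: ex ante, taking $100 beats holding
    print(offer > jackpot)           # False: ex post, $100 for a known winner is a loss

Once the draw is known, the comparison changes from offer-versus-expected-value to offer-versus-jackpot; that is all the "update" amounts to.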

TAG

And really the conditions of the OP are actively contrary to good decision-making, e.g. that you don’t know your particular conception of the good (??) or that you’re essentially self-interested...

Well, they're inimical to good personal self-interested decision-making, but why would that matter? Do you think justice and self-interested rationality are the same? If they are different, what's the problem? Rawls's theory would not necessarily predict the behaviour of a self-interested agent, but it's not supposed to. It's a normative theory: it says how people should behave, not how they invariably do. If they have their own theories of ethics, well, those are theories and not necessarily correct. Mere disagreement between the in-front-of-the-veil and behind-the-veil versions of a person doesn't tell you much.

There’s no reason to think, generally, that people disagree with John Rawls only because of their social position or psychological quirks.

They might have a well-constructed case against him, or he might have a well-constructed case against them.

Dagon

His argument is also founded on dualism: the idea that there can exist a preference, or a consistent preference-haver, outside of specific embodied individuals. There is no agent who can be ignorant in the way he proposes.

Beliefs and preferences are contingent on specific existence.  If you're a different person, you have different beliefs and preferences.

You might be interested in John Harsanyi on this topic.
He argues that the conclusion reached in the original position is (average) utilitarianism.
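
His point can be stated in one line (a standard presentation, not Harsanyi's exact notation): if behind the veil you assign an equal probability 1/n to occupying each of the n social positions, then

    \mathbb{E}[u] \;=\; \sum_{i=1}^{n} \frac{1}{n}\, u_i \;=\; \frac{1}{n} \sum_{i=1}^{n} u_i,

so maximizing expected utility behind the veil is exactly maximizing the average utility across positions.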

I agree that behind the veil one shouldn't know the time (and thus can't care differently about current vs future humans). This actually causes further problems for Rawls's conception when you project back in time: what if the worst life that will ever be lived has already been lived? Then the maximin principle gives no guidance at all, and under uncertainty it recommends putting all effort into preventing a new minimum from being set.

I downvoted this post because the whole setup is straw-manning Rawls's work. To claim that a highly recognized philosophical treatment of justice, one that has inspired countless discussions and professional philosophers, doesn’t “make any sense” is an extraordinary claim that should ideally be backed by a detailed argument and evidence. To me, however, the post seems hand-wavy, more like armchair philosophizing than detailed engagement. Don’t get me wrong, feel free to do that, but please make clear that this is what you are doing.

Regarding your claim that the veil of ignorance doesn’t map to decision-making in reality: that’s obvious, but it’s also not the point of this thought experiment. It’s about how to approach the ideal of justice, not how to ultimately implement it in our non-ideal world. One can debate the merits of talking and thinking about ideals, but calling it “senseless” without some deeper engagement seems pretty harsh.

Many (perhaps most) famous "highly recognized" philosophical arguments are nonsensical (zombies, for example). If one doesn't make sense to you, it is far more likely that it doesn't make sense at all than it is that you're missing something.

Since a lot of arguments on internet forums are nonsensical, the fact that your comment doesn’t make sense to me means that it is far more likely that it doesn’t make sense at all than that I am missing something.

That’s pretty ironic.