CannibalSmith comments on Individual vs. Group Epistemic Rationality - Less Wrong
I get that you're trying to establish some shared premise that we can work from, but I'm not totally sure what you mean by your assertion even with the additional explanation. So let me just try to make an argument for my position, and you can tell me whether any part doesn't make sense to you.
Consider a group of 100 ideally rational agents, who for some reason cannot establish a government that is capable of collecting taxes or enforcing contracts at a low cost. They all think that some idea A has a probability of .99 of being true, but it would be socially optimal for one individual to continue to scrutinize it for flaws. Suppose that's because if A is flawed, one individual would be able to detect the flaw eventually at an expected cost of $1, and knowing that the flaw exists would be worth $10 to each of the 100 agents. Unfortunately no individual agent has an incentive to do this on their own, because it would decrease their individual expected utility, and they can't solve the public goods problem due to large transaction costs.
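The arithmetic behind the incentive gap can be made explicit. This is a minimal sketch using the numbers from the comment, treating the $1 as a flat expected cost of scrutiny; the variable names are mine, not the comment's:

```python
# Expected-value arithmetic for the public-goods problem above.
# Assumed numbers: 100 agents, P(flaw) = 0.01, $1 expected cost of
# scrutiny, $10 value to each agent of learning that a flaw exists.
n_agents = 100
p_flaw = 0.01
cost = 1.0
value_per_agent = 10.0

# A lone scrutinizer pays the full cost but captures only their own benefit.
individual_ev = p_flaw * value_per_agent - cost        # 0.01*10 - 1 = -0.90
# The group captures everyone's benefit from the same single effort.
group_ev = p_flaw * value_per_agent * n_agents - cost  # 0.01*10*100 - 1 = +9.00

print(individual_ev)  # -0.9: no individual wants the job
print(group_ev)       # 9.0: but the group wants someone to take it
```

So scrutiny is worth $9 in expectation to the group while costing any single volunteer $0.90 in expectation, which is the gap the coordination mechanisms below try to close.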
On the other hand, if there were one agent who irrationally thought that A has only a probability of .8 of being true, then that agent would be willing to take on this task.
Alternatively, they each roll an appropriately sided die and get on with the task if the die comes up one.
D20, naturally.
Better than random: They are each in a rotation that assigns such tasks to one agent as they come up.
That would require coordination.
The random assignment also requires coordination. The only reason an agent in the group would accept the chance of having to do the work is that the other agents accept the same chance for themselves.
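The die-roll scheme can be sketched in a few lines. The comments above don't fix the number of sides, so this assumes a hypothetical 100-sided die for 100 agents, which makes the expected number of volunteers exactly one:

```python
import random

def volunteers(n_agents, sides, rng):
    """Each agent rolls an n-sided die; those who roll a 1 take the task."""
    return sum(1 for _ in range(n_agents) if rng.randint(1, sides) == 1)

rng = random.Random(0)
trials = 10_000
counts = [volunteers(n_agents=100, sides=100, rng=rng) for _ in range(trials)]

# On average about one agent volunteers per round, but since rolls are
# independent, with probability (99/100)**100, roughly 0.37, nobody
# volunteers at all; the scheme still needs everyone to adopt the rule.
print(sum(counts) / trials)                  # mean volunteers, close to 1.0
print(sum(c == 0 for c in counts) / trials)  # fraction of rounds with none
```

The residual 37% chance that no one rolls a 1 illustrates the point: the randomization spreads the burden fairly, but it only works as a mechanism if all the agents have already coordinated on following it.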
But why are we worrying so much about this? We actually can coordinate.