Wei_Dai comments on Individual vs. Group Epistemic Rationality - Less Wrong

Post author: Wei_Dai, 02 March 2010 09:46PM (22 points)


Comments (54)


Comment author: Wei_Dai 03 March 2010 04:20:31PM 2 points

I get that you're trying to establish some shared premise that we can work from, but I'm not totally sure what you mean by your assertion even with the additional explanation. So let me just try to make an argument for my position, and you can tell me whether any part doesn't make sense to you.

Consider a group of 100 ideally rational agents, who for some reason cannot establish a government that is capable of collecting taxes or enforcing contracts at a low cost. They all think that some idea A has probability of .99 of being true, but it would be socially optimal for one individual to continue to scrutinize it for flaws. Suppose that's because if it is flawed, then one individual would be able to detect it eventually at an expected cost of $1, and knowing that the flaw exists would be worth $10 each for everyone. Unfortunately no individual agent has an incentive to do this on its own, because it would decrease their individual expected utility, and they can't solve the public goods problem due to large transaction costs.

On the other hand, if there were one agent who irrationally thought that A has only a probability of .8 of being true, then it would be willing to take on this task.
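The incentive gap in this example can be checked with a quick expected-value calculation. This is a sketch in Python; it assumes one reading of the numbers above, namely that the $1 search cost is paid up front, a flaw exists with probability .01, and a found flaw is worth $10 to each of the 100 agents:

```python
# Expected-value check for the 100-agent scrutiny example.
# Assumed reading of the numbers: scrutinizing costs $1 in expectation,
# idea A is flawed with probability 0.01, and a found flaw is worth
# $10 to each of the 100 agents.

N_AGENTS = 100
P_FLAW = 0.01           # everyone assigns A probability .99 of being true
SEARCH_COST = 1.0       # expected cost of scrutinizing for the flaw
VALUE_PER_AGENT = 10.0  # value to each agent of knowing the flaw exists

def expected_value(p_flaw):
    """EV to a single agent of doing the scrutiny itself."""
    return p_flaw * VALUE_PER_AGENT - SEARCH_COST

# Rational agent: personal EV is negative, so it declines the task.
print(expected_value(P_FLAW))  # -0.9

# Group EV: the same single search benefits all 100 agents.
print(P_FLAW * VALUE_PER_AGENT * N_AGENTS - SEARCH_COST)  # 9.0

# Irrational agent who thinks A is only .8 likely true (p_flaw = .2):
print(expected_value(0.2))  # 1.0
```

So the search is socially worthwhile but individually a loss, unless some agent's probability estimate is miscalibrated enough to flip the sign.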

Comment author: CannibalSmith 04 March 2010 10:48:38AM 3 points

Alternatively, they each roll an appropriately sided die and get on with the task if the die comes up one.
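This die-roll scheme is a mixed strategy, and its behavior is easy to sketch. The 100-sided die below is an assumption, chosen so that one volunteer is expected on average; the actual optimal die size would depend on the payoffs:

```python
# Sketch of the die-roll scheme: each of the 100 agents independently
# rolls a (hypothetical) 100-sided die and takes on the scrutiny task
# on a 1. No communication is needed once the die size is fixed.

N_AGENTS = 100
P_VOLUNTEER = 1 / N_AGENTS  # assumed die size: one face in 100

# Probability that at least one agent ends up doing the task:
p_someone = 1 - (1 - P_VOLUNTEER) ** N_AGENTS
print(round(p_someone, 3))  # 0.634

# Expected number of volunteers (duplicated effort is possible):
print(N_AGENTS * P_VOLUNTEER)  # 1.0
```

Note the trade-off: with independent rolls there is a roughly one-in-three chance that nobody does the task at all, and some chance of wasted duplicate effort.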

Comment author: wedrifid 04 March 2010 11:58:13PM 3 points

D20, naturally:

  • 20 - do the task
  • 1 - do nothing
  • For all others compare 'expected task value' to your 'status saving throw'.

Comment author: JGWeissman 04 March 2010 07:31:07PM 1 point

Better than random: They are each in a rotation that assigns such tasks to one agent as they come up.
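A rotation like this can be sketched in a few lines. The numbering scheme here is an assumption; any ordering the agents agree on works:

```python
# Sketch of the rotation scheme: tasks are numbered as they come up,
# and task k falls to agent k mod 100. Unlike the die-roll scheme,
# exactly one agent is assigned to every task.

N_AGENTS = 100

def assignee(task_index):
    """Return the agent responsible for the given task."""
    return task_index % N_AGENTS

print(assignee(0), assignee(1), assignee(100))  # 0 1 0
```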

Comment author: CannibalSmith 05 March 2010 10:49:48AM 0 points

That would require coordination.

Comment author: JGWeissman 05 March 2010 04:43:30PM 1 point

The random assignment also requires coordination. The only reason an agent in the group would accept the chance that it has to do the work is that the other agents accept the chance for themselves.

But why are we worrying so much about this? We actually can coordinate.

Comment author: Eliezer_Yudkowsky 04 March 2010 07:12:56PM 4 points

wedrifid's remarks above seem obvious to me. Furthermore, your reply seems to consist of: "for some reason a group cannot solve a coordination problem rationally, but if I suppose that one agent is allowed to take, for irrational reasons only, the same action that a rational group would perform, then the irrational group wins".

Comment author: wedrifid 04 March 2010 12:17:17AM 0 points

Consider a group of 100 ideally rational agents, who for some reason cannot establish a government that is capable of collecting taxes or enforcing contracts at a low cost. They all think that some idea A has probability of .99 of being true, but it would be socially optimal for one individual to continue to scrutinize it for flaws. Suppose that's because if it is flawed, then one individual would be able to detect it eventually at an expected cost of $1, and knowing that the flaw exists would be worth $10 each for everyone. Unfortunately no individual agent has an incentive to do this on its own, because it would decrease their individual expected utility, and they can't solve the public goods problem due to large transaction costs.

Ok, so one of the agents being epistemically flawed may solve a group coordination problem. I like the counterfactual; could you flesh it out slightly to specify what payoff each individual gets for exploring ideas and contributing them to the collective?