In Reply to: Rationalization, Epistemic Handwashing, Selective Processes
Eliezer Yudkowsky wrote about scientists defending pet hypotheses, and about prosecutors and defense attorneys, as examples of clever rationalization. His primary focus was advice to the well-intentioned individual rationalist, which is excellent as far as it goes. But Anna Salamon and Steve Rayhawk ask how a social system should be structured for group rationality.
The adversarial system is widely used in criminal justice. In the legal world, roles such as Prosecution, Defense, and Judge are all guaranteed to be filled, with roughly the same amount of human effort applied to each side. Suppose instead that individuals chose their own roles; one role might well turn out to be far more popular than the others. With unequal effort applied to the different sides, selecting for the position with the strongest arguments would no longer do much to select for the position that is true.
One role might be more popular because of an information cascade: individuals read the existing arguments, choose the role that seems closest to the truth, and then create further arguments for that position. Alternatively, a role may be popular due to status-based affiliation, or a desire to be on the "winning" side.
I'm well aware that there are vastly more than two sides to most questions. Imagine a list of rationalist roles, something like IDEO's "Ten Faces of Innovation".
Example rationalist roles, leaving the obvious ones for last:
- The Mediator, who strives for common understanding and combining evidence.
- The Wise, who may not take a stand, but only criticize internal consistency of arguments.
- The Perpendicularist, who strives to break up polarization by "pulling the rope sideways".
- The Advocate, who champions a controversial claim or proposed action.
- The Detractor, who points out flaws in the controversial claim or proposed action.
Because these group phenomena (cascades, affiliation) arise naturally, achieving group rationality requires social structures that actively counteract them. Assigned roles might help.
I'm completely uncertain whether this would work better, worse, or the same as more common methods of group decision-making. It's certainly an interesting idea.
I would add one caution, though. I find that businesses, schools, and decision-making workshops are far too willing to accept any cute-, clever-, or radical-sounding idea without any evidence that it works. It's easier to use such methods as a boast: "Don't say our decisions aren't rational. We care so much about being rational that we make all our decisions with special rationalist hats. If you're so rational, what do you do?" Somehow, "make decisions as well as possible based on available information" becomes a less acceptable answer than "have color-coded teams using the Ten-Step Rationalo-X Colors Method," or whatever.
For me to use this, I would need evidence that it worked. The best evidence would come from assigning people to groups at random, having one set of groups talk problems out informally while the other uses the hats method, and giving both problems that are difficult but have a single correct answer. If the hat groups reach the correct answer more often than the non-hat groups, then we use hats for everything.
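To be concrete about what "more often" would mean, here is a minimal sketch of how such a trial might be scored. The group counts, the specific numbers, and the choice of Fisher's exact test are my own illustration, not part of any proposal:

```python
from scipy.stats import fisher_exact

# Hypothetical results (illustrative numbers only): 40 groups per condition
# work the same set of problems, each with one verifiable correct answer.
hats_correct, hats_total = 29, 40          # groups using assigned rationalist roles
informal_correct, informal_total = 21, 40  # groups deliberating informally

# 2x2 contingency table: rows = condition, columns = [correct, incorrect]
table = [
    [hats_correct, hats_total - hats_correct],
    [informal_correct, informal_total - informal_correct],
]

# Fisher's exact test: is the gap in correct-answer rates larger than
# random assignment alone would explain?
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"hats: {hats_correct / hats_total:.0%} correct, "
      f"informal: {informal_correct / informal_total:.0%} correct, "
      f"p = {p_value:.3f}")
```

Fisher's exact test is just one convenient way to compare two proportions; any standard comparison of correct-answer rates between the conditions would serve the same purpose.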
I don't know why people don't do this more often for the decision-making systems that are commonly proposed, but I'll bet Robin Hanson would have some choice things to say about it.