Previously in series: Purchase Fuzzies and Utilons Separately
Followup to: Conjuring an Evolution To Serve You
GreyThumb.blog offered an interesting comparison between poor animal-breeding practices and the fall of Enron, which I previously posted on in some detail. The essential theme was that individual selection on chickens, breeding in each generation from the hen that laid the most eggs, produced highly competitive chickens: the most dominant hens, who pecked their way to the top of the pecking order at the expense of other chickens. The chickens subjected to this individual selection for egg-laying prowess needed their beaks clipped, or to be housed in individual cages, or they would peck each other to death.
Which is to say: individual selection is selecting on the wrong criterion, because what the farmer actually wants is high egg production from groups of chickens.
While group selection is nearly impossible in ordinary biology, it is easy to impose in the laboratory: breeding the best groups, rather than the best individuals, increased average hen survival from 160 to 348 days, and egg mass per bird from 5.3 to 13.3 kg.
The analogy being to the way that Enron evaluated its employees every year, fired the bottom 10%, and gave the top individual performers huge raises and bonuses. Jeff Skilling fancied himself as exploiting the wondrous power of evolution, it seems.
If you look over my accumulated essays, you will observe that the art contained therein is almost entirely individual in nature... for around the same reason that it all focuses on confronting impossibly tricky questions: That's what I was doing when I thought up all this stuff, and for the most part I worked in solitude. But this is not inherent in the Art, not reflective of what a true martial art of rationality would be like if many people had contributed to its development along many facets.
Case in point: At the recent LW / OB meetup, we played Paranoid Debating, a game that tests group rationality. As is only appropriate, this game was not the invention of any single person, but was collectively thought up in a series of suggestions by Nick Bostrom, Black Belt Bayesian, Tom McCabe, and steven0461.
In the game's final form, Robin Gane-McCalla asked us questions like "How many Rhode Islands would fit into Alaska?" and a group of (in this case) four rationalists tried to pool their knowledge and figure out the answer... except that before the round started, we each drew facedown from a set of four cards containing one spade and one red card. Whoever drew the red card got the job of trying to mislead the group. Whoever drew the spade showed the card and became the spokesperson, who had to select the final answer. It was interesting to play this game and realize how little I'd practiced basic skills like gauging the appropriateness of another's confidence or figuring out who was lying.
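(Not in the original writeup, just a toy illustration of the setup: the role-dealing step as code. "blank" is my own placeholder label for the cards that confer no special role.)

```python
import random

def assign_roles(players):
    """Deal Paranoid Debating roles: one hidden deceiver (the red card),
    one public spokesperson (the spade), and honest advisors otherwise.
    The spade is shown; the red card stays secret."""
    cards = ["red", "spade"] + ["blank"] * (len(players) - 2)
    random.shuffle(cards)
    return dict(zip(players, cards))
```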
A bit further along, at the suggestion of Steve Rayhawk (slightly simplified by me), we named 60% confidence intervals for the quantity, giving lower and upper bounds; Steve fit a Cauchy distribution to each interval ("because it has a fatter tail than a Gaussian"), and we were scored according to the log of our probability density on the true answer, except for the red-card drawer, who got the negative of this number.
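For concreteness, here is a minimal sketch of that scoring rule, assuming the Cauchy is fit symmetrically, with its location at the interval's midpoint and its scale chosen so the interval covers 60% of the probability mass (the post doesn't specify Steve's exact fitting procedure):

```python
import math

def cauchy_fit(lo, hi, coverage=0.6):
    """Fit a Cauchy distribution to a central confidence interval.
    Assumes the interval is symmetric about the location parameter."""
    x0 = (lo + hi) / 2.0                 # location = interval midpoint
    half_width = (hi - lo) / 2.0
    # Central mass of a Cauchy: (2/pi) * atan(half_width / gamma) = coverage
    gamma = half_width / math.tan(coverage * math.pi / 2.0)
    return x0, gamma

def log_density_score(lo, hi, truth, red_card=False):
    """Log of the fitted Cauchy density at the true answer.
    The red-card drawer is scored on the negative of this number."""
    x0, gamma = cauchy_fit(lo, hi)
    density = 1.0 / (math.pi * gamma * (1.0 + ((truth - x0) / gamma) ** 2))
    score = math.log(density)
    return -score if red_card else score
```

A wide interval lowers the density everywhere (a small penalty no matter what), while a narrow interval scores well only if the truth lands inside it; the fat Cauchy tails keep a badly missed answer from being punished quite as brutally as a Gaussian would.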
The Paranoid Debating game worked surprisingly well; at least I had fun, despite somehow managing to draw the red card three out of four times. I can totally visualize doing this at some corporate training event or even at parties. The red player is technically acting as an individual and practicing deception, but perhaps practicing deception (in this controlled, ethically approved setting) helps you become a little less gullible in turn. As Zelazny observes, there is a difference between the arts of discovering lies and finding truth.
In a real institution... you would probably want to optimize less for fun, and more for work-relevance: something more like Black Belt Bayesian's original suggestion of The Aumann Game, with no red cards. But where both B3 and Tom McCabe originally thought in terms of scoring individuals, I would suggest forming people into groups and scoring the groups. An institution's performance is the sum of its groups more directly than it is the sum of its individuals, though of course there are interactions between groups as well. Find people who show a statistical tendency to belong to high-performing groups; these are the ones who contribute much to the group, who are persuasive with good arguments.
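The post doesn't spell out a mechanism, but one simple way to operationalize "a statistical tendency to belong to high-performing groups" is to re-shuffle the groups every round and average each person's group scores; a hypothetical sketch, where score_group stands in for whatever group exercise you run:

```python
import random
from collections import defaultdict

def rate_individuals(people, rounds, group_size, score_group):
    """Estimate each person's tendency to belong to high-scoring groups
    by averaging the scores of the randomly drawn groups they join.
    Random re-assignment each round is what lets a simple average
    approximate an individual's contribution."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(rounds):
        pool = list(people)
        random.shuffle(pool)
        # Form disjoint groups; leftover people sit out this round.
        for i in range(0, len(pool) - group_size + 1, group_size):
            group = pool[i:i + group_size]
            score = score_group(group)
            for person in group:
                totals[person] += score
                counts[person] += 1
    return {p: totals[p] / counts[p] for p in counts}
```

Given enough rounds of random re-assignment, a persistently above-average score for one person is evidence about that person rather than about their luck in teammates.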
I wonder if there are any hedge funds that practice "trio trading", by analogy with pair programming?
Hal Finney called Aumann's Agreement Theorem "the most interesting, surprising, and challenging result in the field of human bias: that mutually respectful, honest, and rational debaters cannot disagree on any factual matter once they know each other's opinions". It is not just my own essays that are skewed toward individual application; the whole trope of Traditional Rationality seems to me skewed the same way. It's the individual heretic who is the hero, and Authority the untrustworthy villain, whose main job is to not resist the heretic too much, so as to be properly defeated. Science is cast as a competition between theories in an arena with rules designed to let the strongest contender win. Of course, it may be that I am selective in my memory, and that if I went back and read my childhood books again, I would notice more on group tactics that originally slipped my attention... but really, Aumann's Agreement Theorem doesn't get enough attention.
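For reference (my gloss, not Hal Finney's wording), the standard formal statement runs roughly:

```latex
% Aumann (1976), "Agreeing to Disagree": standard statement
\textbf{Theorem.} Let two agents share a common prior $P$ and have
private information partitions $\Pi_1, \Pi_2$. If, at state $\omega$,
the posteriors $q_i = P(E \mid \Pi_i(\omega))$ for an event $E$ are
common knowledge between them, then $q_1 = q_2$.
```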
Of course most Bayesian math is not widely known, and the Agreement Theorem is no exception. But even the intuitively obvious counterpart of the Agreement Theorem, the treatment of others' beliefs as evidence, gets short shrift in Traditional Rationality. This may have something to do with Science developing in the midst of insanity and in defiance of Authority; but that is a historical accident, not a design principle. And if the high performers of a rationality dojo need to practice the same sort of lonely dissent... well, then it must not be a very effective rationality dojo.
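To make "others' beliefs as evidence" concrete (a toy model of my own, not anything from the post): treat someone's assertion of a hypothesis as an observation, and update on it like any other likelihood ratio.

```python
def update_on_testimony(prior, p_assert_if_true, p_assert_if_false):
    """Posterior probability of H after hearing someone assert H.
    The assertion is treated as ordinary evidence: its force depends
    entirely on how much more likely the speaker is to assert H when
    it is true than when it is false (Bayes' rule)."""
    joint_true = p_assert_if_true * prior
    joint_false = p_assert_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# Hypothetical numbers: a 50/50 prior plus a speaker who asserts H
# four times as often when it is true moves you to 0.8.
print(update_on_testimony(0.5, 0.8, 0.2))  # -> 0.8
```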
Part of the sequence The Craft and the Community
Next post: "Incremental Progress and the Valley"
Previous post: "Purchase Fuzzies and Utilons Separately"