The book looks pretty interesting and that's a nice story, but I'm not sure that this conclusion is much of a revelation. I'd be a bit more interested in why talking through an issue works when it does.
For instance, when I see
Part of the reason for the change was a historic conference held in Bermuda in 1996, and attended by many of the world’s leading biologists, including several of the leaders of the government-sponsored Human Genome Project.
and
The biologists in the room had enough clout that they convinced several major scientific grant agencies to make immediate data sharing a mandatory requirement of working on the human genome. Scientists who refused to share data would get no grant money to do research. This changed the game, and immediate sharing of human genetic data became the norm.
I think "Ok, so talking through something is important when most of the parties involved would be amenable to resolving the issue, since they already have clout and don't really need to fear rivals so much. When you happen to be part of a relatively powerful group that can make things happen via consensus, and there seems to be an important issue you could garner consensus on, it would be good to gather the group and have a chat." This seems kind of trivial though.
Honestly sharing one's utility function so that a Pareto optimum can be reached is the cooperative move in a prisoner's dilemma where each side's "move" is the utility function it claims to have.
Indeed, and it is a prisoner's dilemma in which the other person doesn't even know, after the fact, whether you defected.
Kind of. If you have data but don't share it, you can't publish off of it either. And there are grant monitors. If you're ordering the reagents for several times the sequencing you're putting up in the bank, they may well ask questions. Some are more assiduous than others, but do you want to take that chance?
Lessdazed and I are talking about honestly sharing utility functions, which is a somewhat different game from deciding whether to defect on a cooperative agreement that already has an enforcement mechanism in place.
My point was that if a Pareto optimum is going to be implemented, then the 'defection' would be to provide a utility function constructed so that the calculated Pareto optimum comes out as favorable as possible to your actual utility function. Talking up, understating, or outright falsifying your desires in a negotiation is common practice.
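To make that concrete, here is a minimal sketch (the utility functions and the bargaining rule are my own illustrative assumptions, not anything from the thread): two parties split a divisible resource, and a mechanism picks the split maximizing the Nash product of the *reported* utilities. If one party exaggerates its marginal need, the computed "fair" split shifts in its favor even though both parties' true preferences are identical.

```python
# Hypothetical example: how misreporting a utility function can bias
# a bargaining outcome that is computed from reported preferences.
# The mechanism picks the split x in [0, 1] maximizing the Nash
# product report_a(x) * report_b(1 - x), found here by grid search.

def nash_split(report_a, report_b, steps=10_000):
    """Return the split x maximizing the product of the reported utilities."""
    best_x, best_val = 0.0, -1.0
    for i in range(steps + 1):
        x = i / steps
        val = report_a(x) * report_b(1 - x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

true_u = lambda x: x ** 0.5        # both parties' actual utility (assumed)

# Case 1: both report honestly -> the symmetric split.
x_honest = nash_split(true_u, true_u)

# Case 2: party A misreports, claiming steep marginal need (u(x) = x^2).
exaggerated = lambda x: x ** 2
x_skewed = nash_split(exaggerated, true_u)

print(f"honest split for A:    {x_honest:.2f}")   # ~0.50
print(f"split after misreport: {x_skewed:.2f}")   # ~0.80
print(f"A's true-utility gain: {true_u(x_skewed) - true_u(x_honest):.3f}")
```

The outcome is still on the Pareto frontier of the reported utilities, which is exactly the problem: the mechanism can't distinguish a genuinely needier party from a strategic one without some way to verify claims, which is why the symmetric, mutually observable situation discussed below is the easier case.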
Ah, yes. Thanks for clearing that up.
In this particular case, then, I don't see what lies they could have profitably told. The situation was very symmetric, and falsifications of the utility function would kind of stand out.
In other cases, yes, that's a problem. In the case where everyone can verify what everyone really needs or normatively should need*, then this works a lot better.
*I mean the case where, say, a company would benefit from more secrecy, but only at the cost of keeping effective medicines off the market. If they object on grounds of profit, the rest of the community can rightly give them the finger. And any alternative justifications for secrecy they offer will face additional scrutiny, because everyone can see how secrecy would specially benefit them.
Michael Nielsen's new book Reinventing Discovery is invigorating. Here's one passage on how a small group talked an issue through and had a large impact on scientific progress: