There are many pleasant benefits of improved rationality:
- Winning more often.
- Better affective forecasting.
- Better self-help skills (e.g., CBT is applied rationality).
I'd like to mention two other benefits of rationality that arise when working with other rationalists, which I've noticed since moving to Berkeley to work with Singularity Institute (first as an intern, then as a staff member).
The first is the comfort of knowing that people you work with agree on literally hundreds of norms and values relevant to decision-making: the laws of logic and probability theory, the recommendations of cognitive science for judgment and decision-making, the values of broad consequentialism and x-risk reduction, etc. When I walk into a decision-making meeting with Eliezer Yudkowsky or Anna Salamon or Louie Helm, I notice I'm more relaxed than when I walk into a meeting with most people. I know that we're operating on Crocker's rules, that we all want to make the decisions that will most reduce existential risk, and that we agree on how we should go about making such a decision.
The second pleasure, related to the first, is the extremely common experience of reaching Aumann agreement after initially disagreeing. Having worked closely with Anna on both the rationality minicamp and a forthcoming article on intelligence explosion, we've had many opportunities to Aumann on things. We start by disagreeing about X. Then we reduce the knowledge asymmetry about X. Then we share additional arguments for the various candidate conclusions about X. Then we each update from our initial impression, taking into account the other's updated opinion. In the end, we almost always agree on a final judgment or decision about X. And it's not that we agree to disagree and just move forward with one of our judgments; we both come to agree on which judgment is most probably correct. I've had this experience literally hundreds of times with Anna alone.
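The arithmetic behind that process can be sketched in odds form. This is a minimal illustration with made-up numbers, and it assumes the two pieces of evidence are conditionally independent; it is not a model of any particular disagreement:

```python
from fractions import Fraction

# Two reasoners share a prior on proposition X but hold different
# evidence. Once each has seen the other's evidence (the knowledge
# asymmetry is gone), conditioning on the pooled evidence forces the
# same posterior. Likelihood ratios are invented for illustration and
# assumed conditionally independent.
prior_odds = Fraction(1, 1)          # 50/50 prior on X
likelihood_ratio_a = Fraction(3, 1)  # A's evidence favors X 3:1
likelihood_ratio_b = Fraction(1, 2)  # B's evidence favors not-X 2:1

# Before sharing, they disagree:
odds_a = prior_odds * likelihood_ratio_a  # 3:1 for X
odds_b = prior_odds * likelihood_ratio_b  # 1:2 against X

# After sharing, both condition on all the evidence and agree:
pooled_odds = prior_odds * likelihood_ratio_a * likelihood_ratio_b
posterior = pooled_odds / (1 + pooled_odds)  # odds 3:2 -> probability 3/5
print(posterior)  # 3/5
```

Exact `Fraction` arithmetic is used so the agreement is an identity rather than a floating-point coincidence.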
Being more rational is a pleasure. Being rational in the company of other rationalists is even better. Forget not the good news of situationist psychology.
The wiki entry does not look good to me.
This sentence is problematic. Beliefs are probabilistic, and the weight one should give some rationalist's estimate varies with one's own knowledge. If I am fairly certain that a rationalist has been getting flawed evidence (evidence selected to support a proposition) while he thinks the evidence is probably fine, then his weak belief that the proposition is true is, for me, evidence against the proposition.
Iterative updating is a method rationalists can use when they can't share information (as humans often can't do well), but the result of that process is mere agreement, not Aumann agreement.
Aumann agreement is the result of two rationalists sharing all their information and updating ideally. The theorem is worth knowing so that one can assess a situation after two reasoners have reached conclusions from identical information: if those conclusions differ, then at least one of them is not a perfect rationalist. But one doesn't get much benefit from knowing the theorem, and wouldn't even if people actually could share all their information. If one updates properly on evidence, one doesn't need Aumann's theorem to reach proper conclusions, since it plays no role in the normal process of reasoning about most things; and if one knew the theorem but not how to update, it would be of little help.
As Vladimir_Nesov said:
It's especially unhelpful for humans as we can't share all our information.
As Wei_Dai said:
So Wei_Dai's use is fine, since in his post he describes its limited usefulness.
As I don't understand this at all, perhaps this sentence is fine and I badly misunderstand the concepts here.
No, this is not the case. All they need is a common prior and common knowledge of their probabilities. The whole reason Aumann agreement is clever is because you're not sharing the evidence that convinced you.
See, for example, the original paper.
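The point that agents need only a common prior and common knowledge of posteriors, not shared evidence, can be illustrated with the posterior-exchange dialogues studied in the literature following Aumann's paper. Below is a minimal sketch: the nine-state example, the function names, and the fixed round count are my own constructions, chosen so the dialogue reaches a fixed point:

```python
from fractions import Fraction

def cell_of(partition, state):
    """The cell of `partition` containing `state` (what the agent knows)."""
    return next(c for c in partition if state in c)

def posterior(event, cell):
    """P(event | cell) under a uniform prior on the states."""
    return Fraction(len(event & cell), len(cell))

def refine(partition, announced):
    """Split each cell by announced posterior: states that would have
    produced different announcements can now be told apart."""
    refined = []
    for cell in partition:
        groups = {}
        for s in sorted(cell):
            groups.setdefault(announced[s], set()).add(s)
        refined.extend(groups.values())
    return refined

def dialogue(states, event, part1, part2, rounds=5):
    """Agents alternately announce posteriors (never raw evidence) and
    refine their information; a few rounds suffice for this example."""
    for _ in range(rounds):
        ann1 = {s: posterior(event, cell_of(part1, s)) for s in states}
        part2 = refine(part2, ann1)
        ann2 = {s: posterior(event, cell_of(part2, s)) for s in states}
        part1 = refine(part1, ann2)
    return part1, part2

# Nine equally likely states; the event of interest is A = {3, 4}.
states = set(range(1, 10))
A = {3, 4}
part1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # agent 1's evidence partition
part2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]  # agent 2's evidence partition

# At the true state 1 they initially disagree: 1/3 versus 1/2.
print(posterior(A, cell_of(part1, 1)), posterior(A, cell_of(part2, 1)))

part1, part2 = dialogue(states, A, part1, part2)
# Both now assign 1/3, though neither ever revealed its raw evidence.
print(posterior(A, cell_of(part1, 1)), posterior(A, cell_of(part2, 1)))
```

Each announcement carries information (states that would have produced a different number get ruled out), which is why posteriors alone are enough to drive the two agents to the same answer.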