There are many pleasant benefits of improved rationality:

I'd like to mention two other benefits of rationality that arise when working with other rationalists, which I've noticed since moving to Berkeley to work with Singularity Institute (first as an intern, then as a staff member).

The first is the comfort of knowing that people you work with agree on literally hundreds of norms and values relevant to decision-making: the laws of logic and probability theory, the recommendations of cognitive science for judgment and decision-making, the values of broad consequentialism and x-risk reduction, etc. When I walk into a decision-making meeting with Eliezer Yudkowsky or Anna Salamon or Louie Helm, I notice I'm more relaxed than when I walk into a meeting with most people. I know that we're operating on Crocker's rules, that we all want to make the decisions that will most reduce existential risk, and that we agree on how we should go about making such a decision.

The second pleasure, related to the first, is the extremely common result of reaching Aumann agreement after initially disagreeing. Having worked closely with Anna on both the rationality minicamp and a forthcoming article on intelligence explosion, we've had many opportunities to Aumann on things. We start by disagreeing on X. Then we reduce knowledge asymmetry about X. Then we share additional arguments for multiple potential conclusions about X. Then we both update from our initial impressions, also taking into account the other's updated opinion. In the end, we almost always agree on a final judgment or decision about X. And it's not that we agree to disagree and just move forward with one of our judgments. We actually both agree on what the most probably correct judgment is. I've had this experience literally hundreds of times with Anna alone.
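As a side note for readers who want the math behind "taking into account the other's updated opinion": here is a minimal toy sketch (my own illustration, not a description of how we actually deliberate), assuming a shared prior and evidence that is conditionally independent given the truth of X. Under those assumptions each person's reported posterior carries their likelihood ratio, so the pair can recover the combined posterior without exchanging the underlying evidence itself.

    def odds(p):
        return p / (1 - p)

    def prob(o):
        return o / (1 + o)

    # Shared prior that proposition X is true (a made-up number for illustration).
    prior = 0.5

    # Each person privately updates on independent evidence and reports a posterior.
    posterior_a = 0.8   # one person's posterior after seeing her evidence
    posterior_b = 0.3   # the other's posterior after seeing his evidence

    # With a common prior and conditionally independent evidence,
    # likelihood ratios multiply:
    # combined odds = prior odds * (odds_a / prior odds) * (odds_b / prior odds)
    combined = prob(odds(prior) * (odds(posterior_a) / odds(prior))
                                * (odds(posterior_b) / odds(prior)))
    print(round(combined, 3))  # 0.632 -- between the two reports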

Being more rational is a pleasure. Being rational in the company of other rationalists is even better. Forget not the good news of situationist psychology.


The second pleasure, related to the first, is the extremely common result of reaching Aumann agreement after initially disagreeing.

It's never "Aumann agreement", it's just agreement, even if more specifically agreement on actual belief (rather than on ostensible position) reached by forming common understanding.

Do you also object to the use of the term "Aumann agreement" by Wei Dai and on the LW wiki?

Wei Dai discusses the actual theorem, and in the last section expresses a sentiment similar to mine. I disapprove of the first paragraph of "Aumann agreement" wiki page (but see also the separate Aumann's agreement theorem wiki page).

FWIW, I wrote up a brief explanation and proof of Aumann's agreement theorem.

The wiki entry does not look good to me.

Unless you think I'm so irredeemably irrational that my opinions anticorrelate with truth, then the very fact that I believe something is Bayesian evidence that that something is true

This sentence is problematic. Beliefs are probabilistic, and the import of some rationalist's estimate varies according to one's own knowledge. If I am fairly certain that a rationalist has been getting flawed evidence (that is selected to support a proposition) and thinks the evidence is probably fine, that rationalist's weak belief that that proposition is true is, for me, evidence against the proposition.

Consider: if I'm an honest seeker of truth, and you're an honest seeker of truth, and we believe each other to be honest, then we can update on each other's opinions and quickly reach agreement.

Iterative updating is a method rationalists can use when they can't share information well (as humans often can't); the result of that process is agreement, but not Aumann agreement.

Aumann agreement is a result of two rationalists sharing all information and updating ideally. It's worth knowing so that one can assess a situation after two reasoners have reached their conclusions based on identical information: if those conclusions are not identical, then one or both are not perfect rationalists. But one doesn't get much benefit from knowing the theorem, and wouldn't even if people actually could share all their information. If one updates properly on evidence, one doesn't need to know about Aumann agreement to reach proper conclusions, because it has nothing to do with the normal process of reasoning about most things; and likewise, if one knew the theorem but not how to update, it would be of little help.

As Vladimir_Nesov said:

The crucial point is that it's not a procedure, it's a property, an indicator and not a method.

It's especially unhelpful for humans as we can't share all our information.

As Wei_Dai said:

Having explained all of that, it seems to me that this theorem is less relevant to a practical rationalist than I thought before I really understood it. After looking at the math, it's apparent that "common knowledge" is a much stricter requirement than it sounds. The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition. But in that case, agreement itself is obvious and there is no need to learn or understand Aumann's theorem.

So Wei_Dai's use is fine, as in his post he describes its limited usefulness.

at no point in a conversation can Bayesians have common knowledge that they will disagree.

As I don't understand this at all, perhaps this sentence is fine and I badly misunderstand the concepts here.

Aumann agreement is a result of two rationalists sharing all information and ideally updating.

No, this is not the case. All they need is a common prior and common knowledge of their probabilities. The whole reason Aumann agreement is clever is because you're not sharing the evidence that convinced you.

See, for example, the original paper.
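To make concrete the point that no underlying evidence needs to be shared, here is a small simulation in the style of Geanakoplos and Polemarchakis's "We Can't Disagree Forever": two agents with a common prior alternately announce only their current posteriors for an event; each announcement rules out the states in which the speaker would have said something else, and agreement follows. The four-state model, partitions, and event below are invented purely for illustration.

    from fractions import Fraction

    # Four equally likely states; both agents share this prior (toy example).
    states = {1, 2, 3, 4}
    prior = {w: Fraction(1, 4) for w in states}

    # The event they argue about, and each agent's private information partition.
    A = {1, 4}
    partition_1 = [{1, 2}, {3, 4}]
    partition_2 = [{1, 2, 3}, {4}]

    def cell(partition, w):
        """The partition cell containing state w (what that agent observes)."""
        return next(c for c in partition if w in c)

    def posterior(event, info):
        """P(event | info) under the common prior."""
        return sum(prior[w] for w in info & event) / sum(prior[w] for w in info)

    def dialogue(true_state, max_rounds=20):
        """Agents take turns announcing their posterior for A. Each announcement
        rules out states where the speaker would have announced a different
        value, shrinking the commonly known set until the posteriors coincide."""
        public = set(states)  # states consistent with every announcement so far
        for r in range(max_rounds):
            speaker = partition_1 if r % 2 == 0 else partition_2
            said = posterior(A, cell(speaker, true_state) & public)
            public = {w for w in public
                      if posterior(A, cell(speaker, w) & public) == said}
            p1 = posterior(A, cell(partition_1, true_state) & public)
            p2 = posterior(A, cell(partition_2, true_state) & public)
            print(f"round {r + 1}: announced {said}; beliefs now {p1} vs {p2}")
            if p1 == p2:
                return p1

    dialogue(true_state=1)
    # round 1: announced 1/2; beliefs now 1/2 vs 1/3
    # round 2: announced 1/3; beliefs now 1/2 vs 1/3
    # round 3: announced 1/2; beliefs now 1/2 vs 1/2

The agents start at 1/2 versus 1/3 and end agreeing on 1/2, without either one ever revealing which partition cell they observed.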

Updated. (My brain, I didn't edit the comment.)

at no point in a conversation can Bayesians have common knowledge that they will disagree.

As I don't understand this at all, perhaps this sentence is fine and I badly misunderstand the concepts here.

"Common knowledge" is a far stronger condition than it sounds.

So "at no point in a conversation can Bayesians have common knowledge that they will disagree," means "'Common knowledge' is a far stronger condition than it sounds," and nothing more and nothing less?

See, "knowledge" is of something that is true, or at least actually interpreted input. So if someone can't have knowledge of it, that implies i's true and one merely can't know it. If there can't be common knowledge, that implies that at least one can't know the true thing. But the thing in question, "that they will disagree", is false, right?

I do not understand what the words in the sentence mean. It seems to read:

"At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc."

But the theorem is that they will not disagree on posteriors...

So "at no point in a conversation can Bayesians have common knowledge that they will disagree," means "'Common knowledge' is a far stronger condition than it sounds," and nothing more and nothing less?

No, for a couple of reasons.

First, I misunderstood the context of that quote. I thought that it was from Wei Dai's post (because he was the last-named source that you'd quoted). Under this misapprehension, I took him to be pointing out that common knowledge of anything is a fantastically strong condition, and so, in particular, common knowledge of disagreement is practically impossible. It's theoretically possible for two Bayesians to have common knowledge of disagreement (though, by the theorem, they must have had different priors). But it can't happen in the real world, such as in Luke's conversations with Anna.

But I now see that this whole line of thought was based on a silly misunderstanding on my part.

In the context of the LW wiki entry, I think that the quote is just supposed to be a restatement of Aumann's result. In that context, Bayesian reasoners are assumed to have the same prior (though this could be made clearer). Then I unpack the quote just as you do:

"At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc."

As you point out, by Aumann's theorem, they won't disagree on posteriors, so they will never have common knowledge of disagreement, just as the quote says. Conversely, if they have common knowledge of posteriors, but, per the quote, they can't have common knowledge of disagreement, then those posteriors must agree, which is Aumann's theorem. In this sense, the quote is equivalent to Aumann's result.

Apparently the author doesn't use the word "knowledge" in such a way that to say "A can't have knowledge of X" is to imply that X is true. (Nor do I, FWIW.)
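For reference, the equivalence being unpacked above can be stated compactly. The following is my paraphrase in standard notation, assuming the usual textbook setup of a finite state space and partitional information:

    Let $\Omega$ be a finite state space with a common prior $P$, and let
    $\Pi_1, \Pi_2$ be the agents' information partitions. An event $E$ is
    common knowledge at $\omega$ iff $E$ contains the cell of the meet
    $\Pi_1 \wedge \Pi_2$ (the finest common coarsening) that contains $\omega$.
    Aumann's theorem: if the posteriors
    $$ q_i = P(A \mid \Pi_i(\omega)), \qquad i = 1, 2, $$
    are common knowledge at $\omega$, then $q_1 = q_2$. Contrapositively, as
    in the wiki's phrasing, the agents can never have common knowledge that
    they disagree about $A$.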

Yeah; "Aumann agreement" is (to my knowledge) my own invented term by which I mean "Agreement reached by, among other things, taking into account the Bayesian evidence the other's testimony."

taking into account the Bayesian evidence of the other's testimony

This usually seems like an unimportant component (because it's unreliable and difficult to use); most of the work is done by convincing argument, which helps with inferential difficulties rather than lack of information.

Agreed.

Then it seems like your definition is meaningless. Does your invented term mean something like "sharing information and collaboratively trying to reach the best answer?"

As above, I use "Aumann agreement" to mean "Agreement reached by, among other things, taking into account the Bayesian evidence the other's testimony." Vladimir is right that most of the work is done by convincing argument in most cases. However, there are many cases (e.g., "which sentence sounds better in this paragraph?") where taking the evidence of the other's opinion actually does change the alternative. Also, Anna and I (for example) have quite a lot of respect for the other's opinion on many subjects, and so we update more heavily from each other's testimony than most people would.

I don't think Aumann agreement is a good term for this; there's a huge difference between that mathematically precise procedure and the fuzzy process you're describing.

The crucial point is that it's not a procedure, it's a property, an indicator and not a method.

I'm sorry, I don't see what you're getting at I'm afraid!

Aumann agreement is already there, it's a fact of a certain situation, not a procedure for getting to an agreement, unlike the practice of forming a common understanding Luke talked about. My comment was basically a pun on your use of the word "procedure".

Agreed. This decision-making method is so common we normally don't name it. E.g. "I was going to dye my hair, but my friend told me about the terrible experience she had, and now I think I'll go to a salon instead of trying it at home." I don't see a need to make up jargon for "considering the advice of trusted people."

It seems like the purpose of this post was mostly to share your enjoyment of how wise your coworkers are and how well you cooperate with each other. Which is fine, but let's not technify it unnecessarily.

Wei_Dai used the term back in 2009.

It's always nice to hear that people are deriving utility (other than just the fun of discussion) out of all the stuff we talk about on this site. With that said, I wouldn't emphasize the first benefit you listed too strongly.

Yeah, it's true that surrounding yourself with people who agree with you on stuff is fun - and productive, if you agree correctly. But it's not a specific benefit of rationality - if you happened to believe that decisions should be made by searching your heart for the Holy Spirit's guidance, you would get exactly the same sense of subjective well-being by joining a group that believes the same thing. So to someone who isn't already a member of your rationalist in-group, this isn't going to look particularly appealing.

On the other hand, the second item does seem like something that's specific to a rationalist community. I would be curious to see if anyone could think of more things like that - if enough come up, it could make good reading for people who aren't necessarily enthusiastic about the topics LW covers.

I've heard Quakers praise Quaker decision-making (like consensus, but with more pretense at being influenced by the Spirit) for the warm feeling it gives them. But those are only the ones who haven't snuck out before the meeting due to the incredibly long time it takes.

curious to see if anyone could think of more things like that - if enough come up, it could make good reading for people who aren't necessarily enthusiastic about the topics LW covers.

The less enthusiastic they are, the better reading that would be.

This post kind of makes me sad. I'm not sure what the post's purpose is, but it has certainly informed me that I am not likely to enjoy those great things, as I have no friends, and if I did, they wouldn't even try to be rational.

Probably.

Edit: It seems that responses to this message take the form of either, "Aw, how sad! This guy needs help," or, "How pathetic! What a loser!" I would appreciate if these responses were annulled. This comment is only a statement of fact, not a statement of depression, not a cry for help, not even a complaint.

Edit, March 2012: I regret making this comment.

Hear, hear. Reading about Luke's wonderful conversion from irrationality and what working with the SIAI is like has just made me realise how profoundly irrational the social interactions of us normal folk are. Knowing my acquaintances as well as I do, I'm unfortunately afraid of even pointing this out to them.

It seems likely to me that you can increase your likelihood of having peers who agree on norms with you, and who are able to come to agreement on statements based on data and mutual respect for one another's ability to make true statements. It'll never be a sure thing, certainly (and I doubt it's a sure thing for Luke), but it seems likely that you can increase it from where it is today.

Does that seem likely to you?

If so, does it seem worth doing?


Aren't these just the pleasures of being in a group of likeminded people?

Possibly, but I think it's easier to get this with other rationalists than groups of other epistemologies. Several differences:

  • Other ways of thinking produce different conclusions with the same information, e.g. 2 people think the same way, but one's a Christian and one's a Muslim. So there isn't the benefit of Aumann agreement.

  • There are objective standards that all parties know everyone agrees on without prior discussion. Deviations from these can be noticed.

  • Rational people change their minds occasionally, which makes any discussion more pleasurable.


Obviously, it's epistemically good to change your mind after receiving better information... but there also seems to be a tendency to cling to our opinions. It seems that 'being wrong' negatively affects your self-image and status. Even if we (as aspiring rationalists) can overcome such biases, how do we deal with that in a sub-rational world which sees things differently?

People seem to like people who are very confident about their ideas, rather than people who change their minds, even for apparently good reasons.

I've noticed there is a certain comic effect to immediately and matter-of-factly agreeing with those arguments for your opponent's position that are correct, but were picked as debate-soldiers, expected to be fought or in some way excused or objected to. :-)

I had enormous dirty fun with this while on holiday. I got talking to a very smart (ex-LLNL nuclear engineer) ninety-year-old who was also very right wing. He proposed to me that the Government should harvest the organs of homeless people in order to give them to combat veterans. What he wanted me to say, of course, was that this proposal was outrageous and wrong, that human life was of greater value than that, and so on, so he could then say that these people were a drain on society and the people who'd made such sacrifices were more important, and so on.

Instead I said "Wouldn't it be cheaper to buy the organs from people in the third world? If you wanted to capture homeless people and take their organs, you'd need some sort of legal procedure to decide who was eligible, and there'd be appeals and so on, and it would all cost about as much as sentencing someone to death in the USA does now, which I'd guess must be hundreds of thousands of dollars at least. There must be plenty of people in poorer countries who would sacrifice their lives for a fraction of that money to feed their families in perpituity. There would be no use of force, no mistaken killings, and their organs would be higher quality. I'm not aware of a problem with getting organs for veterans, but if there is, that seems like a more efficient way to solve it."

His response? He went and got his CV so I'd be impressed at what a smart fellow he was!

You may be able to get an even better kidney-for-the-buck ratio (and increased moral outrage) with a lottery system: get $5,000 for a one-in-ten chance of losing a kidney; or $50,000 for a one-in-ten chance of being killed and having all your organs harvested.

That would be like signing up for a particularly high-risk job, like soldier.

To ensure people don't defect when they discover they've lost the lottery, you'd have to play lethal injection Russian Roulette.

The optimal system would probably be one where ten victi, uh, candidates around the world are simultaneously given a sleeping pill, attached to a machine with a breathing tube, and once they are asleep a central server under high scrutiny randomly triggers death on one of the machines; upon waking up, the survivors are given their money.

OK, I think this thread is creepy enough now.


Only on LW can one get comments like that :-)

I was actually planning a 'However, ' in that comment but I will leave it as it is now. However, I do think that it is more common not to (openly) change one's opinion -- especially for people in power.

The rule-proving exception in the famous quote, attributed to Keynes:

When the facts change, I change my mind. What do you do, sir?

(and I forgive him the colloquial use of the word 'fact'...)

People seem to like people who are very confident about their ideas, rather than people who change their minds, even for apparently good reasons.

I'd certainly hope that I'm not one of the people who likes confident people rather than mind-changing people.

I have also found that since changing my mind more often as a result of reading Less Wrong, I've had people greatly approve of me publicly stating that I was wrong, and that someone has convinced me. So that's some evidence against what you're claiming.


It's great you can change your mind when so compelled by the facts - and you are right, maybe people do underestimate how good it looks to (not too often :-) admit that you were wrong.

Admitting failures (or not) is discussed at length in the nice pop-psych book Mistakes Were Made (But Not by Me).

I'm currently trying to cultivate a more carefree/light-hearted/forager aesthetic to that end.

People like you more if you actually consider their opinions, and changing your mind is okay as long as it's toward their beliefs.

From the third person perspective, this can look high status if you seem to be more cosmopolitan as a result of it. I emphasize the fact that I'm having lots of fun.

That doesn't seem to have undermined people's confidence in my advice, but I'm not sure how it impacts perceptions of leadership that I might have in the future.

I wonder, how do you typically come to an agreement? Suppose Anna says A and you say B. Does one come to see that the other was right and you agree on either A or B, or do you both see that you were both wrong and you agree on a new position C?

All three types, perhaps 35% each on A and B, and 10% on C, and 20% on fourth and fifth and nth solutions.