Less Wrong is a community blog devoted to refining the art of human rationality.

Vladimir_Nesov comments on The Pleasures of Rationality - Less Wrong

Post author: lukeprog 28 October 2011 02:35AM, 16 points


Comment author: Vladimir_Nesov 28 October 2011 03:04:39AM, 10 points

The second pleasure, related to the first, is the extremely common result of reaching Aumann agreement after initially disagreeing.

It's never "Aumann agreement"; it's just agreement, even if more specifically agreement on actual belief (rather than on ostensible position) reached by forming a common understanding.

Comment author: lukeprog 28 October 2011 03:10:02AM, 1 point

Yeah; "Aumann agreement" is (to my knowledge) my own invented term, by which I mean "Agreement reached by, among other things, taking into account, as Bayesian evidence, the other's testimony."

Comment author: Hansenista 01 November 2011 12:59:14AM, 2 points

Wei_Dai used the term back in 2009.

Comment author: Vladimir_Nesov 28 October 2011 03:13:36AM, 4 points

taking into account [as] the Bayesian evidence the other's testimony

This seems like a usually unimportant (and/because unreliable and difficult to use) component; most of the work is done by convincing argument, which helps with inferential difficulties rather than with lack of information.

Comment author: lukeprog 28 October 2011 03:18:03AM, 1 point

Agreed.

Comment author: juliawise 28 October 2011 08:47:39PM, 0 points

Then it seems like your definition is meaningless. Does your invented term mean something like "sharing information and collaboratively trying to reach the best answer?"

Comment author: lukeprog 28 October 2011 10:18:41PM, 3 points

As above, I use "Aumann agreement" to mean "Agreement reached by, among other things, taking into account, as Bayesian evidence, the other's testimony." Vladimir is right that in most cases most of the work is done by convincing argument. However, there are many cases (e.g., "which sentence sounds better in this paragraph?") where weighing the other's opinion as evidence actually does change which alternative one favors. Also, Anna and I (for example) have quite a lot of respect for each other's opinions on many subjects, and so we update more heavily on each other's testimony than most people would.
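The usage Luke describes here, treating a trusted colleague's stated opinion as Bayesian evidence, can be sketched numerically. This is a minimal illustration; the function name and all the probabilities are invented assumptions, not anything from the thread:

```python
# Toy sketch: updating on a colleague's stated opinion as Bayesian
# evidence.  All numbers are made-up assumptions for illustration.

def update_on_testimony(prior, p_given_true, p_given_false):
    """Posterior P(H | colleague endorses H) via Bayes' rule.

    prior         -- P(H) before hearing the opinion
    p_given_true  -- P(colleague endorses H | H is true)
    p_given_false -- P(colleague endorses H | H is false)
    """
    joint_true = prior * p_given_true
    joint_false = (1 - prior) * p_given_false
    return joint_true / (joint_true + joint_false)

# A somewhat-calibrated colleague (endorses H 80% of the time when it
# is true, 20% when it is false) moves a 50% prior up to 80%.
posterior = update_on_testimony(0.5, 0.8, 0.2)
print(round(posterior, 2))  # 0.8
```

The more respect one has for the other's calibration, the further apart `p_given_true` and `p_given_false` sit, and the larger the update, which matches Luke's point about updating heavily on Anna's testimony.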

Comment author: ciphergoth 29 October 2011 08:09:25AM, 3 points

I don't think Aumann agreement is a good term for this; there's a huge difference between that mathematically precise procedure and the fuzzy process you're describing.

Comment author: juliawise 29 October 2011 01:47:24PM, 4 points

Agreed. This decision-making method is so common we normally don't name it. E.g. "I was going to dye my hair, but my friend told me about the terrible experience she had, and now I think I'll go to a salon instead of trying it at home." I don't see a need to make up jargon for "considering the advice of trusted people."

It seems like the purpose of this post was mostly to share your enjoyment of how wise your coworkers are and how well you cooperate with each other. Which is fine, but let's not technify it unnecessarily.

Comment author: Vladimir_Nesov 29 October 2011 10:40:38AM, 3 points

The crucial point is that it's not a procedure, it's a property, an indicator and not a method.

Comment author: ciphergoth 31 October 2011 07:48:35AM, 0 points

I'm sorry; I don't see what you're getting at, I'm afraid!

Comment author: Vladimir_Nesov 31 October 2011 10:43:01AM, 1 point

Aumann agreement is already there, it's a fact of a certain situation, not a procedure for getting to an agreement, unlike the practice of forming a common understanding Luke talked about. My comment was basically a pun on your use of the word "procedure".

Comment author: lukeprog 03 November 2011 07:33:36AM, 0 points

Do you also object to the use of the term "Aumann agreement" by Wei Dai and on the LW wiki?

Comment author: Vladimir_Nesov 03 November 2011 10:46:56AM, 1 point

Wei Dai discusses the actual theorem, and in the last section expresses a sentiment similar to mine. I disapprove of the first paragraph of "Aumann agreement" wiki page (but see also the separate Aumann's agreement theorem wiki page).

Comment author: Tyrrell_McAllister 03 November 2011 09:06:29PM, 0 points

FWIW, I wrote up a brief explanation and proof of Aumann's agreement theorem.
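For reference, the theorem under discussion, in its standard form (this is the textbook statement, not quoted from Tyrrell_McAllister's write-up):

```latex
% Aumann's agreement theorem (standard statement)
\textbf{Theorem (Aumann, 1976).}
Let $(\Omega, P)$ be a finite probability space shared as a common prior
by agents $1$ and $2$, with information partitions $\mathcal{I}$ and
$\mathcal{J}$. Fix an event $E \subseteq \Omega$ and a state
$\omega \in \Omega$. If it is common knowledge at $\omega$ that agent
$1$'s posterior $P(E \mid \mathcal{I}(\omega))$ is $q_1$ and agent
$2$'s posterior $P(E \mid \mathcal{J}(\omega))$ is $q_2$, then
$q_1 = q_2$.
```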

Comment author: lessdazed 03 November 2011 01:13:52PM, 0 points

The wiki entry does not look good to me.

Unless you think I'm so irredeemably irrational that my opinions anticorrelate with truth, then the very fact that I believe something is Bayesian evidence that that something is true

This sentence is problematic. Beliefs are probabilistic, and the import of some rationalist's estimate varies according to one's own knowledge. If I am fairly certain that a rationalist has been getting flawed evidence (that is selected to support a proposition) and thinks the evidence is probably fine, that rationalist's weak belief that that proposition is true is, for me, evidence against the proposition.

Consider: if I'm an honest seeker of truth, and you're an honest seeker of truth, and we believe each other to be honest, then we can update on each other's opinions and quickly reach agreement.

Iterative updating is a method rationalists can use when they can't share information well (as humans often can't), but that process results in agreement, not Aumann agreement.

Aumann agreement is a result of two rationalists sharing all information and ideally updating. It's a thing to know so that one can assess a situation after two reasoners have reached conclusions based on identical information: if those conclusions are not identical, then one or both are not perfect rationalists. But one doesn't get much benefit from knowing the theorem, and wouldn't even if people actually could share all their information. If one updates properly on evidence, one doesn't need to know about Aumann agreement to reach proper conclusions, because it has nothing to do with the normal process of reasoning about most things; likewise, if one knew the theorem but not how to update, it would be of little help.

As Vladimir_Nesov said:

The crucial point is that it's not a procedure, it's a property, an indicator and not a method.

It's especially unhelpful for humans as we can't share all our information.

As Wei_Dai said:

Having explained all of that, it seems to me that this theorem is less relevant to a practical rationalist than I thought before I really understood it. After looking at the math, it's apparent that "common knowledge" is a much stricter requirement than it sounds. The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition. But in that case, agreement itself is obvious and there is no need to learn or understand Aumann's theorem.

So Wei_Dai's use is fine, as in his post he describes its limited usefulness.
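Wei_Dai's point, that once the agents simply tell each other I(w) and J(w) the agreement itself is obvious, can be sketched in a toy finite model. The state space, prior, and partitions below are invented assumptions for illustration:

```python
from fractions import Fraction

# Toy model for "tell each other I(w) and J(w)".  Four equally likely
# states; the event of interest is E = {0, 1}.
states = [0, 1, 2, 3]
prior = {w: Fraction(1, 4) for w in states}
event = {0, 1}
partition_i = [{0, 1}, {2, 3}]  # what agent 1 can distinguish
partition_j = [{0, 2}, {1, 3}]  # what agent 2 can distinguish

def cell(partition, w):
    """The agent's information at state w: the partition cell containing w."""
    return next(c for c in partition if w in c)

def posterior(info):
    """P(event | info) under the common prior."""
    return sum(prior[s] for s in info & event) / sum(prior[s] for s in info)

w = 1  # the true state
# Before sharing, the two posteriors differ:
assert posterior(cell(partition_i, w)) == 1
assert posterior(cell(partition_j, w)) == Fraction(1, 2)
# After each reports their cell, both condition on I(w) & J(w),
# and agreement is immediate, as the quote says:
shared = cell(partition_i, w) & cell(partition_j, w)
assert posterior(shared) == 1
```

Once the partitions are exchanged, the shared information partition makes the agreement trivial, so nothing about the theorem itself is doing any work.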

at no point in a conversation can Bayesians have common knowledge that they will disagree.

As I don't understand this at all, perhaps this sentence is fine and I badly misunderstand the concepts here.

Comment author: Larks 03 November 2011 11:15:48PM, 4 points

Aumann agreement is a result of two rationalists sharing all information and ideally updating.

No, this is not the case. All they need is a common prior and common knowledge of their probabilities. The whole reason Aumann agreement is clever is because you're not sharing the evidence that convinced you.

See, for example, the original paper.
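Larks's point, that the agents exchange posteriors rather than evidence, can be illustrated with the standard posterior-announcement dynamic: each agent repeatedly announces a posterior, and the other refines on what that announcement reveals. The nine-state example below is a common textbook-style construction, assumed here for illustration:

```python
from fractions import Fraction

# Agreement via announced posteriors only; neither agent ever shares
# the underlying evidence.  States, event, and partitions are assumed.
states = range(1, 10)                      # nine equally likely states
prior = {w: Fraction(1, 9) for w in states}
event = {3, 4}
p1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]     # agent 1's information
p2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]     # agent 2's information
true_state = 1

def post(c):
    """P(event | cell c) under the common prior."""
    return sum(prior[s] for s in c & event) / sum(prior[s] for s in c)

def announce(partition):
    """The posterior each state's cell would announce."""
    return {w: post(c) for c in partition for w in c}

def refine(partition, heard):
    """Split each cell by the posterior the other agent announced."""
    return [{w for w in c if heard[w] == v}
            for c in partition
            for v in sorted({heard[w] for w in c})]

def cell(partition, w):
    return next(c for c in partition if w in c)

# Initially the agents disagree: 1/3 vs 1/2.
assert post(cell(p1, true_state)) == Fraction(1, 3)
assert post(cell(p2, true_state)) == Fraction(1, 2)

# They alternate announcements until nothing changes.
for _ in range(10):
    p2 = refine(p2, announce(p1))
    p1 = refine(p1, announce(p2))

# The posteriors now agree, though no raw evidence was ever exchanged.
assert post(cell(p1, true_state)) == post(cell(p2, true_state)) == Fraction(1, 3)
```

The cleverness Larks refers to shows up in the loop: each announcement leaks just enough information about the announcer's cell to force convergence, without either agent handing over its partition.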

Comment author: lessdazed 04 November 2011 12:48:21AM, 0 points

Updated. (My brain, I didn't edit the comment.)

Comment author: Tyrrell_McAllister 03 November 2011 09:24:15PM, 2 points

at no point in a conversation can Bayesians have common knowledge that they will disagree.

As I don't understand this at all, perhaps this sentence is fine and I badly misunderstand the concepts here.

"Common knowledge" is a far stronger condition than it sounds.

Comment author: lessdazed 04 November 2011 12:55:17AM, 0 points

So "at no point in a conversation can Bayesians have common knowledge that they will disagree," means "'Common knowledge' is a far stronger condition than it sounds," and nothing more and nothing less?

See, "knowledge" is of something that is true, or at least of actually interpreted input. So if someone can't have knowledge of it, that implies it's true and one merely can't know it. If there can't be common knowledge, that implies that at least one party can't know the true thing. But the thing in question, "that they will disagree", is false, right?

I do not understand what the words in the sentence mean. It seems to read:

"At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc."

But the theorem is that they will not disagree on posteriors...

Comment author: Tyrrell_McAllister 04 November 2011 01:49:49AM, 0 points

So "at no point in a conversation can Bayesians have common knowledge that they will disagree," means "'Common knowledge' is a far stronger condition than it sounds," and nothing more and nothing less?

No, for a couple of reasons.

First, I misunderstood the context of that quote. I thought that it was from Wei Dai's post (because he was the last-named source that you'd quoted). Under this misapprehension, I took him to be pointing out that common knowledge of anything is a fantastically strong condition, and so, in particular, common knowledge of disagreement is practically impossible. It's theoretically possible for two Bayesians to have common knowledge of disagreement (though, by the theorem, they must have had different priors), but that can't happen in the real world, such as in Luke's conversations with Anna.

But I now see that this whole line of thought was based on a silly misunderstanding on my part.

In the context of the LW wiki entry, I think that the quote is just supposed to be a restatement of Aumann's result. In that context, Bayesian reasoners are assumed to have the same prior (though this could be made clearer). Then I unpack the quote just as you do:

"At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc."

As you point out, by Aumann's theorem, they won't disagree on posteriors, so they will never have common knowledge of disagreement, just as the quote says. Conversely, if they have common knowledge of posteriors, but, per the quote, they can't have common knowledge of disagreement, then those posteriors must agree, which is Aumann's theorem. In this sense, the quote is equivalent to Aumann's result.

Apparently the author doesn't use the word "knowledge" in such a way that to say "A can't have knowledge of X" is to imply that X is true. (Nor do I, FWIW.)