RobinHanson comments on Probability Space & Aumann Agreement - Less Wrong

34 Post author: Wei_Dai 10 December 2009 09:57PM


Comment author: RobinHanson 10 December 2009 10:58:49PM 4 points [-]

Sure, all by itself this first paper doesn't seem very relevant to real disagreements, but there is a whole literature beyond this first paper that weakens the assumptions required for similar results. Keep reading.

Comment author: Wei_Dai 10 December 2009 11:06:30PM 3 points [-]

I already scanned through some of the papers that cite Aumann, but didn't find anything that made me change my mind. Do you have any specific suggestions on what I should read?

Comment author: timtyler 11 December 2009 12:00:08AM 3 points [-]

Seen Hanson's own http://hanson.gmu.edu/deceive.pdf - and its references?

Comment author: Wei_Dai 11 December 2009 01:04:59AM 3 points [-]

Yes, I looked at that paper, and also Agreeing To Disagree: A Survey by Giacomo Bonanno and Klaus Nehring.

Comment author: HalFinney 11 December 2009 04:49:40PM *  3 points [-]

How about Scott Aaronson:

http://www.scottaaronson.com/papers/agree-econ.pdf

He shows that you do not have to exchange very much information to come to agreement. Maybe this does not address the potential intractability of the deductions needed to reach agreement (the wannabe papers may do this), but I think it shows that it is not necessary to exchange all relevant information.

The bottom line for me is the flavor of the Aumann theorem: that there must be a reason why the other person is being so stubborn as not to be convinced by your own tenacity. I think this insight is the key to the whole conclusion and it is totally overlooked by most disagreers.
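The kind of protocol Aaronson analyzes builds on the classic iterated-announcement dialogue of Geanakoplos and Polemarchakis: the agents repeatedly announce only their current posteriors, never their raw evidence, and still converge. A minimal sketch of that dialogue (the states, event, and partitions below are invented for illustration, not taken from Aaronson's paper):

```python
from fractions import Fraction

# Four equally likely states, an event E, and one information
# partition per agent (illustrative numbers only).
states = {1, 2, 3, 4}
E = {1, 4}
part_a = [{1, 2}, {3, 4}]   # what A can distinguish
part_b = [{1, 2, 3}, {4}]   # what B can distinguish
true_state = 1

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(info):
    """P(E | info) under the uniform common prior."""
    return Fraction(len(E & info), len(info))

public = set(states)  # states consistent with every announcement so far
for rnd in range(1, 10):
    # A announces its posterior; B learns which states are consistent
    # with that announcement, and vice versa.
    pa = posterior(cell(part_a, true_state) & public)
    after_a = {w for w in public
               if posterior(cell(part_a, w) & public) == pa}
    pb = posterior(cell(part_b, true_state) & after_a)
    public = {w for w in after_a
              if posterior(cell(part_b, w) & after_a) == pb}
    print(f"round {rnd}: A says {pa}, B says {pb}")
    if pa == pb:
        break
```

Each announcement shrinks the publicly known set of possible states; here the agents reach a common posterior of 1/2 after two rounds, even though A never reveals which cell of its partition it occupies.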

Comment author: Wei_Dai 11 December 2009 09:52:22PM *  5 points [-]

I haven't read the whole paper yet, but here's one quote from it (page 5):

The dependence, alas, is exponential in 1 / (δ^3 ε^6), so our simulation procedure is still not practical. However, we expect that both the procedure and its analysis can be considerably improved.

Scott is talking about the computational complexity of his agreement protocol here. Even if the complexity can be improved to something considered practical from a computer-science perspective, it will likely still be impractical for human beings, most of whom can't even multiply three-digit numbers in their heads.
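For a sense of scale: the quantity 1/(δ³ε⁶) in the quoted bound is already astronomical for modest accuracy targets, and the complexity is exponential *in* that quantity. A quick evaluation (the δ and ε values below are illustrative, not from the paper):

```python
# Evaluate 1/(delta^3 * eps^6) for a few illustrative accuracy targets.
# The complexity bound is exponential IN this quantity, so these
# numbers are only the exponent.
for delta, eps in [(0.1, 0.1), (0.05, 0.05), (0.01, 0.01)]:
    exponent = 1 / (delta**3 * eps**6)
    print(f"delta={delta}, eps={eps}: 1/(delta^3 * eps^6) ≈ {exponent:.2g}")
```

Even at δ = ε = 0.1 the exponent is already around 10⁹, which makes the "not practical" remark an understatement.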

Comment author: timtyler 12 December 2009 03:44:30PM *  0 points [-]

To quote from the abstract of Scott Aaronson's paper:

"A celebrated 1976 theorem of Aumann asserts that honest, rational Bayesian agents with common priors will never agree to disagree": if their opinions about any topic are common knowledge, then those opinions must be equal."

Even "honest, rational, Bayesian agents" seems too weak. Goal-directed agents who are forced to signal their opinions to others can benefit from voluntarily deceiving themselves in order to effectively deceive others. Their self-deception makes their opinions more credible - since they honestly believe them.

If an agent honestly believes what they are saying, it is difficult to accuse them of dishonesty - and such an agent's understanding of Bayesian probability theory may be immaculate.

Such agents are not constrained to agree by Aumann's disagreement theorem.

Comment author: gwern 15 May 2010 05:12:06PM 2 points [-]

Goal-directed agents who are forced to signal their opinions to others can benefit from voluntarily deceiving themselves in order to effectively deceive others. Their self-deception makes their opinions more credible - since they honestly believe them.

This seems to reflect human cognitive architecture more than a general fact about optimal agents or even most/all goal-directed agents. That humans are not optimal is nothing new around here, nor that the agreement theorems have little relevance to real human arguments. (I can't be the only one to read the papers and think, 'hell, I don't trust myself as far as even the weakened models, much less Creationists and whatnot', and have little use for them.)

Comment author: timtyler 11 December 2009 08:48:30PM -1 points [-]

The reason is often that you regard your own perceptions and conclusions as trustworthy and in accordance with your own aims - whereas you don't have a very good reason to believe the other person is operating in your interests (rather than selfishly trying to manipulate you to serve their own). They may reason in much the same way.

Probably much the same circuitry continues to operate even in those very rare cases where two truth-seekers meet, and convince each other of their sincerity.

Comment author: SilasBarta 10 December 2009 11:57:07PM 3 points [-]

Uh oh, it looks like you guys are doing the Aumann "meet" operation to update your beliefs about Aumann. Make sure to keep track of the levels of recursion...