steven0461 comments on Less Wrong: Open Thread, September 2010 - Less Wrong

3 Post author: matt 01 September 2010 01:40AM




Comment author: steven0461 01 September 2010 10:32:27PM *  2 points [-]

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis?

Comment author: Wei_Dai 01 September 2010 11:47:07PM 2 points [-]

I think LWers have been using "Aumann agreement" to refer to the whole literature spawned by Aumann's original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I'm not sure if it's standard outside of our community.

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments

I'm not sure this is right... Here's what I wrote in Probability Space & Aumann Agreement:

But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.
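That inference process can be sketched in a toy model. Everything here -- the worlds, partitions, event, and prior -- is invented for illustration; this is a simplified rendering of the Geanakoplos-Polemarchakis back-and-forth, not a faithful implementation of any particular paper's protocol:

```python
from fractions import Fraction

def run_protocol(true_world, worlds, prior, part_a, part_b, event, rounds=6):
    """Agents alternately announce their posterior for `event`; each
    announcement publicly rules out the worlds inconsistent with it."""
    def cell(partition, w):
        # The partition cell containing world w (the agent's raw information).
        return next(c for c in partition if w in c)

    def post(info):
        # P(event | info) under the common prior.
        total = sum(prior[w] for w in info)
        return sum(prior[w] for w in info if w in event) / total

    public = set(worlds)   # worlds still consistent with all announcements
    said = []
    for r in range(rounds):
        part = part_a if r % 2 == 0 else part_b
        q = post(cell(part, true_world) & public)
        said.append(q)
        # Listeners infer: the speaker's refined cell must have posterior q.
        public = {w for w in public if post(cell(part, w) & public) == q}
    return said

# Uniform prior on four worlds; the event is {1, 4}; the true world is 1.
worlds = {1, 2, 3, 4}
prior = {w: Fraction(1, 4) for w in worlds}
A = [{1, 2}, {3, 4}]   # agent A's information partition
B = [{1, 3}, {2, 4}]   # agent B's information partition
said = run_protocol(1, worlds, prior, A, B, event={1, 4})
# Both announce 1/2 forever and agree there -- yet pooling their raw
# evidence ({1,2} & {1,3} = {1}) would have given posterior 1.
```

In this example the announcements carry no extra information (every cell has the same posterior), so the agents agree at 1/2 even though sharing evidence directly would have settled the question.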

Is there a result in the literature that shows something closer to your "one can learn from knowing other people's opinions without knowing their arguments"?

Comment author: steven0461 02 September 2010 12:11:42AM 1 point [-]

I haven't read your post and my understanding is still hazy, but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence? If they do, then I don't see how it could be true that the probability the agents end up agreeing on is sometimes different from the one they would have had if they were able to share information. In this sort of setting I think I'm comfortable calling it "updating on each other's opinions".

Regardless of Aumann-like results, I don't see how:

one can learn from knowing other people's opinions without knowing their arguments

could possibly be controversial here, as long as people's opinions probabilistically depend on the truth.
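A minimal Bayes calculation makes the point; the numbers here are invented for illustration:

```python
def update_on_opinion(prior_h, p_yes_given_h, p_yes_given_not_h):
    """Posterior P(H | expert endorses H), by Bayes' rule, treating the
    stated opinion itself as evidence -- no arguments needed."""
    p_yes = p_yes_given_h * prior_h + p_yes_given_not_h * (1 - prior_h)
    return p_yes_given_h * prior_h / p_yes

# Suppose an expert endorses H 80% of the time when H is true
# and only 30% of the time when it is false.
posterior = update_on_opinion(0.5, 0.8, 0.3)
# posterior = 0.4 / 0.55, about 0.727: the bare opinion moved us from 0.5,
# exactly because the opinion probabilistically depends on the truth.
```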

Comment author: Wei_Dai 02 September 2010 03:39:24AM *  2 points [-]

but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence?

You're right, sometimes the agreement protocol terminates before the agents fully reconstruct each other's evidence, and they end up with a different agreed probability than if they just shared evidence.

But my point was mainly that exchanging information like this by repeatedly updating on each other's posterior probabilities is not any easier than just sharing evidence/arguments. You have to go through these convoluted logical deductions to try to infer what evidence the other guy might have seen or what argument he might be thinking of, given the probability he's telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don't think humans can benefit from them because it's too hard to do these logical deductions in our heads.

Also, it seems pretty obvious that you can't offload the computational complexity of these protocols onto a third party. The problem is that the third party does not have full information of either of the original parties, so he can't compute the posterior probability of either of them, given an announcement from the other.

It might be that a specialized "disagreement arbitrator" can still play some useful role, but I don't see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.

Comment author: MBlume 02 September 2010 12:30:51AM 2 points [-]

for an ideal Bayesian, I think 'one can learn from X' is categorically true for all X....

Comment author: Perplexed 02 September 2010 12:52:02AM *  2 points [-]

... surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence?

They don't necessarily reconstruct all of each other's evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent's evidence samples ("4 reds and 4 blacks"), but they cannot reconstruct the exact sequences ("RRBRBBRB"). And they can update again to perfect agreement regarding the urn contents.
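A quick sketch of why only the counts are recoverable: with i.i.d. draws, the likelihood of a sample depends only on how many reds and blacks it contains, not on their order, so any two sequences with the same counts produce identical posteriors. The candidate urn compositions and uniform prior below are made up for illustration:

```python
from fractions import Fraction

def posterior_over_urns(sample, urns):
    """Posterior over candidate urn compositions after drawing `sample`
    ('R'/'B' string) with replacement, starting from a uniform prior.
    urns: dict name -> P(red on a single draw)."""
    def likelihood(p_red):
        out = Fraction(1)
        for draw in sample:
            out *= p_red if draw == "R" else (1 - p_red)
        return out

    weights = {name: likelihood(p) for name, p in urns.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

urns = {"mostly_red": Fraction(9, 10), "fair": Fraction(1, 2)}
a = posterior_over_urns("RRBRBBRB", urns)   # the sequence from the comment
b = posterior_over_urns("RRRRBBBB", urns)   # different order, same counts
# a == b: the posterior depends only on "4 reds and 4 blacks", which is
# why each agent can back out the other's counts but not the sequence.
```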


At least that is my understanding of Aumann's theorem.

Comment author: steven0461 02 September 2010 01:16:45AM 1 point [-]

That sounds right, but I was thinking of cases like this, where the whole process leads to a different (worse) answer than sharing information would have.

Comment author: Perplexed 02 September 2010 02:22:06AM 1 point [-]

Hmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information.

In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.

Comment author: timtyler 02 September 2010 08:56:49AM 0 points [-]

That comment leaves me wondering what "pure Bayesianism" is.

I don't think Bayesianism is a recipe for action in the first place - so how can "pure Bayesianism" be telling agents how they should be spending their time?

Comment author: Perplexed 02 September 2010 01:21:54PM 1 point [-]

By "pure Bayesianism", I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled "Comments" and particularly the subsection at the very end entitled "Another dimension?". A pure "Jaynes Bayesian" seeks the truth, not because it is useful, but rather because it is truth.

By contrast, we might consider a "de Finetti Bayesian" who seeks the truth so as not to lose bets to Dutch bookies, or a "Wald Bayesian" who seeks truth to avoid loss of utility. The Wald Bayesian clearly is looking for a recipe for action, and the de Finetti Bayesian seeks at least a recipe for gambling.

Comment author: timtyler 02 September 2010 07:43:33PM *  1 point [-]

A truth seeker! Truth seeking is certainly pretty bizarre and unbiological. Agents can normally be expected to concentrate on making babies - not on seeking holy grails.

Comment deleted 02 September 2010 01:41:32PM [-]
Comment author: timtyler 02 September 2010 08:28:51PM -2 points [-]

Hi! As brief feedback, I was trying to find out what "pure Bayesianism" was being used to mean - so this didn't help too much.

Comment author: Stuart_Armstrong 02 September 2010 10:00:13AM 1 point [-]

You have to also be able to deduce how much of the other agent's information is shared with you. If you and them got your posteriors by reading the same blogs and watching the same TV shows, then this is very different from the case when you reached the same conclusion from completely different channels.

Comment author: Mitchell_Porter 02 September 2010 10:07:10AM 3 points [-]

If you and them got your posteriors by reading the same blogs and watching the same TV shows

Somewhere in there is a joke about the consequences of a sedentary lifestyle.

Comment author: Vladimir_Nesov 01 September 2010 10:43:58PM 0 points [-]

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating.

The theorem doesn't involve any updating, so it's not a salient example in a discussion of updating, much less a proxy for it.

Should I have said Geanakoplos and Polemarchakis?

To answer literally, simply not mentioning the theorem would've done the trick, since there didn't seem to be a need for elaboration.