paulfchristiano comments on Self-Congratulatory Rationalism - Less Wrong

51 Post author: ChrisHallquist 01 March 2014 08:52AM




Comment author: Wei_Dai 01 March 2014 09:21:52AM 39 points

So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
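The iterative exchange of posteriors described above can be illustrated with a toy simulation in the style of Geanakoplos and Polemarchakis's "We can't disagree forever" protocol. This is a sketch under illustrative assumptions (the uniform prior, the particular partitions, and all function names below are mine, not from the post): two agents with known information partitions take turns announcing posteriors for an event, and each announcement lets both sides rule out the states in which the speaker would have announced something else.

```python
from fractions import Fraction

def posterior(event, cell, known):
    """P(event | cell ∩ known) under a uniform prior; None if the cell is ruled out."""
    live = cell & known
    if not live:
        return None
    return Fraction(len(event & live), len(live))

def cell_of(partition, state):
    """The partition element containing the given state."""
    return next(c for c in partition if state in c)

def exchange_posteriors(states, part1, part2, event, true_state, max_rounds=100):
    """Agents alternately announce posteriors until they agree.

    After each announcement, both agents update on what the announcement
    reveals: the true state must be one where the speaker would have
    announced exactly that number.
    """
    known = set(states)           # states consistent with all announcements so far
    parts = [part1, part2]
    history = []
    turn = 0
    for _ in range(max_rounds):
        speaker = parts[turn]
        q = posterior(event, cell_of(speaker, true_state), known)
        history.append(q)
        # Deduction step: keep only states where the speaker would say q.
        # Note this requires knowing the speaker's partition exactly.
        known = {w for w in known
                 if posterior(event, cell_of(speaker, w), known) == q}
        q1 = posterior(event, cell_of(part1, true_state), known)
        q2 = posterior(event, cell_of(part2, true_state), known)
        if q1 == q2:
            return q1, history
        turn = 1 - turn
    return None, history

# Nine equally likely states; the agents hold different partitions.
states = range(1, 10)
part1 = [frozenset({1, 2, 3}), frozenset({4, 5, 6}), frozenset({7, 8, 9})]
part2 = [frozenset({1, 2, 3, 4}), frozenset({5, 6, 7, 8}), frozenset({9})]
q, hist = exchange_posteriors(states, part1, part2, frozenset({3, 4}), true_state=1)
print(q, hist)  # 1/3 [Fraction(1, 3), Fraction(1, 2), Fraction(1, 3)]
```

In this run agent 1 announces 1/3 twice with agent 2's 1/2 in between, yet agent 2 ends up at 1/3: the mere repetition of "1/3" after hearing "1/2" carries information, because it rules out states where agent 1 would have revised. As the quoted passage notes, the deduction step only works because each agent knows the other's partition exactly; a single error in that chain and the process falls apart.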

Comment author: paulfchristiano 11 March 2014 01:15:03AM 4 points

There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

It seems like doubting each other's rationality is a perfectly fine explanation. I don't think most people around here are perfectly rational, nor that they think I'm perfectly rational, and definitely not that they all think that I think they are perfectly rational. So I doubt that they've updated enough on the fact that my views haven't converged towards theirs, and they may be right that I haven’t updated enough on the fact that their views haven’t converged towards mine.

In practice we live in a world where many pairs of people disagree, and you have to disagree with a lot of people. I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.

Comment author: Wei_Dai 11 March 2014 08:17:36AM 2 points

There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

The point I wanted to make was that AFAIK there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have, even if they trust each other to be as rational as humanly possible. The result by Scott Aaronson may be of theoretical interest (and maybe even of practical use by future AIs that can perform exact computations with the information in their minds), but it seems to have no relevance to humans faced with real-world disagreements (as opposed to toy examples).

I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.

I don't understand this. Can you expand?

Comment author: Lumifer 11 March 2014 03:38:26PM 2 points

there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have

Huh? There is currently no practical method for two humans to reliably reach agreement on some topic, full stop. Exchanging all evidence might help, but given that we are talking about humans and not straw Vulcans, it is still not a reliable method.