eli_sennesh comments on Self-Congratulatory Rationalism - Less Wrong
I interpret you as making the following criticisms:
1. People disagree with each other rather than using Aumann agreement, which proves we don't really believe we're rational
Aside from Wei's comment, I think we also need to keep track of what we're doing.
If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.
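As a minimal sketch of the kind of aggregation described above (the forecasts here are made up for illustration, not real Prediction Book data):

```python
# Pooling individual probability estimates for a single yes/no prediction
# ("Russia will invade Ukraine tomorrow") by taking the simple average.

def pool_predictions(probabilities):
    """Return the arithmetic mean of individual probability estimates."""
    return sum(probabilities) / len(probabilities)

# Hypothetical forecasts from five community members (probability of "yes").
forecasts = [0.10, 0.25, 0.05, 0.15, 0.20]
pooled = pool_predictions(forecasts)  # roughly 0.15
```

The point is just that the pooled number tends to beat any single member's judgment, which is why one could trust it over one's own estimate.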
But this doesn't preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:
We can both increase our understanding of the issue.
We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are hot versus cold at which parts of the year.
We may trace part of our disagreement back to differing moral values. If I say "capital punishment is good" and you say "capital punishment is bad", then it may be right for me to adjust a little in your favor since you may have evidence that many death row inmates are innocent, but I may also find that most of the force of your argument is just that you think killing people is never okay. Depending on how you feel about moral facts and moral uncertainty, we might not want to Aumann adjust this one. Nearly everything in politics depends on moral differences at least a little.
We may trace our disagreement back to complicated issues of worldview and categorization. I am starting to interpret most liberal-conservative issues as a tendency to draw Schelling fences in different places and then correctly reason with the categories you've got. I'm not sure if you can Aumann-adjust that away, but you definitely can't do it without first realizing it's there, which takes some discussion.
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
2. It is possible that high IQ people can be very wrong and even in a sense "stupidly" wrong, and we don't acknowledge this enough.
I totally agree this is possible.
The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."
So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga apparently being a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."
We end up in a kind of bravery debate situation here, where we have to decide whether it's worth warning people more against the first failure mode (at the risk it will increase the second), or against the second failure mode more (at the risk that it will increase the first).
And, well, studies pretty universally find everyone is overconfident of their own opinions. Even the Less Wrong survey finds people here to be really overconfident.
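To pin down what "overconfident" means here, a calibration check compares how confident people say they are against how often they're actually right. This is an illustrative sketch with made-up numbers, not the actual Less Wrong survey data:

```python
# A forecaster is overconfident when their stated confidence exceeds the
# fraction of those predictions that turn out correct.

def calibration(pairs):
    """pairs: list of (stated_confidence, was_correct) tuples.
    Returns (mean stated confidence, observed accuracy)."""
    confidences = [c for c, _ in pairs]
    outcomes = [ok for _, ok in pairs]
    return sum(confidences) / len(confidences), sum(outcomes) / len(outcomes)

# Hypothetical survey answers: stated confidence vs. whether correct.
answers = [(0.9, True), (0.9, False), (0.8, True), (0.8, False), (0.7, True)]
mean_conf, accuracy = calibration(answers)
# mean stated confidence is roughly 0.82 vs. observed accuracy 0.60:
# this hypothetical respondent is overconfident.
```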
So I think it's more important to warn people to be less confident they are right about things. The inevitable response is "What about creationism?!" to which the counter-response is "Okay, but creationists are stupid; be less confident when you disagree with people as smart as or smarter than you."
This gets misinterpreted as IQ fetishism, but I think it's more of a desperate search for something, anything to fetishize other than our own subjective feelings of certainty.
3. People are too willing to be charitable to other people's arguments.
This is another case where I think we're making the right tradeoff.
Once again there are two possible failure modes. First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they're saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.
Once again, everyone is overconfident. No one is underconfident. People tell me I am too charitable all the time, and yet I constantly find I am being not-charitable-enough, unfairly misinterpreting other people's points, and so missing or ignoring very strong arguments. Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".
As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising people to be less charitable to them seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.
In fact, it's hard for me to square your observation that we still have strong disagreements with your claim that we're too charitable. At least one side is getting things wrong. Shouldn't they be trying to pay a lot more attention to the other side's arguments?
I feel like utter terror is underrated as an epistemic strategy. Unless you are some kind of freakish mutant, you are overconfident about nearly everything and have managed to build up very very strong memetic immunity to arguments that are trying to correct this. Charity is the proper response to this, and I don't think anybody does it enough.
4. People use too much jargon.
Yeah, probably.
There are probably many cases in which the jargony terms have subtly different meaning or serve as reminders of a more formal theory and so are useful ("metacontrarian" versus "showoff", for example), but probably a lot of cases where people could drop the jargon without cost.
I think this is a more general problem of people being bad at writing - "utilize" vs. "use" and all that.
5. People are too self-congratulatory and should be humbler
What's weird is that throughout this post you keep saying people are too self-congratulatory, but to me it sounds more like you're arguing people are being too modest, and not self-congratulatory enough.
When people try to replace their own subjective analysis of who can easily be dismissed ("They don't agree with me; screw them") with something based more on IQ or credentials, they're being commendably modest ("As far as I can tell, this person is saying something dumb, but since I am often wrong, I should try to take the Outside View by looking at somewhat objective indicators of idea quality.")
And when people try to use the Principle of Charity, once again they are being commendably modest ("This person's arguments seem stupid to me, but maybe I am biased or a bad interpreter. Let me try again to make sure.")
I agree that it is an extraordinary claim to believe anyone is a perfect rationalist. That's why people need to keep these kinds of safeguards in place as saving throws against their inevitable failures.
Besides which, we're human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.