
Self-Congratulatory Rationalism

51 Post author: ChrisHallquist 01 March 2014 08:52AM

Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality whom other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.

What Disagreement Signifies

Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.

This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement as "meaning that the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue disagreements aren't honest in this sense.

I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown in the fact that they don't like being told they're wrong—the reason why Dale Carnegie says you can't win an argument.

On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.

And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.

The situation is similar to Moore's paradox in philosophy—the impossibility of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

We can find some room for humility in an analog of the preface paradox, the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
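To make the arithmetic concrete, here is a minimal sketch in Python with illustrative numbers of my own (a 200-claim book, each claim 99% likely to be true, treating the claims' truth as independent):

# Small individual uncertainties add up: even if every one of 200 claims
# is 99% likely to be true, an error somewhere is more likely than not.
claims, p_each = 200, 0.99
p_at_least_one_error = 1 - p_each ** claims
print(round(p_at_least_one_error, 2))  # ~0.87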

I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.

The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.

Intelligence and Rationality

Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.

When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. While Yvain notes that meta-contrarianism often isn't a good thing, on re-reading it I noticed what seems like an important oversight:

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.

Robin Hanson has argued that this leads to biases in academia:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the problem is that in philosophy, philosophers never agree on whether a philosopher has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.

Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.

The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

The Principle of Charity

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable reading is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments, the whole reason we demand other people tell us their reasons for their opinions in the first place is we fear their reasons might be bad ones.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid; otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

Beware Weirdness for Weirdness' Sake

There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't find by whom—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.

Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.

This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.

The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.

A More Humble Rationalism?

I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.

Comments (366)

Comment author: Wei_Dai 01 March 2014 09:21:52AM *  39 points [-]

So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.
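To illustrate the kind of deduction this involves, here is a minimal sketch in the style of the Geanakoplos-Polemarchakis "we can't disagree forever" process (a toy state space and made-up partitions of my own, not code from the linked post): two agents with a common prior take turns announcing posteriors, and each announcement lets everyone rule out the worlds in which that announcement would not have been made.

from fractions import Fraction

states = range(9)                                  # toy state space
prior = {w: Fraction(1, 9) for w in states}        # common uniform prior
event = {0, 1, 4, 5, 7}                            # the proposition in dispute

# Information partitions: which states each agent can tell apart.
alice_cells = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
bob_cells = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]

def cell(cells, w):
    return next(c for c in cells if w in c)

def post(possible):
    # Posterior probability of `event` given a set of still-possible states.
    total = sum(prior[w] for w in possible)
    return sum(prior[w] for w in possible if w in event) / total

def announce(cells, w, public):
    # What an agent with partition `cells` says in world w, given the
    # publicly known set of possible states.
    return post(cell(cells, w) & public)

def dialogue(true_state, max_rounds=10):
    public = set(states)                           # common knowledge so far
    for _ in range(max_rounds):
        a = announce(alice_cells, true_state, public)
        # Hearers rule out every world where Alice would have said otherwise.
        public = {w for w in public if announce(alice_cells, w, public) == a}
        b = announce(bob_cells, true_state, public)
        public = {w for w in public if announce(bob_cells, w, public) == b}
        print("Alice:", a, " Bob:", b)
        if a == b:
            break

dialogue(true_state=4)   # Alice says 2/3, Bob says 1; next round both say 1

Even in this tiny example, agreement is reached only because each side can work out exactly which partition cells are consistent with the other's announcement, which is the fragile inference step described above.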

Comment author: JWP 01 March 2014 04:20:23PM 6 points [-]

when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.

Yes. There are reasons to ask for evidence that have nothing to do with disrespect.

  • Even assuming that all parties are perfectly rational and that any disagreement must stem from differing information, it is not always obvious which party has better relevant information. Sharing evidence can clarify whether you know something that I don't, or vice versa.

  • Information is a good thing; it refines one's model of the world. Even if you are correct and I am wrong, asking for evidence has the potential to add your information to my model of the world. This is preferable to just taking your word for the conclusion, because that information may well be relevant to more decisions than the topic at hand.

Comment author: paulfchristiano 11 March 2014 01:15:03AM 4 points [-]

There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

It seems like doubting each other's rationality is a perfectly fine explanation. I don't think most people around here are perfectly rational, nor that they think I'm perfectly rational, and definitely not that they all think that I think they are perfectly rational. So I doubt that they've updated enough on the fact that my views haven't converged towards theirs, and they may be right that I haven’t updated enough on the fact that their views haven’t converged towards mine.

In practice we live in a world where many pairs of people disagree, and you have to disagree with a lot of people. I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.

Comment author: Wei_Dai 11 March 2014 08:17:36AM 2 points [-]

There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

The point I wanted to make was that AFAIK there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have, even if they trust each other to be as rational as humanly possible. The result by Scott Aaronson may be of theoretical interest (and maybe even of practical use by future AIs that can perform exact computations with the information in their minds), but it seems to have no relevance to humans faced with real-world disagreements (as opposed to toy examples).

I don’t think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It’s just a really hard condition.

I don't understand this. Can you expand?

Comment author: Lumifer 11 March 2014 03:38:26PM 2 points [-]

there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have

Huh? There is currently no practical method for two humans to reliably reach agreement on some topic, full stop. Exchanging all evidence might help, but given that we are talking about humans and not straw Vulcans, it is still not a reliable method.

Comment author: ChrisHallquist 02 March 2014 10:55:08PM 0 points [-]

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

I won't try to comment on the formal argument (my understanding of that literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn't need to deduce exactly what the other has observed; they just need to make inferences along the lines of, "wow, she wasn't swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence." At least that's the inference you would make if you both knew you trust each other's rationality. More realistically, of course, the correct inference is usually "she wasn't swayed by me telling her my opinion, she doesn't just trust me to be rational."

Consider what would have to happen for two rationalists who knowingly trust each other's rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob's evidence, and Bob must think the same about hearing Alice's evidence. That seems to suggest they both must think they have better, more relevant evidence to the question at hand. And it might be perfectly reasonable for them to think that at first.
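To spell out the conservation-of-expected-evidence step, here is a minimal numerical sketch (the probabilities are made up purely for illustration): before learning whether Bob's evidence E is present, Alice's probability-weighted average of her possible posteriors has to equal her current estimate.

# Alice's expected posterior, averaged over whether Bob's evidence E
# turns out to be present or absent, equals her prior.
p_H = 0.7                      # Alice's current estimate of the hypothesis
p_E_given_H = 0.9              # how likely Bob has evidence E if H is true
p_E_given_not_H = 0.2          # ...and if H is false
p_E = p_H * p_E_given_H + (1 - p_H) * p_E_given_not_H
post_if_E = p_H * p_E_given_H / p_E
post_if_not_E = p_H * (1 - p_E_given_H) / (1 - p_E)
expected_post = p_E * post_if_E + (1 - p_E) * post_if_not_E
print(round(expected_post, 6))   # 0.7: no expected movement either way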

But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he's better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other's rationality, Alice will have to think, "I thought I was better informed than Bob about this, but it looks like Bob thinks he's the one who's better informed, so maybe I'm wrong about being better informed." And Bob will have to have the parallel thought. Eventually, they should converge.

Comment author: Eugine_Nier 02 March 2014 11:36:00PM 2 points [-]

I won't try to comment on the formal argument (my understanding of that literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong.

Wei Dai's description is correct; see here for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
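For instance (a made-up odds-form illustration of my own, not the example behind the link): two agents share an even prior, and each privately holds an independent piece of evidence with likelihood ratio 3. Alone, each ends up at 0.75, but pooling the evidence puts both at 0.9, outside the range spanned by their initial estimates.

# Two independent pieces of evidence, each with likelihood ratio 3 for H,
# starting from even prior odds.
def odds_to_p(odds):
    return odds / (1 + odds)

prior_odds = 1.0                        # P(H) = 0.5
alice_alone = odds_to_p(prior_odds * 3) # 0.75
bob_alone = odds_to_p(prior_odds * 3)   # 0.75
pooled = odds_to_p(prior_odds * 3 * 3)  # 0.9, higher than either alone
print(alice_alone, bob_alone, pooled)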

Comment author: Will_Newsome 04 April 2014 04:47:07AM 1 point [-]

And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.

Wonder if a list of such things can be constructed. Algorithmic information theory is an example where Eliezer drew the wrong implications from the math and unfortunately much of LessWrong inherited that. Group selection (multi-level selection) might be another example, but less clear cut, as that requires computational modeling and not just interpretation of mathematics. I'm sure there are more and better examples.

Comment author: RobinZ 23 April 2014 03:39:05PM *  0 points [-]

In other words, when I say "what's the evidence for that?", it's not that I don't trust your rationality (although of course I don't trust your rationality either), but I just can't deduce what evidence you must have observed from your probability declaration alone even if you were fully rational.

The argument can even be made more general than that: under many circumstances, it is cheaper for us to discuss the evidence we have than it is for us to try to deduce it from our respective probability estimates.

Comment author: PeterDonis 02 March 2014 02:48:04AM 0 points [-]

(although of course I don't trust your rationality either)

I'm not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other's rationality.

Comment author: ChrisHallquist 02 March 2014 10:35:33PM 1 point [-]

Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.

Comment author: PeterDonis 03 March 2014 04:46:13PM *  0 points [-]

Is that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?

Comment author: Gunnar_Zarncke 01 March 2014 09:54:14PM -1 points [-]

Yes. But it entirely depends on how the request for supportive references is phrased.

Good:

Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?

Bad:

That argument makes no sense. What references do you have to support such a ridiculous claim?

The neutral

What's the evidence for that?

leaves the interpretation of the attitude to the reader/addressee and is bound to be misinterpreted (people misinterpreting tone or meaning of email).

Comment author: ChrisHallquist 02 March 2014 11:02:42PM 0 points [-]

Saying

Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?

sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).

Comment author: shware 01 March 2014 08:16:41PM *  23 points [-]

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

SlateStarCodex

Comment author: 7EE1D988 01 March 2014 10:58:35AM 11 points [-]

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses.

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Comment author: alicey 01 March 2014 04:28:32PM *  4 points [-]

i tend to express ideas tersely, which seems to count as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)

Comment author: shokwave 03 March 2014 05:16:35AM 5 points [-]

so they round me off to the nearest cliche

I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.

For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".

Comment author: RobinZ 23 April 2014 03:34:23PM 0 points [-]

I like it - do you know if it works in face-to-face conversations?

Comment author: TheOtherDave 01 March 2014 04:40:02PM 2 points [-]

Well, you describe the problem as terseness.
If that's true, it suggests that one set of improvements might involve explaining your ideas more fully and providing more of your reasons for considering those ideas true and relevant and important.

Have you tried that?
If so, what has the result been?

Comment author: alicey 01 March 2014 05:58:28PM 0 points [-]

"well, don't be terse" fails on grounds of "brevity is a virtue" (unless i am interacting with people i am trying to manipulate)

Comment author: TheOtherDave 01 March 2014 09:30:38PM 2 points [-]

I understand this to mean that the only value you see to non-brevity is its higher success at manipulation.

Is that in fact what you meant?

Comment author: alicey 14 March 2014 11:49:31PM *  0 points [-]

basically? people i can't speak plainly with are generally people who i need to manipulate to get what i value

Comment author: elharo 01 March 2014 07:31:05PM 1 point [-]

In other words, you prefer brevity to clarity and being understood? Something's a little skewed here.

It sounds like you and TheOtherDave have both identified the problem. Assuming you know what the problem is, why not fix it?

It may be that you are incorrect about the cause of the problem, but it's easy enough to test your hypothesis. The cost is low and the value of the information gained would be high. Either you're right and brevity is your problem, in which case you should be more verbose when you wish to be understood. Or you're wrong and added verbosity would not make people less inclined to "round you off to the nearest cliche", in which case you could look for other changes to your writing that would help readers understand you better.

Comment author: philh 02 March 2014 01:07:48AM 6 points [-]

Well, I think that "be more verbose" is a little like "sell nonapples". A brief post can be expanded in many different directions, and it might not be obvious which directions would be helpful and which would be boring.

Comment author: jamesf 02 March 2014 03:27:05AM *  0 points [-]

What does brevity offer you that makes it worthwhile, even when it impedes communication?

Predicting how communication will fail is generally Really Hard, but it's a good opportunity to refine your models of specific people and groups of people.

Comment author: alicey 14 March 2014 11:29:40PM 0 points [-]

improving signal to noise, holding the signal constant, is brevity

when brevity impedes communication, but only with a subset of people, then the reduced signal is because they're not good at understanding brief things, so it is worth not being brief with them, but it's not fun

Comment author: ThrustVectoring 02 March 2014 10:57:28AM 1 point [-]

I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.

Comment author: alicey 14 March 2014 11:42:11PM 0 points [-]

revisiting this, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

Comment author: RobinZ 23 April 2014 04:13:27PM 1 point [-]

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

This understates the case, even. At different times, an individual can be more or less prone to haste, laziness, or any of several possible sources of error, and at times, you yourself can commit any of these errors. I think the greatest value of a well-formulated principle of charity is that it leads to a general trend of "failure of communication -> correction of failure of communication -> valuable communication" instead of "failure of communication -> termination of communication".

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Actually, there's another point you could make along the lines of Jay Smooth's advice about racist remarks, particularly the part starting at 1:23, when you are discussing something in 'public' (e.g. anywhere on the Internet). If I think my opposite number is making bad arguments (e.g. when she is proposing an a priori proof of the existence of a god), I can think of few more convincing avenues to demonstrate to all the spectators that she's full of it than by giving her every possible opportunity to reveal that her argument is not wrong.

Regardless of what benefit you are balancing against a cost, though, a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution. Saying "I don't care what you think" will burn bridges with many non-LessWrongian folk; saying, "This argument seems like a huge time sink" is much less likely to.

Comment author: Lumifer 23 April 2014 04:38:24PM 2 points [-]

a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution.

So if I believe that someone is stupid, mindkilled, etc. and is not capable (at least at the moment) of contributing anything valuable, does this principle emphasize that I should not believe that, or that I should not tell that to this someone?

Comment author: Vaniver 23 April 2014 06:36:54PM 2 points [-]

It's not obvious to me that's the right distinction to make, but I do think that the principle of charity does actually result in a map shift relative to the default. That is, an epistemic principle of charity is a correction like one would make with the fundamental attribution error: "I have only seen one example of this person doing X, I should restrain my natural tendency to overestimate the resulting update I should make."

That is, if you have not used the principle of charity in reaching the belief that someone else is stupid or mindkilled, then you should not use that belief as reason to not apply the principle of charity.

Comment author: Lumifer 23 April 2014 07:02:19PM 0 points [-]

the principle of charity does actually result in a map shift relative to the default.

What is the default? And is it everyone's default, or only the unenlightened ones', or whose?

This implies that the "default" map is wrong -- correct?

if you have not used the principle of charity in reaching the belief

I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard -- that I should apply it anyway even if I don't think it's needed?

An attribution error is an attribution error -- if you recognize it you should fix it, and not apply global corrections regardless.

Comment author: Vaniver 23 April 2014 08:44:40PM *  3 points [-]

This implies that the "default" map is wrong -- correct?

I am pretty sure that most humans are uncharitable in interpreting the skills, motives, and understanding of someone they see as a debate opponent, yes. This observation is basically the complement of the principle of charity- the PoC exists because "most people are too unkind here; you should be kinder to try to correct," and if you have somehow hit the correct level of kindness, then no further change is necessary.

I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard

I think that the principle of charity is like other biases.

that I should apply it anyway even if I don't think it's needed?

This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"? This calls to mind the statistical prediction rule results, where people would come up with all sorts of stories about why their impression was more accurate than linear fits to the accumulated data, but of course those were precisely the times when they should have silenced their inner argument and gone with the more accurate rule. The point of these sorts of things is that you take them seriously, even when you generate rationalizations for why you shouldn't take them seriously!

(There are, of course, times when the rules do not apply, and not every argument against a counterbiasing technique is a rationalization. But you should be doubly suspicious against such arguments.)

Comment author: Lumifer 23 April 2014 08:57:30PM 0 points [-]

This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"?

It's weird to me that the question is weird to you X-/

You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.

If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.

Comment author: Vaniver 23 April 2014 10:01:37PM *  2 points [-]

It's weird to me that the question is weird to you X-/

To me, a fundamental premise of the bias-correction project is "you are running on untrustworthy hardware." That is, biases are not just of academic interest, and not just ways that other people make mistakes, but known flaws that you personally should attend to with regards to your own mind.

There's more, but I think in order to explain that better I should jump to this first:

If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.

You can ascribe different parts of your cognitive system different levels of trust, and build a hierarchy out of them. To illustrate a simple example, I can model myself as having a 'motive-detection system,' which is normally rather accurate but loses accuracy when used on opponents. Then there's a higher-level system that is a 'bias-detection system' which detects how much accuracy is lost when I use my motive-detection system on opponents. Because this is hierarchical, I think it bottoms out in a finite number of steps; I can use my trusted 'statistical inference' system to verify the results from my 'bias-detection' system, which then informs how I use the results from my 'motive-detection system.'

Suppose I just had the motive-detection system, and learned of PoC. The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased. "All my opponents are malevolent or idiots, because I think they are." The right thing to do would be to construct the bias-detection system, and actively behave in such a way to generate more data to determine whether or not my motive-detection system is inaccurate, and if so, where and by how much. Only after a while of doing this can I begin to trust myself to know whether or not the PoC is needed, because by then I've developed a good sense of how unkind I become when considering my opponents.

If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by either severing the link between my belief in their evil stupidity and my actions when discussing with them, or by discarding that belief and seeing if the evidence causes it to regrow. I word it this way because one needs to move to the place of uncertainty, and then consider the hypotheses, rather than saying "Is my belief that my opponents are malevolent idiots correct? Well, let's consider all the pieces of evidence that come to mind right now: yes, they are evil and stupid! Myth confirmed."

Which brings us to here:

You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.

Your cognitive system has a rather large degree of control over the reality that you perceive; to a large extent, that is the point of having a cognitive system. Unless the 'usual way' of verifying the accuracy of your cognitive system takes that into account, which it does not do by default for most humans, then this will not remove most biases. For example, could you detect confirmation bias by checking whether more complete evaluations corroborate your initial perception? Not really- you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'

[Edit]: On rereading this comment, the primary conclusion I was going for- that PoC encompasses both procedural and epistemic shifts, which are deeply entwined with each other- is there but not as clear as I would like.

Comment author: Lumifer 24 April 2014 04:10:39PM *  0 points [-]

Before I get into the response, let me make a couple of clarifying points.

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument". I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Second, my evaluation of stupidity is based more on how a person argues rather than on what position he holds. To give an example, I know some smart people who have argued against evolution (not in the sense that it doesn't exist, but rather in the sense that the current evolutionary theory is not a good explanation for a bunch of observables). On the other hand, if someone comes in and goes "ha ha duh of course evolution is correct my textbook says so what u dumb?", well then...

"you are running on untrustworthy hardware."

I don't like this approach. Mainly this has to do with the fact that unrolling "untrustworthy" makes it very messy.

As you yourself point out, a mind is not a single entity. It is useful to treat it as a set or an ecology of different agents which have different capabilities, often different goals, and typically pull into different directions. Given this, who is doing the trusting or distrusting? And given the major differences between the agents, what does "trust" even mean?

I find this expression is usually used to mean that human mind is not a simple-enough logical calculating machine. My first response to this is duh! and the second one is that this is a good thing.

Consider an example. Alice, a hetero girl, meets Bob at a party. Bob looks fine, speaks the right words, etc. and Alice's conscious mind finds absolutely nothing wrong with the idea of dragging him into her bed. However her gut instincts scream at her to run away fast -- for no good reason that her consciousness can discern. Basically she has a really bad feeling about Bob for no articulable reason. Should she tell herself her hardware is untrustworthy and invite Bob overnight?

The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one? Does additional evidence support your initial reaction? If it does, you can probably trust your initial reactions more. If it does not, you can't and should adjust.

Yes, I know about anchoring and such. But again, at some point you have to trust yourself (or some modules of yourself) because if you can't there is just no firm ground to stand on at all.

If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by ... discarding that belief and seeing if the evidence causes it to regrow.

I don't see why. Just do the usual Bayesian updating on the evidence. If the weight of the accumulated evidence points out that they are not, well, update. Why do you have to discard your prior in order to do that?

you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'

Yep. Which is why the Sequences, the Kahneman & Tversky book, etc. are all very useful. But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

Comment author: Vaniver 24 April 2014 06:56:56PM 1 point [-]

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument".

I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students' answers, and instead be worried about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it's not obvious to me that you wanted to switch contexts- 7EE1D988 and RobinZ both look like they're discussing conversations or arguments- and you may want to be clearer in the future about context changes.)

I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).

Given this, who is doing the trusting or distrusting?

Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).

Should she tell herself her hardware is untrustworthy and invite Bob overnight?

From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition- perhaps she has social anxiety or paranoia she's trying to overcome, and a trusted (probably female) friend doesn't get the same threatening vibe from Bob.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one?

You don't directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?

As a more mathematical example, in the iterated prisoner's dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
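Here is a minimal simulation sketch of that claim (my own toy payoffs, noise level, and forgiveness rate, so the exact numbers are only illustrative):

import random

# Standard PD payoffs; each intended move is flipped with probability `noise`.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def forgiving_tft(opponent_history, forgive=0.2):
    if not opponent_history or opponent_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < forgive else 'D'   # sometimes let it go

def match(strategy, rounds=200, noise=0.05):
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a, b = strategy(hist_b), strategy(hist_a)
        if random.random() < noise:
            a = 'D' if a == 'C' else 'C'
        if random.random() < noise:
            b = 'D' if b == 'C' else 'C'
        hist_a.append(a)
        hist_b.append(b)
        total += PAYOFF[(a, b)] + PAYOFF[(b, a)]
    return total / (2 * rounds)          # average payoff per player per round

def average_over(strategy, trials=500):
    return sum(match(strategy) for _ in range(trials)) / trials

random.seed(0)
print("TFT vs TFT:            ", round(average_over(tit_for_tat), 2))
print("Forgiving vs forgiving:", round(average_over(forgiving_tft), 2))
# Plain TFT's average drifts well below the mutual-cooperation payoff of 3,
# because noise sets off long retaliation echoes; the forgiving pair breaks
# those echoes and stays much closer to 3.

The forgiveness here plays the same role the principle of charity is being assigned above: it spends a little on occasional exploitation to avoid locking into mutual punishment over what was only noise.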

I don't see why.

This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people's motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe "If I were to greet Mallory, he would snub me," and thus in order to avoid the status hit I don't say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

(For the PoC specifically, the hypothetical is generally "if I put extra effort into communicating with Mallory, that effort would be wasted," where the PoC argues that you've probably overestimated the probability that you'll waste effort. This is why RobinZ argues for disengaging with "I don't have the time for this" rather than "I don't think you're worth my time.")

But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

I think that "don't be an idiot" is far too terse a package. It's like boiling down moral instruction to "be good," without any hint that "good" is actually a tremendously complicated concept, and being it a difficult endeavor which is aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just "don't be an idiot" or would you point them to a list of biases and counterbiasing principles?

Comment author: TheAncientGeek 23 April 2014 09:19:00PM 0 points [-]

What is the reality about whether you interpreted someone correctly? When do you hit the bedrock of Real Meaning?

Comment author: RobinZ 23 April 2014 07:01:47PM *  1 point [-]

I see that my conception of the "principle of charity" is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind:

The principle of charity isn't a propositional thesis, it's a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.

My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker's reasoning processes is more than you are willing to spend. In such a case, the principle of charity implies two things:

  • You should consider the nuttiness of the speaker as being an open question with a large prior probability, akin to your belief prior to lifting a dice cup that you have not rolled double-sixes, rather than a closed question with a large posterior probability, akin to your belief that the modern evolutionary synthesis is largely correct.
  • You should withdraw from the conversation in such a fashion as to emphasize that you are in general willing to put forth the effort to understand what they are saying, but that the moment is not opportune.

Minor tyop fix T1503-4.

Comment author: Lumifer 23 April 2014 07:14:43PM 2 points [-]

the cost of false positives is high relative to the cost of reducing false positives

I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.

The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.

You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn't seem to me that the fashion of withdrawing from a conversation will help me "reliably distinguish" anything.

As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either.

In fact, I don't see why there should be a particular exception here ("a procedural rule") to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a "closed question" or a "large posterior" -- it all depends on the particulars.

Comment author: TheAncientGeek 23 April 2014 09:44:05PM 4 points [-]

I'll say it again: POC doesn't mean "believe everyone is sane and intelligent", it means "treat everyone's comments as though they were made by a sane, intelligent person".

Comment author: Lumifer 24 April 2014 03:22:54PM 1 point [-]

it means "treat everyone's comments as though they were made by a sane, intelligent person".

I don't like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.

Comment author: TheAncientGeek 24 April 2014 05:09:36PM 2 points [-]

How do you know how many mistakes you are or aren't making?

Comment author: RobinZ 23 April 2014 09:33:46PM 3 points [-]

The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much of the following lacks citations, but:

  • Many people raised to believe that evolution is false because it contradicts their religion change their minds in their first college biology class. (I can't attest to this from personal experience - this is something I've seen frequently reported or alluded to via blogs like Slacktivist.)
  • An intelligent, well-meaning, LessWrongian fellow was (hopefully-)almost driven out of my local Less Wrong meetup in no small part because a number of prominent members accused him of (essentially) being a troll. In the course of a few hours conversation between myself and a couple others focused on figuring out what he actually meant, I was able to determine that (a) he misunderstood the subject of conversation he had entered, (b) he was unskilled at elaborating in a way that clarified his meaning when confusion occurred, and (c) he was an intelligent, well-meaning, LessWrongian fellow whose participation in future meetups I would value.
  • I am unable to provide the details of this particular example (it was relayed to me in confidence), but an acquaintance of mine was a member of a group which was attempting to resolve an elementary technical challenge - roughly the equivalent of setting up a target-shooting range with a safe backstop in terms of training required. A proposal was made that was obviously unsatisfactory - the equivalent of proposing that the targets be laid on the ground and everyone shoot straight down from a second-story window - and my acquaintance's objection to it on common-sense grounds was treated with a response equivalent to, "You're Japanese, what would you know about firearms?" (In point of fact, while no metaphorical gunsmith, my acquaintance's knowledge was easily sufficient to teach a Boy Scout merit badge class.)
  • In my first experience on what was then known as the Internet Infidels Discussion Board, my propensity to ask "what do you mean by x" sufficed to transform a frustrated, impatient discussant into a cheerful, enthusiastic one - and simultaneously demonstrate that said discussant's arguments were worthless in a way which made it easy to close the argument.

In other words, I do not often see the case in which performing the tests implied by the principle of charity - e.g. "are you saying [paraphrase]?" - is wasteful, and I frequently see cases where failing to do so has been.

Comment author: Lumifer 24 April 2014 03:19:32PM *  1 point [-]

What you are talking about doesn't fall under the principle of charity (in my interpretation of it). It falls under the very general rubric of "don't be stupid yourself".

In particular, considering that the speaker expresses his view within a framework which is different from your default framework is not an application of the principle of charity -- it's an application of the principle "don't be stupid, of course people talk within their frameworks, not within your framework".

Comment author: RobinZ 24 April 2014 04:48:53PM *  2 points [-]

I might be arguing for something different than your principle of charity. What I am arguing for - and I realize now that I haven't actually explained a procedure, just motivations for one - is along the following lines:

When somebody says something prima facie wrong, there are several possibilities, both regarding their intended meaning:

  • They may have meant exactly what you heard.
  • They may have meant something else, but worded it poorly.
  • They may have been engaging in some rhetorical maneuver or joke.
  • They may have been deceiving themselves.
  • They may have been intentionally trolling.
  • They may have been lying.

...and your ability to infer such:

  • Their remark may resemble some reasonable assertion, worded badly.
  • Their remark may be explicable as ironic or joking in some sense.
  • Their remark may conform to some plausible bias of reasoning.
  • Their remark may seem like a lie they would find useful.*
  • Their remark may represent an attempt to irritate you for their own pleasure.*
  • Their remark may simply be stupid.
  • Their remark may allow more than one of the above interpretations.

What my interpretation of the principle of charity suggests as an elementary course of action in this situation is, with an appropriate degree of polite confusion, to ask for clarification or elaboration, and to accompany this request with paraphrases of the most likely interpretations you can identify of their remarks excluding the ones I marked with asterisks.

Depending on their actual intent, this has a good chance of making them:

  • Elucidate their reasoning behind the unbelievable remark (or admit to being unable to do so);
  • Correct their misstatement (or your misinterpretation - the difference is irrelevant);
  • Admit to their failed humor;
  • Admit to their being unable to support their assertion, back off from it, or sputter incoherently;
  • Grow impatient at your failure to rise to their goading and give up; or
  • Back off from (or admit to, or be proven guilty of) their now-unsupportable deception.

In the first three or four cases, you have managed to advance the conversation with a well-meaning discussant without insult; in the latter two or three, you have thwarted the goals of an ill-intentioned one - especially, in the last case, because you haven't allowed them the option of distracting everyone from your refutations by claiming you insulted them. (Even if they do so claim, it will be obvious that they have no just cause to feel insulted.)

I say this falls under the principle of charity because it involves (a) granting them, at least rhetorically, the best possible motives, and (b) giving them enough of your time and attention to seek engagement with their meaning, not just a lazy gloss of their words.

Minor formatting edit.

Comment author: RobinZ 10 June 2014 03:27:47PM 0 points [-]

Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel - this is MichaelBishop's summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel's essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.

Comment author: RobinZ 24 April 2014 03:00:27PM 2 points [-]

A small addendum, that I realized I omitted from my prior arguments in favor of the principle of charity:

Because I make a habit of asking for clarification when I don't understand, offering clarification when not understood, and preferring "I don't agree with your assertion" to "you are being stupid", people are happier to talk to me. Among the costs of always responding to what people say instead of your best understanding of what they mean - especially if you are quick to dismiss people when their statements are flawed - is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made claims you will demand support for without support. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up.

Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.

Comment author: Lumifer 24 April 2014 03:37:21PM 1 point [-]

responding to what people say instead of your best understanding of what they mean

Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity -- it sounds like SOP and "just don't be an idiot yourself" to me.

Comment author: RobinZ 23 April 2014 07:59:58PM 1 point [-]

I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.

You're right: it's not self-evident. I'll go ahead and post a followup comment discussing what sort of evidential support the assertion has.

As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either.

My usage of the terms "prior" and "posterior" was obviously mistaken. What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability. I have high confidence that an inexpensive test - lifting the dice cup - will change my beliefs about the value of the die roll by many orders of magnitude, and low confidence that any comparable test exists to affect my confidence regarding the scientific theory.

Comment author: Lumifer 23 April 2014 08:30:33PM 2 points [-]

What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability.

I think you are talking about what in local parlance is called a "weak prior" vs a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.

In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior -- it will take much convincing evidence to persuade you that the theory is not correct after all.

Of course, the posterior of a previous update becomes the prior of the next update.
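For concreteness, here is a minimal numeric sketch of that contrast, using Bayes' rule in odds form (the likelihood ratios below are made-up illustrations, not estimates of anything real):

    def update(prior, likelihood_ratio):
        """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    # Dice cup: P(double sixes) = 1/36 before looking, and lifting the cup is
    # near-decisive evidence (assume a 1000:1 likelihood ratio).
    print(update(1 / 36, 1000))   # ~0.97: the observation swamps the prior

    # Modern evolutionary synthesis: confidence of 0.999, and one surprising-looking
    # result assumed to be only 3x likelier if the theory were wrong.
    print(update(0.999, 1 / 3))   # ~0.997: the prior barely moves

In this simple two-hypothesis form, the "weak" prior is the unextreme one that a single decisive observation overwhelms, and the "strong" prior is the extreme one that ordinary evidence barely moves.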

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.

And I don't see why this should be so.

Comment author: RobinZ 23 April 2014 09:25:55PM 2 points [-]

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.

Oh, dear - that's not what I meant at all. I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It's entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one - there's someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers's "hard problem of consciousness", and it took less than ten posts to establish pretty confidently that the same refutations would apply - but as the history of DIPS (defense-independent pitching statistics) shows, it's entirely possible for an idea to be as correct as "the earth is a sphere, not a plane" and nevertheless be taken as prima facie absurd.

(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as "fixing DIPS" than as "showing that DIPS was completely wrongheaded".)

Comment author: Lumifer 24 April 2014 03:15:01PM 1 point [-]

I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent.

Oh, I agree with that.

What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.

As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.

Comment author: RobinZ 24 April 2014 03:43:54PM 2 points [-]

What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid.

...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is - I am glad to say - so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.

Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating "never assume that someone is arguing in bad faith" and "never assert that someone is arguing in bad faith". (The author also posted a sequel, if you enjoy the first.)

As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.

I'm afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.

Comment author: V_V 23 April 2014 08:47:45PM 1 point [-]

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being. And I don't see why this should be so.

People tend to update too much in these circumstances: Fundamental attribution error

Comment author: Lumifer 23 April 2014 09:09:32PM *  0 points [-]

The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.

If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has -- it's how she does it. And inability e.g. to follow basic logic is hard to attribute to external factors.

Comment author: TheAncientGeek 23 April 2014 09:33:48PM *  2 points [-]

This discussion has got badly derailed. You are taking it that there is some robust fact about someone's lack of rationality or intelligence which may or may not be explained by internal or external factors.

The point is that you cannot make a reliable judgement about someone's rationality or intelligence unless you have understood what they are saying, and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to "stupid" when all attempts have failed, but not before.

Comment author: TheAncientGeek 23 April 2014 05:34:13PM *  1 point [-]

Depends. Have you tried charitable interpretations of what they are saying that don't make them stupid, or are you going with your initial reaction?

Comment author: Viliam_Bur 01 March 2014 07:53:51PM *  10 points [-]

there seem to be a lot of people in the LessWrong community who imagine themselves to be (...) paragons of rationality who other people should accept as such.

Uhm. My first reaction is to ask "who specifically?", because I don't have this impression. (At least I think most people here aren't like this, and if a few happen to be, I probably did not notice the relevant comments.) On the other hand, if I imagine myself in your place, even if I had specific people in mind, I probably wouldn't want to name them, to avoid making it a personal accusation instead of an observation of trends. Now I don't know what to do.

Could someone else perhaps give me a few examples of comments (preferably by different people) where LW members imagine themselves paragons of rationality and ask other people to accept them as such? (If I happen to be such an example myself, that information would be even more valuable to me. Feel free to send me a private message if you hesitate to write it publicly, but I don't mind if you do. Crocker's rules, Litany of Tarski, etc.)

I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects.

I do relate to this one, even if I don't know if I have expressed this sentiment on LW. I believe I am able to listen to opinions that are unpleasant or that I disagree with, without freaking out, much more than an average person, although not literally always. It's stronger in real life than online, because in real life I take time to think, while on the internet I am more in "respond and move on (there are so many other pages to read)" mode. Some other people have told me they noticed this about me, so it's not just my own imagination.

Okay, you probably didn't mean me with this one... I just wanted to say I don't see this as a bad thing per se, assuming the person is telling the truth. And I also believe that LW has a higher ratio of people for whom this is true, compared with the average population, although not everyone here is like that.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality.

I don't consider everyone here rational, and it's likely some people don't consider me rational. But there are also other reasons for frequent disagreement.

Aspiring rationalists are sometimes encouraged to make bets, because a bet is a tax on bullshit, and paying a lot of tax may show you your irrationality and encourage you to get rid of it. Even if it's not about money, we need to calibrate ourselves. Some of us use PredictionBook; CFAR has developed the calibration game.

Analogously, if I have an opinion, I say it in a comment, because that's similar to making a bet. If I am wrong, I will likely get feedback, which is an opportunity to learn. I trust other people here intellectually to disagree with me only if they have a good reason to disagree, and I also trust them emotionally that if I happen to write something stupid, they will just correct me and move on (instead of e.g. reminding me of my mistake for the rest of my life). Because of this, I post my opinions here more often, and voice them more strongly if I feel it's deserved. Thus, more opportunity for disagreement.

On a different website I might keep quiet instead or speak very diplomatically, which would give less opportunity for disagreement; but it wouldn't mean I have a higher estimate of that community's rationality; quite the opposite. If disagreement is disrespect, then tiptoeing around the mere possibility of disagreement means considering the other person insane. Which is how I learned to behave outside of LW; and I am still not near the level of disdain that a Carnegie-like behavior would require.

I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect).

We probably should have some "easy mode" for beginners. But we shouldn't turn the whole website into "easy mode". Well, this probably deserves a separate discussion.

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

On a few occasions I made fun of Mensa on LW, and I don't remember anyone contradicting me, so I thought we had a consensus that high IQ does not imply high rationality (although some level may be necessary). Stanovich wrote a book about it, and Kaj Sotala reviewed it here.

You make a few very good points in the article. Confusing intelligence with rationality is bad; selective charity is unfair; asking someone to treat me as a perfect rationalist is silly; it's good to apply healthy cynicism also to your own group; and we should put more emphasis on being aspiring rationalists. It just seems to me that you perceive the LW community as less rational than I do. Maybe we just have different people in mind when we think about the community. (By the way, I am curious if there is a correlation between people who complain that you don't believe in their sanity, and people who are reluctant to comment on LW because of the criticism.)

Comment author: moridinamael 08 March 2014 06:42:42PM 9 points [-]

I've recently had to go on (for a few months) some medication which had the side effect of significant cognitive impairment. Let's hand-wavingly equate this side effect to shaving thirty points off my IQ. That's what it felt like from the inside.

While on the medication, I constantly felt the need to idiot-proof my own life, to protect myself from the mistakes that my future self would certainly make. My ability to just trust myself to make good decisions in the future was removed.

This had far more ramifications than I can go into in a brief comment, but I can generalize by saying that I was forced to plan more carefully, to slow down, to double-check my work. Unable to think as deeply into problems in a freewheeling cognitive fashion, I was forced to break them down carefully on paper and understand that anything I didn't write down would be forgotten.

Basically what I'm trying to say is that being stupider probably forced me to be more rational.

When I went off the medication, I felt my old self waking up again, the size of concepts I could manipulate growing until I could once again comprehend and work on programs I had written before starting the drugs in the first place. I could follow long chains of verbal argument and concoct my own. And ... I pretty much immediately went back to my old problem solving habits of relying on big leaps in insight. Which I don't really blame myself for, because that's sort of what brains are for.

I don't know what the balance is. I don't know how and when to rein in the self-defeating aspects of intelligence. I probably made fewer mistakes when I was dumber, but I also did fewer things, period.

Comment author: John_Maxwell_IV 15 March 2014 06:50:28AM 4 points [-]

What medication?

Comment author: eli_sennesh 02 March 2014 04:00:57PM *  9 points [-]

Ok, there's no way to say this without sounding like I'm signalling something, but here goes.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

"If you can't say something you are very confident is actually smart, don't say anything at all." This is, in fact, why I don't say very much, or say it in a lot of detail, much of the time. I have all kinds of thoughts about all kinds of things, but I've had to retract sincerely-held beliefs so many times I just no longer bother embarrassing myself by opening my big dumb mouth.

Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme.

In my opinion, it's actually terrible branding for a movement. "Rationality is systematized winning"; ok, great, what are we winning at? Rationality and goals are orthogonal to each other, after all, and at first glance, LW's goals can look like nothing more than an attempt to signal "I'm smarter than you" or even "I'm more of an emotionless Straw-Vulcan cyborg than you" to the rest of the world.

This is not a joke, I actually have a friend who virulently hates LW and resents his friends who get involved in it because he thinks we're a bunch of sociopathic Borg wannabes following a cult of personality. You might have an impulse right now to just call him an ignorant jerk and be done with it, but look, would you prefer the world in which you get to feel satisfied about having identified an ignorant jerk, or would you prefer the world in which he's actually persuaded about some rationalist ideas, makes some improvements to his life, maybe donates money to MIRI/CFAR, and so on? The latter, unfortunately, requires social engagement with a semi-hostile skeptic, which we all know is much harder than just calling him an asshole, taking our ball, and going home.

So anyway, what are we trying to do around here? It should be mentioned a bit more often on the website.

(At the very least, my strongest evidence that we're not a cult of personality is that we disagree amongst ourselves about everything. On the level of sociological health, this is an extremely good sign.)

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

While I do agree about the jargon issue, I think the contrarianism and the meta-contrarianism often make people feel they've arrived to A Rational Answer, at which point they stop thinking.

For instance, if Americans have always thought their political system is too partisan, has anyone in political science actually bothered to construct an objective measurement and collect time-series data? What does the time-series data actually say? Besides, once we strip off the tribal signalling, don't all those boringly mainstream ideologies actually have a few real points we could do with engaging with?

(Generally, LW is actually very good at engaging with those points, but we also simultaneously signal that we're adamantly refusing to engage in partisan politics. It's like playing an ideological Tsundere: "Baka! I'm only doing this because it's rational. It's not like I agree with you or anything! blush")

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid.

Ok, but then let me propose a counter-principle: Principle of Informative Calling-Out. I actively prefer to be told when I'm wrong and corrected. Unfortunately, once you ditch the principle of charity, the most common response to an incorrect statement often becomes, essentially, "Just how stupid are you!?", or other forms of low-information signalling about my interlocutor's intelligence and rationality compared to mine.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

You should be looking at this instrumentally. The question is not whether you think "mainstream philosophy" (the very phrase is suspect, since mainstream academic philosophy divides into a number of distinct schools, Analytic and Continental being the top two off the top of my head) is correct. The question is whether you think you will, at some point, have any use for interacting with mainstream philosophy and its practitioners. If they will be useful to you, it is worth learning their vocabulary and their modes of operation in order to, when necessary, enlist their aid, or win at their game.

Comment author: Yvain 02 March 2014 07:50:21AM *  34 points [-]

I interpret you as making the following criticisms:

1. People disagree with each other, rather than use Aumann agreement, which proves we don't really believe we're rational

Aside from Wei's comment, I think we also need to keep track of what we're doing.

If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.

But this doesn't preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:

  • We can both increase our understanding of the issue.

  • We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are hot versus cold at which parts of the year.

  • We may trace part of our disagreement back to differing moral values. If I say "capital punishment is good" and you say "capital punishment is bad", then it may be right for me to adjust a little in your favor since you may have evidence that many death row inmates are innocent, but I may also find that most of the force of your argument is just that you think killing people is never okay. Depending on how you feel about moral facts and moral uncertainty, we might not want to Aumann adjust this one. Nearly everything in politics depends on moral differences at least a little.

  • We may trace our disagreement back to complicated issues of worldview and categorization. I am starting to interpret most liberal-conservative issues as a tendency to draw Schelling fences in different places and then correctly reason with the categories you've got. I'm not sure if you can Aumann-adjust that away, but you definitely can't do it without first realizing it's there, which takes some discussion.

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

2. It is possible that high IQ people can be very wrong and even in a sense "stupidly" wrong, and we don't acknowledge this enough.

I totally agree this is possible.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

We end up in a kind of bravery debate situation here, where we have to decide whether it's worth warning people more against the first failure mode (at the risk it will increase the second), or against the second failure mode more (at the risk that it will increase the first).

And, well, studies pretty universally find everyone is overconfident of their own opinions. Even the Less Wrong survey finds people here to be really overconfident.

So I think it's more important to warn people to be less confident they are right about things. The inevitable response is "What about creationism?!" to which the counterresponse is "Okay, but creationists are stupid, be less confident when you disagree with people as smart or smarter than you."

This gets misinterpreted as IQ fetishism, but I think it's more of a desperate search for something, anything to fetishize other than our own subjective feelings of certainty.

3. People are too willing to be charitable to other people's arguments.

This is another case where I think we're making the right tradeoff.

Once again there are two possible failure modes. First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they're saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.

Once again, everyone is overconfident. No one is underconfident. People tell me I am too charitable all the time, and yet I constantly find I am being not-charitable-enough, unfairly misinterpreting other people's points, and so missing or ignoring very strong arguments. Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

In fact, it's hard for me to square your observation that we still have strong disagreements with your claim that we're too charitable. At least one side is getting things wrong. Shouldn't they be trying to pay a lot more attention to the other side's arguments?

I feel like utter terror is underrated as an epistemic strategy. Unless you are some kind of freakish mutant, you are overconfident about nearly everything and have managed to build up very very strong memetic immunity to arguments that are trying to correct this. Charity is the proper response to this, and I don't think anybody does it enough.

4. People use too much jargon.

Yeah, probably.

There are probably many cases in which the jargony terms have subtly different meaning or serve as reminders of a more formal theory and so are useful ("metacontrarian" versus "showoff", for example), but probably a lot of cases where people could drop the jargon without cost.

I think this is a more general problem of people being bad at writing - "utilize" vs. "use" and all that.

5. People are too self-congratulatory and should be humbler

What's weird is that when I read this post, you keep saying people are too self-congratulatory, but to me it sounds more like you're arguing people are being too modest, and not self-congratulatory enough.

When people try to replace their own subjective analysis of who can easily be dismissed ("They don't agree with me; screw them") with something based more on IQ or credentials, they're being commendably modest ("As far as I can tell, this person is saying something dumb, but since I am often wrong, I should try to take the Outside View by looking at somewhat objective indicators of idea quality.")

And when people try to use the Principle of Charity, once again they are being commendably modest ("This person's arguments seem stupid to me, but maybe I am biased or a bad interpreter. Let me try again to make sure.")

I agree that it is an extraordinary claim to believe anyone is a perfect rationalist. That's why people need to keep these kinds of safeguards in place as saving throws against their inevitable failures.

Comment author: eli_sennesh 02 March 2014 04:08:56PM 5 points [-]

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

Besides which, we're human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.

Comment author: elharo 02 March 2014 11:50:41AM *  3 points [-]

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational.

FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic, see Keith Stanovich's What Intelligence Tests Miss.

In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving ones goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.

For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc.

For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents' basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.

Comment author: Kaj_Sotala 11 March 2014 10:01:08AM 1 point [-]

Keith Stanovich's What Intelligence Tests Miss

LW discussion.

Comment author: ChrisHallquist 03 March 2014 12:42:28AM *  5 points [-]

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking "because I'm an <engineer / physicist / MD / mathematician>, I can see those evolutionary biologists are talking nonsense." Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician's fallacy.)

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

I've already said why I don't think high IQ is super-relevant to deciding who you should read charitably. Overall education also doesn't strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama's a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they're "supposed" to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I'd agree that you should be more inclined to assume they're not saying anything stupid about that field (though even that presumption is weakened if they're saying something that would be controversial among their peers).

As for "basic commitment to rationality," I'm not sure what you mean by that. I don't know how I'd turn it into a useful criterion, aside from defining it to mean people I'd trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It's quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense.

And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Comment author: Yvain 03 March 2014 04:22:06PM *  10 points [-]

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.

I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't agree to do so.

Even if they did, it might be consistent with their current actions: if there's a 20% chance of ems and 20% chance of foom (plus 60% chance of unpredictable future, cishuman future, or extinction) we would still need intellectuals and organizations planning specifically for each option, the same way I'm sure the Cold War Era US had different branches planning for a nuclear attack by USSR and a nonnuclear attack by USSR.

I will agree that there are some genuinely Aumann-incompatible disagreements on here, but I bet it's fewer than we think.

I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them.

So I want to agree with you, but there's this big and undeniable problem we have and I'm curious how you think we should solve it if not through something resembling IQ.

You agree people need to be more charitable, at least toward out-group members. And this would presumably involve taking people whom we are tempted to dismiss, and instead not dismissing them and studying them further. But we can't do this for everyone - most people who look like crackpots are crackpots. There are very likely people who look like crackpots but are actually very smart out there (the cryonicists seem to be one group we can both agree on) and we need a way to find them so we can pay more attention to them.

We can't use our subjective feeling of is-this-guy-a-crackpot-or-not, because that's what got us into this problem in the first place. Presumably we should use the Outside View. But it's not obvious what we should be Outside Viewing on. The two most obvious candidates are "IQ" and "rationality", which when applied tend to produce IQ fetishism and in-group favoritism (since until Stanovich actually produces his rationality quotient test and gives it to everybody, being in a self-identified rationalist community and probably having read the whole long set of sequences on rationality training is one of the few proxies for rationality we've got available).

I admit both of these proxies are terrible. But they seem to be the main thing keeping us from, on the one side, auto-rejecting all arguments that don't sound subjectively plausible to us at first glance, and on the other, having to deal with every stupid creationist and homeopath who wants to bloviate at us.

There seems to be something that we do do that's useful in this sphere. Like if someone with a site written in ALL CAPS and size 20 font claims that Alzheimer's is caused by a bacterium, I dismiss it without a second thought because we all know it's a neurodegenerative disease. But when a friend who has no medical training, but whom I know to be smart and reasonable, recently made this claim, I looked it up, and sure enough there's a small but respectable community of microbiologists and neuroscientists investigating whether Alzheimer's is triggered by an autoimmune response to some bacterium. It's still a long shot, but it's definitely not crackpottish. So somehow I seem to have some sort of ability to use the source of an implausible claim to determine whether I investigate it further, and I'm not sure how to describe the basis on which I make this decision beyond "IQ, rationality, and education".

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Well, empirically I did try to investigate natural law theology based on there being a sizeable community of smart people who thought it was valuable. I couldn't find anything of use in it, but I think it was a good decision to at least double-check.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

If you think people are too uncharitable in general, but also that we're selectively charitable to the in-group, is that equivalent to saying the real problem is that we're not charitable enough to the out-group? If so, what subsection of the out-group would you recommend we be more charitable towards? And if we're not supposed to select that subsection based on their intelligence, rationality, education, etc, how do we select them?

And if we're not supposed to be selective, how do we avoid spending all our time responding to total, obvious crackpots like creationists and Time Cube Guy?

On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense. And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Yeah, this seems like the point we're disagreeing on. Granted that all proxies will be at least mostly terrible, do you agree that we do need some characteristics that point us to people worth treating charitably? And since you don't like mine, which ones are you recommending?

Comment author: ChrisHallquist 03 March 2014 06:09:51PM *  1 point [-]

I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, actual heuristics I use to determine who's worth paying attention to are

  • What I know of an individual's track record of saying reasonable things.
  • Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
  • Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.

Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet).

It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).

Comment author: Yvain 03 March 2014 07:12:26PM 9 points [-]

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.

Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them. So when we say we need heuristics to find ideas to pay attention to, I'm assuming we've already started by assuming mainstream academia is always right, and we're looking for which challenges to them we should pay attention to. I agree that "challenges the academics themselves take seriously" is a good first step, but I'm not sure that would suffice to discover the critique of mainstream philosophy. And it's very little help at all in fields like politics.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out, and it also seems like people have a bad habit of being very sensitive to crackpot warning signs the opposing side displays and very obtuse to those their own side displays). But once again, these signs are woefully inadequate. Plantinga doesn't look a bit like a crackpot.

You point out that "Even though appearances can be misleading, they're usually not." I would agree, but suggest you extend this to IQ and rationality. We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

You are right that I rarely have the results of an IQ test (or Stanovich's rationality test) in front of me. So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

So I think it is likely that we both use a basket of heuristics that include education, academic status, estimation of intelligence, estimation of rationality, past track record, crackpot warning signs, and probably some others.

I'm not sure whether we place different emphases on those, or whether we're using about the same basket but still managing to come to different conclusions due to one or both of us being biased.

Comment author: TheAncientGeek 24 April 2014 09:55:41AM 3 points [-]

Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?

Comment author: ChrisHallquist 05 July 2014 11:41:11PM 1 point [-]

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

Comment author: Protagoras 06 July 2014 12:49:58AM 4 points [-]

Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).

Comment author: TheAncientGeek 06 July 2014 03:02:33PM 0 points [-]

If what philosophers specialise in is clarifying questions, they can be trusted to get the question right.

A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.

Comment author: Vaniver 24 April 2014 02:31:31PM *  0 points [-]

You might be interested in this article and this sequence (in particular, the first post of that sequence). "Academic philosophy sucks" is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.

Comment author: TheAncientGeek 24 April 2014 06:05:09PM *  0 points [-]

Read them, not generally impressed.

Comment author: ChrisHallquist 04 March 2014 02:08:18AM 0 points [-]

But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.

With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...

Examples?

We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."

Comment author: Yvain 07 March 2014 08:23:45PM 3 points [-]

I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.

But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Examples?

Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age."

So Marshall decided that since he couldn't get anyone to fund a study, he would study it on himself: he drank a culture of the bacteria and got really sick.

Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority

I think it is much, much less than the general public, but I don't think that has as much to do with IQ per se as with academic culture. And although I agree it's interesting that IQ isn't as strong a predictor of correct beliefs as one might expect, I am still very surprised that you don't seem to think it matters at all (or at least significantly). What if we switched gears? Agreeing that the fact that a contrarian theory is invented or held by high-IQ people is no guarantee of its success, can we agree that the fact that a contrarian theory is invented and mostly held by low-IQ people is a very strong strike against it?

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true."

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

Comment author: Jiro 08 March 2014 03:12:50AM 3 points [-]

The extent to which science rejected the ulcer bacterium theory has been exaggerated. (And that article also addresses some quotes from Marshall himself which don't exactly match up with the facts.)

Comment author: ChrisHallquist 08 March 2014 07:05:05PM 0 points [-]

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.

That leaves us with "proper logical form," about which you said:

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.

Comment author: torekp 06 March 2014 01:42:38AM 0 points [-]

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with.

Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer's bacterium hypothesis. I'd say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.

Comment author: Kawoomba 03 March 2014 04:32:57PM 1 point [-]

Are ethics supposed to be Aumann-agreeable?

If they were, uFAI would be a non-issue. (They are not.)

Comment author: TheAncientGeek 24 April 2014 09:45:09AM *  0 points [-]

Not being charitable to people isn't a problem, providing you don't mistake your lack of charity for evidence that they are stupid or irrational.

Comment author: blacktrance 03 March 2014 04:30:06PM 2 points [-]

the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group.

The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That's a reason to be more charitable.

Comment author: Solvent 03 March 2014 01:32:07AM 2 points [-]

I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.

That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Comment author: fubarobfusco 03 March 2014 01:53:30AM *  7 points [-]

Three somewhat disconnected responses —

For a moral realist, moral disagreements are factual disagreements.

I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.

It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Moral responses" are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)

Comment author: Sniffnoy 01 March 2014 10:32:12PM 6 points [-]

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Getting the principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways; namely, it has an interpretation which is obviously false, and another interpretation which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the wrong thing is not the right answer, because if they do, you're sidestepping actual debate. Assuming they don't mean the trivial thing is not the right answer, because sometimes these statements are worth making. Whether a statement is considered trivial or not depends on who you're talking to, and so what statements your interlocutor considers trivial will depend on who they've been talking to and reading. E.g., if they've been hanging around with non-reductionists, they might find it worthwhile to restate the basic principles of reductionism, which here we would consider trivial; and so it's easy to make a mistake and be "charitable" to them by assuming they're arguing for a stronger but incorrect position (like some sort of greedy reductionism). Meanwhile, people are using the same words to mean different things because they haven't calibrated abstract words against actual specifics, and the debate becomes terribly unproductive.

Really, being explicit about how you're interpreting something if it's not the obvious way is probably best in general. ("I'm going to assume you mean [...], because as written what you said has an obvious error, namely, [...]".) A silent principle of charity doesn't seem very helpful.

But for a helpful principle of charity, I don't think I'd go for anything about what assumptions you should be making. ("Assume the other person is arguing in good faith" is a common one, and this is a good idea, but if you don't already know what it means, it's not concrete enough to be helpful; what does that actually cash out to?) Rather, I'd go for one about what assumptions you shouldn't make. That is to say: If the other person is saying something obviously stupid (or vacuous, or whatever), consider the possibility that you are misinterpreting them. And it would probably be a good idea to ask for clarification. ("Apologies, but it seems to me you're making a statement that's just clearly false, because [...]. Am I misunderstanding you? Perhaps your definition of [...] differs from mine?") Then perhaps you can get down to figuring out where your assumptions differ and where you're using the same words in different ways.

But honestly a lot of the help of the principle of charity may just be to get people to not use the "principle of anti-charity", where you assume your interlocutor means the worst possible (in whatever sense) thing they could possibly mean. Even a bad principle of charity is a huge improvement on that.

Comment author: JoshuaZ 28 June 2014 11:47:18PM 1 point [-]

There are, I think, two other related aspects that are relevant. First, there's some tendency to interpret what other people say in a highly non-charitable or anti-charitable fashion when one already disagrees with them about something. So a principle of charity helps to counteract that. Second, even when one is using a non-silent charity principle, it can, if one is not careful, come across as condescending, so it is important to phrase it in a way that minimizes those issues.

Comment author: Bugmaster 04 March 2014 11:50:59PM 5 points [-]

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid.

As far as I understand, the Principle of Charity is defined differently; it states that you should interpret other people's arguments on the assumption that these people are arguing in good faith. That is to say, you should assume that your interlocutor honestly believes in everything he's saying, and that he has no ulterior motive beyond getting his point across. He may be entirely ignorant, stupid, or both; but he's not a liar or a troll.

This principle allows all parties to focus on the argument, and to stick to the topic at hand -- as opposed to spiraling into the endless rabbit-holes of psychoanalyzing each other.

Comment author: fubarobfusco 05 March 2014 07:31:19PM 3 points [-]

Wikipedia quotes a few philosophers on the principle of charity:

Blackburn: "it constrains the interpreter to maximize the truth or rationality in the subject's sayings."

Davidson: "We make maximum sense of the words and thoughts of others when we interpret in a way that optimises agreement."

Also, Dennett in The Intentional Stance quotes Quine that "assertions startlingly false on the face of them are likely to turn on hidden differences of language", which seems to be a related point.

Comment author: AshwinV 25 April 2014 07:14:28AM 0 points [-]

Interesting point of distinction.

Irrespective of how you define the principle of charity (i.e. motivation-based or intelligence-based), I believe that the principle should not become a universal guideline, and that it is important to apply it selectively, a sort of "principle of differential charity". This is obviously similar to basic real-world considerations (e.g. expertise, in the case of the intelligence version, and political/official positioning, in the case of the motivation version).

I also realise that being differentially charitable may come with the risk of becoming even more biased, if your priors themselves are based on extremely biased findings. However, I would think that by and large it works well, and is a great time saver when deciding how much effort to put into evaluating claims and statements alike.

Comment author: John_Maxwell_IV 03 March 2014 07:09:23AM *  4 points [-]

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

I agree with your assessment. My suspicion is that this is due to nth-degree imitations of certain high-status people in the LW community who have been rather shameless about speaking in extremely confident tones about things that they are only 70% sure about. The strategy I have resorted to for people like this is asking them/checking whether they have a PredictionBook account and, if not, assuming that they are overconfident, as is common with regular human beings. At some point I'd like to write an extended rebuttal to this post.

To provide a counterpoint, however, there are certainly a lot of people who go around confidently saying things who are not as smart or rational as a 5th-percentile LWer. So if the 5th-percentile LWer is having an argument with one of these people, it's arguably an epistemological win if they display a higher level of confidence than the other person in order to convince bystanders. An LWer friend of mine who is in the habit of speaking very confidently about things made me realize that maybe it was a better idea for me to develop antibodies to smart people speaking really confidently, and to start speaking really confidently myself, than it was for me to get him to stop speaking as confidently.

Comment author: brazil84 02 March 2014 01:10:59PM 4 points [-]

The problem with this is that other people are often saying something stupid. Because of that, I think charitable is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Well perhaps you should adopt a charitable interpretation of the principle of charity :) It occurs to me that the phrase itself might not be ideal since "charity" implies that you are giving something which the recipient does not necessarily deserve. Anyway, here's an example which I saw just yesterday:

The context is a discussion board where people argue, among other things, about discrimination against fat people.


Person 1: Answer a question for me: if you were stuck on the 3rd floor of a burning house and passed out, and you had a choice between two firefighter teams, one composed of men who weighted 150-170lbs and one composed of men well above 300, which team would you choose to rescue you?

Person 2: My brother is 6’ 9”, and with a good deal of muscle and a just a little pudge he’d be well over 350 (he’s currently on the thin side, probably about 290 or so). He’d also be able to jump up stairs and lift any-fucking-thing. Would I want him to save me? Hell yes. Gosh, learn to math,


It seems to me the problem here is that Person 2 seized upon an ambiguity in Person 1's question in order to dodge the central point of the question. The Principle of Charity would have required Person 2 to assume that the 300-pound men in the hypothetical were of average height and not 6'9".

I think it's a somewhat important principle because it's very difficult to construct statements and questions without ambiguities which can be seized upon by those who are hostile to one's argument. If I say "the sky is blue," every reasonable person knows what I mean. And it's a waste of everyone's time and energy to make me say something like "The sky when viewed from the surface of the Earth generally appears blue to humans with normal color vision during the daytime when the weather is clear."

So call it whatever you want, the point is that one should be reasonable in interpreting others' statements and questions.

Comment author: fubarobfusco 01 March 2014 06:51:17PM 8 points [-]

One thing I hear you saying here is, "We shouldn't build social institutions and norms on the assumption that members of our in-group are unusually rational." This seems right, and obviously so. We should expect people here to be humans and to have the usual human needs for community, assurance, social pleasantries, and so on; as well as the usual human flaws of defensiveness, in-group biases, self-serving biases, motivated skepticism, and so on.

Putting on the "defensive LW phyggist" hat: Eliezer pointed out a long time ago that knowing about biases can hurt people, and the "clever arguer" is a negative trope throughout that swath of the sequences. The concerns you're raising aren't really news here ...

Taking the hat off again: ... but it's a good idea to remind people of them, anyway!


Regarding jargon: I don't think the "jargon as membership signaling" approach can be taken very far. Sure, signaling is one factor, but there are others, such as —

  • Jargon as context marker. By using jargon that we share, I indicate that I will understand references to concepts that we also share. This is distinct from signaling that we are social allies; it tells you what concepts you can expect me to understand.
  • Jargon as precision. Communities that talk about a particular topic a lot will develop more fine-grained distinctions about it. In casual conversation, a group of widgets is more-or-less the same as a set of widgets; but to a mathematician, "group" and "set" refer to distinct concepts.
  • Jargon as vividness. When a community has vivid stories about a topic, referring to the story can communicate more vividly than merely mentioning the topic. Dropping a Hamlet reference can more vividly convey indecisiveness than merely saying "I am indecisive."
Comment author: private_messaging 24 April 2014 07:37:32AM *  3 points [-]

Particularly problematic is this self-congratulatory process:

some simple mistake leads to a non-mainstream conclusion -> the world is insane and I'm so much more rational than everyone else -> endorphins released -> circuitry involved in mistake-making gets reinforced.

For example: IQ is the best predictor of job performance, right? So the world is insane to mostly hire based on experience, test questions, and so on (depending on the field) rather than IQ, right? Cue the endorphins and the reinforcement of careless thinking.

If you're not after endorphins, though: IQ is a good predictor of performance within the population of people who got hired traditionally, which is a very different population from the pool of job applicants.
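
To make the selection effect concrete, here is a toy simulation (a sketch of mine with entirely made-up coefficients and cutoffs, not a summary of any real validity study): performance depends on both IQ and experience, "traditional" hiring selects the top slice by an interview-style score, and the IQ-performance correlation is then measured in both populations.

```python
# Illustrative sketch only: all coefficients and cutoffs are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

iq = rng.normal(0.0, 1.0, n)           # standardized IQ of applicants
experience = rng.normal(0.0, 1.0, n)   # standardized experience
performance = 0.5 * iq + 0.5 * experience + rng.normal(0.0, 0.7, n)

# "Traditional" hiring: take the top 10% by an interview/experience score
# that is itself correlated with performance.
interview = 0.6 * iq + 0.6 * experience + rng.normal(0.0, 0.5, n)
hired = interview > np.quantile(interview, 0.90)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print("IQ vs performance, all applicants:", round(corr(iq, performance), 2))
print("IQ vs performance, hired only:    ", round(corr(iq[hired], performance[hired]), 2))
```

In this toy setup the second correlation comes out noticeably lower than the first, the classic restriction-of-range effect: the population you can study is not the population you would actually be hiring from.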

Comment author: TheAncientGeek 24 April 2014 08:50:53AM 3 points [-]

These things can be hard to budge... they certainly look it... perhaps because the "I'm special" delusion and the "world is crazy" delusion need to fall at the same time.

Comment author: private_messaging 24 April 2014 11:16:52AM *  1 point [-]

Plus in many cases all that had been getting strengthened via reinforcement learning for decades.

It's also ridiculous how easy it is to be special in that imaginary world. Say I want to hire candidates really well, better than the competition. I need to figure out the right mix of interview questions and prior experience and so on. I probably need to make my own tests. It's hard! It's harder still if I want to know whether my methods work!

But in that crazy world, there's a test readily available, widely known, and widely used, and nobody's using it for that, because they're so irrational. And you can know you're special just by going "yeah, that sounds about right". It's like coming across 2x+2y=? and speculating about the stupid reasons why someone would be unable to apply 2+2=4 and 2*2=4 and conclude it's 4xy.

Comment author: Oscar_Cunningham 01 March 2014 11:20:17AM 19 points [-]

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

Comment author: JWP 01 March 2014 05:23:38PM 10 points [-]

Identifying as a "rationalist" is encouraged by the welcome post.

We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist

Comment author: Eliezer_Yudkowsky 01 March 2014 08:15:49PM 13 points [-]

Edited the most recent welcome post and the post of mine that it linked to.

Does anyone have a 1-syllable synonym for 'aspiring'? It seems like we need to impose better discipline on this for official posts.

Comment author: somervta 04 March 2014 12:49:44AM 4 points [-]

Consider "how you came to aspire to rationality/be a rationalist" instead of "identify as an aspiring rationalist".

Or, can the identity language and switch to "how you came to be interested in rationality".

Comment author: CCC 02 March 2014 04:56:10AM 2 points [-]

Looking at a thesaurus, "would-be" may be a suitable synonym.

Other alternatives include 'budding', or maybe 'keen'.

Comment author: wwa 02 March 2014 02:48:30AM *  2 points [-]

demirationalist - on one hand, something already above average, as in demigod; on the other, it leaves the "not quite there" feeling. My second best was epirationalist.

Didn't find anything better in my opinion, but in case you want to give it a (somewhat cheap) shot yourself... I just looped over this

Comment author: Bugmaster 05 March 2014 01:13:08AM 1 point [-]

FWIW, "aspiring rationalist" always sounded quite similar to "Aspiring Champion" to my ears.

That said, why do we need to use any syllables at all to say "aspiring rationalist"? Do we have some sort of a secret rite or trial that an aspiring rationalist must pass in order to become a true rationalist? If I have to ask, does that mean I'm not a rationalist? :-/

Comment author: brazil84 02 March 2014 11:22:48PM 0 points [-]

The only thing I can think of is "na", e.g. in Dune, Feyd-Rautha was the "na-baron," meaning that he had been nominated to succeed the baron. (And in the story he certainly was aspiring to be Baron.)

Not quite what you are asking for but not too far either.

Comment author: Oscar_Cunningham 01 March 2014 07:17:58PM 3 points [-]

And the phrase "how you came to identify as a rationalist" links to the very page where in the comments Robin Hanson suggests not using the term "rationalist", and the alternative "aspiring rationalist" is suggested!

Comment author: ChrisHallquist 03 March 2014 07:58:45AM 4 points [-]

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."

This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalists" or "perfect rationalist". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).

Comment author: Nornagest 05 March 2014 01:48:01AM *  4 points [-]

The -ist suffix can mean several things in English. There's the sense of "practitioner of [an art or science, or the use of a tool]" (dentist, cellist). There's "[habitual?] perpetrator of" or "participant in [an act]" (duelist, arsonist). And then there's "adherent of [an ideology, doctrine, or teacher]" (theist, Marxist). Seems to me that the problem has to do with equivocation between these senses as much as with the lack of an "aspiring". And personally, I'm a lot more comfortable with the first sense than the others; you can after all be a bad dentist.

Perhaps we should distinguish between rationaledores and rationalistas? Spanglish, but you get the picture.

Comment author: Vaniver 05 March 2014 03:46:25PM 0 points [-]

"Reasoner" captures this sense of "someone who does an act," but not quite the "practitioner" sense, and it does a poor job of pointing at the cluster we want to point at.

Comment author: polymathwannabe 05 March 2014 02:04:38AM 0 points [-]

The -dor suffix is only added to verbs. The Spanish word would be razonadores ("ratiocinators").

Comment author: [deleted] 01 March 2014 08:29:24PM 2 points [-]
Comment author: RichardKennaway 14 April 2014 08:36:49AM *  8 points [-]

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary.

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

You are on the programme committee of a forthcoming conference, which is meeting to decide which of the submitted papers to accept. Each paper has been refereed by several people, each of whom has given a summary opinion (definite accept, weak accept, weak reject, or definite reject) and supporting evidence for the opinion.

To transact business most efficiently, some papers are judged solely on the summary opinions. Every paper rated a definite accept by every referee for that paper is accepted without further discussion, because if three independent experts all think it's excellent, it probably is, and further discussion is unlikely to change that decision. Similarly, every paper firmly rejected by every referee is rejected. For papers that get a uniformly mediocre rating, the committee have to make some judgement about where to draw the line between filling out the programme and maintaining a high standard.

That leaves a fourth class: papers where the referees disagree sharply. Here is a paper where three referees say definitely accept, one says definitely reject. On another paper, it's the reverse. Another, two each way.

How should the committee decide on these papers? By combining the opinions only, or by reading the supporting evidence?

ETA: [1] By which I mean not "so crazy it must be wrong" but "so wrong it's crazy".

Comment author: gwern 25 April 2014 06:29:49PM *  0 points [-]

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

Verbal abuse is not a productive response to the results of an abstract model. Extended imaginary scenarios are not a productive response either. Neither explains why the proofs are wrong or inapplicable, or if inapplicable, why they do not serve useful intellectual purposes such as proving some other claim by contradiction or serving as an ideal to aspire to. Please try to do better.

Comment author: RichardKennaway 25 April 2014 08:01:00PM 2 points [-]

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something).

That was excessive, and I now regret having said it.

Comment author: RichardKennaway 25 April 2014 06:49:22PM 2 points [-]

Extended imaginary scenarios are not a productive response either.

As I said, the scenario is not imaginary.

Please try to do better.

I might have done so, had you not inserted that condescending parting shot.

Comment author: ChristianKl 25 April 2014 11:59:23PM 0 points [-]

As I said, the scenario is not imaginary.

Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction.

Thinking that Robin Hanson or someone else on Overcoming Bias hasn't thought of that argument is naive. Robin Hanson might sometimes make arguments that are wrong, but he's not stupid. If you treat him as if he were, then you are likely arguing against a strawman.

Apart from that, your example also has strange properties, like only four different kinds of judgements that reviewers are allowed to make. Why would anyone choose four?

Comment author: RichardKennaway 30 April 2014 08:11:08AM *  1 point [-]

Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction.

It is a lot more than "sometimes". In my experience (mainly in computing) no journal editor or conference chair will accept a referee's report that provides nothing but an overall rating of the paper. The rubric for the referees often explicitly states that. Where ratings of the same paper differ substantially among referees, the reasons for those differing judgements are examined.

Apart from that your example also has strange properties like only four different kind of judgements that reviewers are allowed to make. Why would anyone choose four?

The routine varies but that one is typical. A four-point scale (sometimes with a fifth not on the same dimension: "not relevant to this conference", which trumps the scalar rating). Sometimes they ask for different aspects to be rated separately (originality, significance, presentation, etc.). Plus, of course, the rationale for the verdict, without which the verdict will not be considered and someone else will be found to referee the paper properly.

Anyone is of course welcome to argue that they're all doing it wrong, or to found a journal where publication is decided by simple voting rounds without discussion. However, Aumann's theorem is not that argument; it's not the optimal version of Delphi (according to the paper that gwern quoted), and I'm not aware of any such journal. Maybe PLOS ONE? I'm not familiar with their process, but their criteria for inclusion are non-standard.

Comment author: gwern 25 April 2014 06:58:09PM -3 points [-]

As I said, the scenario is not imaginary.

Yes, it is. You still have not addressed what is either wrong with the proofs or why their results are not useful for any purpose.

I might have done so, had you not inserted that condescending parting shot.

Wow. So you started it, and now you're going to use a much milder insult as an excuse not to participate? Please try to do better.

Comment author: RichardKennaway 25 April 2014 07:24:44PM 1 point [-]

Well, the caravan moves on. That -1 on your comment isn't mine, btw.

Comment author: ChristianKl 25 April 2014 04:16:44PM 0 points [-]

I think the most straightforward way is to do a second round. Let every referee read the opinions of the other referees and see whether they converge onto a shared judgement.

If you want a more formal name, it's the Delphi method.

Comment author: RichardKennaway 25 April 2014 04:39:00PM 2 points [-]

What actually happens is that the reasons for the summary judgements are examined.

Three for, one against. Is the dissenter the only one who has not understood the paper, or the only one who knows that although the work is good, almost the same paper has just been accepted to another conference? The set of summary judgements is the same but the right final judgement is different. Therefore there is no way to get the latter from the former.
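
To make that pigeonhole point concrete, here is a toy sketch (papers, verdicts, and back-stories are all invented for illustration): two papers present the identical multiset of verdicts, but the right decisions differ, so any rule that sees only the verdicts must get at least one of them wrong.

```python
# Illustrative only: invented papers, verdicts, and "correct" decisions.
from collections import Counter

papers = {
    "paper_A": {"verdicts": ("accept", "accept", "accept", "reject"),
                "correct": "accept"},   # the lone dissenter simply misread the proof
    "paper_B": {"verdicts": ("accept", "accept", "accept", "reject"),
                "correct": "reject"},   # the lone dissenter knows it's a duplicate submission
}

def decide_from_verdicts_only(verdicts):
    """Any deterministic rule over the verdicts will do; majority vote is one example."""
    counts = Counter(verdicts)
    return "accept" if counts["accept"] > counts["reject"] else "reject"

for name, paper in papers.items():
    decision = decide_from_verdicts_only(paper["verdicts"])
    print(name, "-> decided:", decision,
          "| correct:", paper["correct"],
          "| right?", decision == paper["correct"])
# Both papers hand the rule the same input, so it gives the same answer to both
# and is wrong about one of them; the referees' reasons carry the information
# that the bare verdicts discard.
```

Majority vote is used only as an example; the argument applies to any deterministic rule over the verdicts alone.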

Aumann agreement requires common knowledge of each other's priors. When does this ever obtain? I believe Robin Hanson's argument about pre-priors just stands the turtle on top of another turtle.

Comment author: TheAncientGeek 25 April 2014 04:49:11PM *  1 point [-]

People don't coincide in their priors, don't have access to the same evidence, aren't running off the same epistemology, and can't settle epistemological debates non-circularly...

There's a lot wrong with Aumann, or at least with the way some people use it.

Comment author: ialdabaoth 01 March 2014 05:31:10PM *  5 points [-]

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy.

Not if you want to be accepted by that group. Being bad at signaling games can be crippling - as much as intellectual signaling poisons discourse, it's also the glue that holds a community together enough to make discourse possible.

Example: how likely you are to get away with making a post or comment on signaling games is primarily dependent on how good you are at signaling games, especially how good you are at the "make the signal appear to plausibly be something other than a signal" part of signaling games.

Comment author: ChrisHallquist 03 March 2014 04:21:33AM 0 points [-]

You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Comment author: ialdabaoth 03 March 2014 04:25:18AM 2 points [-]

[T]rying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Borrowing from the "Guess vs. Tell (vs. Ask)" meta-discussion, then, perhaps it would be useful for the community to have an explicit discussion about what kinds of signals we want to converge on? It seems that people with a reasonable understanding of game theory and evolutionary psychology would stand a better chance of deliberately engineering our group's social signals than of simply trusting our subconsciouses to evolve the most accurate and honest possible set.

Comment author: ChrisHallquist 03 March 2014 04:34:36AM 0 points [-]

The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.

Comment author: CellBioGuy 05 March 2014 05:44:30AM *  4 points [-]

But a humble attempt at rationalism is so much less funny...

More seriously, I could hardly agree more with the statement that intelligence has remarkably little to do with susceptibility to irrational ideas. And as much as I occasionally berate others for falling into absurd patterns, I realize that it pretty much has to be true that somewhere in my head is something just as utterly inane that I will likely never be able to see, and it scares me. As such sometimes I think dissensus is not only good, but necessary.

Comment author: 7EE1D988 01 March 2014 11:03:17AM 2 points [-]

Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

I couldn't agree more with that - to a first approximation.

Now of course, the first problem is with people who think a person is either rational in general or not, either right in general or not. Being right or rational is conflated with intelligence, because people can't seem to imagine that a cognitive engine which output so many right ideas in the past could be anything but a cognitive engine which outputs right ideas in general.

For instance and in practice, I'm pretty sure I strongly disagree with some of your opinions. Yet I agree with this bit over there, and other bits as well. Isn't it baffling how some people can be so clever, so right about a huge bundle of things (read: how they have opinions so very much like mine), and then suddenly you find they believe X, where X seems incredibly stupid and wrong to you for obvious reasons?

I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).

Problem number two is simply that thinking yourself right about a certain problem, and having thought about it for a long time before coming to your own conclusion, doesn't preclude new, original information or intelligent arguments from swaying your opinion. I'm often pretty darn certain about my beliefs (those I care about anyway; usually the instrumental beliefs and methods I need to attain my goals), but I know better than to refuse to change my opinion on a topic I care about if I'm conclusively shown to be wrong (though that should go without saying in a rationalist community).

Comment author: elharo 01 March 2014 07:46:36PM 0 points [-]

Rationality, intelligence, and even evidence are not sufficient to resolve all differences. Sometimes differences are a deep matter of values and preferences. Trivially, I may prefer chocolate and you prefer vanilla. There's no rational basis for disagreement, nor for resolving such a dispute. We simply each like what we like.

Less trivially, some people take private property as a fundamental moral right. Some people treat private property as theft. And a lot of folks in the middle treat it as a means to an end. Folks in the middle can usefully dispute the facts and logic of whether particular incarnations of private property do or do not serve other ends and values, such as general happiness and well-being. However perfectly rational and intelligent people who have different fundamental values with respect to private property are not going to agree, even when they agree on all arguments and points of evidence.

There are many other examples where core values come into play. How and why people develop and have different core values than other people is an interesting question. However even if we can eliminate all partisan-shaded argumentation, we will not eliminate all disagreements.

Comment author: brilee 01 March 2014 02:23:33PM 0 points [-]

I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).

Well said.

Comment author: waveman 01 March 2014 11:51:30AM *  4 points [-]

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality. Even Nietzsche acknowledged that it was the rationality of Christianity that led to its intellectual demise (as he saw it), as people relentlessly applied rationality tools to Christianity.

My own model of how rational we are is more in line with Ed Seykota's (http://www.seykota.com/tribe/TT_Process/index.htm) than with the typical geek model that we are basically rational with a few "biases" added on top. Ed Seykota was a very successful trader, featured in the book "Market Wizards", who concluded that trading success is not that difficult intellectually; the issues are all on the feelings side. He talks about trading but the concepts apply across the board.

For everyone who thinks that they are rational, consider: a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4.)

Personally I think it would be progress if we took as a starting point the assumption that most of the things we believe are not rational. That everything needs to be stringently tested. That taking someone's word for it, unless they have truly earned it, does not make sense.

Also: I totally agree with OP that it is routine to see intelligent people who think of themselves as rational doing things and believing things that are complete nonsense. Intelligence and rationality are, to a first approximation, orthogonal.

Comment author: DanArmak 01 March 2014 12:27:52PM *  5 points [-]

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality.

To the contrary, lots of groups make a big point of being anti-rational. Many groups (religious, new-age, political, etc.) align themselves in anti-scientific or anti-evidential ways. Most Christians, to make an example, assign supreme importance to (blind) faith that triumphs over evidence.

But more generally, humans are a-rational by default. Few individuals or groups are willing to question their most cherished beliefs, to explicitly provide reasons for beliefs, or to update on new evidence. Epistemic rationality is not the human default and needs to be deliberately researched, taught and trained.

And people, in general, don't think of themselves as being rational because they don't have a well-defined, salient concept of rationality. They think of themselves as being right.

Comment author: brazil84 02 March 2014 05:17:13PM 3 points [-]

To the contrary, lots of groups make a big point of being anti-rational

Here's a hypothetical for you: Suppose you were to ask a Christian "Do you think the evidence goes more for or more against your belief in Christ?" How do you think a typical Christian would respond? I think most Christians would respond that the evidence goes more in favor of their beliefs.

Comment author: Eugine_Nier 01 March 2014 07:16:45PM -1 points [-]

Most Christians, to make an example, assign supreme importance to (blind) faith that triumphs over evidence.

That's not what most Christians mean by faith.

Comment author: DanArmak 01 March 2014 09:56:13PM *  1 point [-]

The comment you link to gives a very interesting description of faith:

The sense of "obligation" in faith is that of duty, trust, and deference to those who deserve it. If someone deserves our trust, then it feels wrong, or insolent, or at least rude, to demand independent evidence for their claims.

I like that analysis! And I would add: obligation to your social superiors, and to your actual legal superiors (in a traditional society), is a very strong requirement and to deny faith is not merely to be rude, but to rebel against the social structure which is inseparable from institutionalized religion.

However, I think this is more of an explanation of how faith operates, not what it feels like or how faithful people describe it. It's a good analysis of the social phenomenon of faith from the outside, but it's not a good description of how it feels from the inside to be faithful.

This is because the faith actually required of religious people is faith in the existence of God and other non-evident truths claimed by their religion. As a faithful person, you can't feel that faith is "duty, trust, obligation" - you feel that it is belief. You can't feel that to be unfaithful would be to wrong someone or to rebel; you feel that it would be to be wrong about how the world really is.

However, I've now read Wikipedia on Faith in Christianity and I see there are a lot of complex opinions about the meaning of this word. So now I'm less sure of my opinion. I'm still not convinced that most Christians mean "duty, trust, deference" when they say "faith", because WP quotes many who disagree and think it means "belief".

Comment author: orbenn 01 March 2014 05:16:22PM *  1 point [-]

I think we're getting some word-confusion. Groups that claim "make a big point of being anti-rational" are against the things with the label "rational". However they do tend to think of their own beliefs as being well thought out (i.e. rational).

Comment author: DanArmak 01 March 2014 06:39:04PM 0 points [-]

No, I think we're using words the same way. I disagree with your statement that all or most groups "think of their own beliefs as being well thought out (i.e. rational)". They think of their beliefs as being right, but not as well thought out.

"Well thought out" should mean:

  1. Being arrived at through thought (science, philosophy, discovery, invention), rather than writing the bottom line first and justifying it later or not at all (revelation, mysticism, faith deliberately countering evidence, denial of the existence of objective truth).
  2. Thought out to its logical consequences, without being selective about which conclusions you adopt or compartmentalizing them, making sure there are no internal contradictions, and dealing with any repugnant conclusions.
Comment author: elharo 01 March 2014 07:54:17PM *  2 points [-]

a) Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain? Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

b) Why do you expect that a rational person would necessarily value the optimum amount of exercise (presumably optimal for health) over everything else they might have done with their time this week? And again, scientists have even less certainty about the optimum amount or type of exercise than they do about the optimum amount of food we should eat.

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

d) Why do you assume a rational person does not waste time on occasion?

Rationality is not a superpower. It does not magically produce health, wealth, or productivity. It may assist in the achievement of those and other goals, but it is neither necessary nor sufficient.

Comment author: AspiringRationalist 06 March 2014 02:11:05AM 0 points [-]

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

The question was directed at people discussing rationality on the internet. If you can afford some means of internet access, you are almost certainly not living on less than a dollar a day.

Comment author: CAE_Jones 06 March 2014 04:32:31AM 1 point [-]

I receive less in SSI than I'm paying on college debt (no degree), am legally blind, unemployed, and have internet access because these leave me with no choice but to live with my parents (no friends within 100mi). Saving for retirement is way off my radar.

(I do have more to say on how I've handled this, but it seems more appropriate for the rationality diaries. I will ETA a link if I make such a comment.)

Comment author: brazil84 02 March 2014 05:51:02PM *  0 points [-]

Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain? Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

A more rational person might have a better understanding of how his mind works and use that understanding to deploy his limited willpower to maximum effect.

Comment author: Vaniver 02 March 2014 09:32:04AM *  0 points [-]

d) Why do you assume a rational person does not waste time on occasion?

Even if producing no external output, one can still use time rather than waste it. waveman's post is about the emotional difficulties of being effective, and so to the extent that rationality is about winning, a rational person has mastered those difficulties.

Comment author: AspiringRationalist 06 March 2014 01:58:44AM 1 point [-]

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

That sentence motivated me to overcome the trivial inconvenience of logging in on my phone so I could up vote it.

Comment author: eli_sennesh 02 March 2014 04:17:23PM 0 points [-]

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

I wasted some time today. Is 3-4 times per week of strength training and 1/2 hour cardio enough exercise? Then I think I get 3/4. Woot, but I actually don't see the point of the exercise, since I don't even aspire to be perfectly rational (especially since I don't know what I would be perfectly rational about).

Comment author: Sophronius 06 March 2014 08:50:03PM *  3 points [-]

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

I'll go ahead and disagree with this. Sure, there's a lot of smart people who aren't rational, but then I would say that rationality is less common than intelligence. On the other hand, all the rational people I've met are very smart. So it seems really high intelligence is a necessary but not a sufficient condition. Or as Draco Malfoy would put it: "Not all Slytherins are Dark Wizards, but all Dark Wizards are from Slytherin."

I largely agree with the rest of your post Chris (upvoted), though I'm not convinced that the self-congratulatory part is Less Wrong's biggest problem. Really, it seems to me that a lot of people on Less Wrong just don't get rationality. They go through all the motions and use all of the jargon, but don't actually pay attention to the evidence. I frequently find myself wanting to yell "stop coming up with clever arguments and pay attention to reality!" at the screen. A large part of me worries that rationality really can't be taught; that if you can't figure out the stuff on Less Wrong by yourself, there's no point in reading about it. Or, maybe there's a selection effect and people who post more comments tend to be less rational than those who lurk?

Comment author: RichardKennaway 04 July 2014 09:56:51PM 5 points [-]

A large part of me worries that rationality really can't be taught; that if you can't figure out the stuff on Less Wrong by yourself, there's no point in reading about it.

The teaching calls to what is within the pupil. To borrow a thought from Georg Christoph Lichtenberg, if an ass looks into LessWrong, it will not see a sage looking back.

I have a number of books of mathematics on my shelves. In principle, I could work out what is in them, but in practice, to do so I would have to be of the calibre of a multiple Fields and Nobel medallist, and exercise that ability for multiple lifetimes. Yet I can profitably read them, understand them, and use that knowledge; but that does still require at least a certain level of ability and previous learning.

Or to put that another way, learning is in P, figuring out by yourself is in NP.

Comment author: Sophronius 04 July 2014 10:35:19PM *  2 points [-]

Agreed. I'm currently under the impression that most people cannot become rationalists even with training, but training those who do have the potential increases the chance that they will succeed. Still I think rationality cannot be taught like you might teach a university degree: A large part of it is inspiration, curiosity, hard work and wanting to become stronger. And it has to click. Just sitting in the classroom and listening to the lecturer is not enough.

Actually now that I think about it, just sitting in the classroom and listening to the lecturer for my economics degree wasn't nearly enough to gain a proper understanding either, yet that's all that most people did (aside from a cursory reading of the books of course). So maybe the problem is not limited to rationality but more about becoming really proficient at something in general.

Comment author: David_Gerard 04 July 2014 10:03:00AM *  1 point [-]

On the other hand, all the rational people I've met are very smart.

Surely you know people of average intelligence who consistently show "common sense" (so rare it's pretty much a superpower). They may not be super-smart, but they're sure as heck not dumb.

Comment author: Sophronius 04 July 2014 01:57:31PM 1 point [-]

Common sense does seem like a superpower sometimes, but that's not a real explanation. I think that what we call common sense is mostly just the result of clear thinking and having a distaste for nonsense. If you favour reality over fancies, you are more likely to pay more attention to reality --> better mental habits --> stronger intuition = common sense.

But to answer your question, yes I do know people like that and I do respect them for it (though they still have above average intelligence, mostly). However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing what experts to listen to.

Comment author: David_Gerard 04 July 2014 04:24:03PM *  1 point [-]

However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing what experts to listen to.

Yeah, but I'd say that about the smart people too.

Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."

Comment author: XiXiDu 04 July 2014 05:11:18PM *  3 points [-]

Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."

The AI box experiment seems to support this. People who have been persuaded that it would be irrational to let an unfriendly AI out of the box are being persuaded to let it out of the box.

The ability of smarter or more knowledgeable people to convince less intelligent or less educated people of falsehoods (e.g. parents and children) shows that we need to put less weight on arguments and more weight on falsifiability.

Comment author: Sophronius 04 July 2014 06:17:21PM *  2 points [-]

I wouldn't use the AI box experiment as an example for anything because it is specifically designed to be a black box: it's exciting precisely because the outcome confuses the heck out of people. I'm having trouble parsing this in Bayesian terms, but I think you're committing a rationalist sin by using an event that your model of reality couldn't predict in advance as evidence that your model of reality is correct.

I strongly agree that we need to put less weight on arguments but I think falsifiability is impractical in everyday situations.

Comment author: Sophronius 04 July 2014 06:31:35PM *  2 points [-]

S1) Most smart people aren't rational but most rational people are smart
D1) There are people of average intelligence with common sense
S2) Yes they have good intuition but you cannot trust them with counter-intuitive subjects (people with average intelligence are not rational)
D2) You can't trust smart people with counter-intuitive subjects either (smart people aren't rational)

D2) does not contradict S1 because "most smart people aren't rational" isn't the same as "most rational people aren't smart", which is of course the main point of S1).

Interesting article, it confirms my personal experiences in corporations. However, I think the real problem is deeper than smart people being able to rationalize anything. The real problem is that overconfidence and rationalizing your actions makes becoming a powerful decision-maker easier. The mistakes they make due to irrationality don't catch up with them until after the damage is done, and then the next overconfident guy gets selected.

Comment author: dthunt 04 July 2014 03:45:40PM 1 point [-]

Reading something and understanding/implementing it are not quite the same thing. It takes clock time and real effort to change your behavior.

I do not think it is unexpected that a large portion of the population on a site dedicated to writing, teaching, and discussing the skills of rationality is going to be, you know, still very early in the learning, and that some people will have failed to grasp a lesson they think they have grasped, and that others will think others have failed to grasp a lesson that they have failed to grasp, and that you will have people who just like to watch stuff burn.

I'm sure it's been asked elsewhere, and I liked the estimation questions on the 2013 survey; has there been a more concerted effort to see what being an experienced LWer translates to, in terms of performance on various tasks that, in theory, people using this site are trying to get better at?

Comment author: Sophronius 04 July 2014 06:05:26PM *  1 point [-]

Yes, you hit the nail on the head. Rationality takes hard work and lots of practice, and too often people on Less Wrong just spend time making clever arguments instead of doing the actual work of asking what the actual answer is to the actual question. It makes me wonder whether Less Wrongers care more about being seen as clever than they care about being rational.

As far as I know there's been no attempt to make a rationality/Bayesian reasoning test, which I think is a great pity because I definitely think that something like that could help with the above problem.

Comment author: dthunt 04 July 2014 06:41:11PM *  0 points [-]

There are many calibration tests you can take (there are many articles on this site with links to see if you are over- or under-confident on various subject tests - search for calibration).

What I don't know is if there has been some effort to do this across many questions, and compile the results anonymously for LWers.

I caution against jumping quickly to conclusions about "signalling". Frankly, I suspect you are wrong, and that most of the people here are in fact trying. Some might not be, and are merely looking for sparring matches. Those people are still learning things (albeit perhaps with less efficiency).

As far as "seeming clever", perhaps as a community it makes sense to advocate people take reasoning tests which do not strongly correlate with IQ, and that people generally do quite poorly on (I'm sure someone has a list, though it may be a relatively short list of tasks), which might have the effect of helping people to see stupid as part of the human condition, and not merely a feature of "non-high-IQ" humans.

Comment author: Sophronius 04 July 2014 06:52:14PM *  0 points [-]

Fair enough, that was a bit too cynical/negative. I agree that people here are trying to be rational, but you have to remember that signalling does not need to be on purpose. I definitely detect a strong impulse amongst the Less Wrong crowd to veer towards controversial and absurd topics rather than the practical, and to make use of meta-level thinking and complex abstract arguments instead of simple and solid reasoning. It may not feel that way from the inside, but from the outside point of view it does kind of look like Less Wrong is optimizing for being clever and controversial rather than rational.

I definitely say yes to (bayesian) reasoning tests. Someone who is not me needs to go do this right now.

Comment author: dthunt 04 July 2014 07:06:47PM *  0 points [-]

I don't know that there is anything to do, or that should be done, about that outside-view problem. Understanding why people think you're being elitist or crazy doesn't necessarily help you avoid the label.

http://lesswrong.com/lw/kg/expecting_short_inferential_distances/

Comment author: Sophronius 04 July 2014 07:29:28PM *  0 points [-]

Huh? If the outside view tells you that there's something wrong, then the problem is not with the outside view but with the thing itself. It has nothing to do with labels or inferential distance. The outside-view is a rationalist technique used for viewing a matter you're personally involved in objectively by taking a step back. I'm saying that when you take a step back and look at things objectively, it looks like Less Wrong spends more time and effort on being clever than on being rational.

But now that you've brought it up, I'd also like to add that the habit on Less Wrong to assume that any criticism or disagreement must be because of inferential distance (really just a euphemism for saying the other guy is clueless) is an extremely bad one.

Comment author: Nornagest 04 July 2014 07:45:50PM *  2 points [-]

The outside view isn't magic. Finding the right reference class to step back into, in particular, can be tricky, and the experiments the technique is drawn from deal almost exclusively with time forecasting; it's hard to say how well it generalizes outside that domain.

Don't take this as quoting scripture, but this has been discussed before, in some detail.

Comment author: Sophronius 04 July 2014 08:34:20PM *  6 points [-]

Okay, you're doing precisely the thing I hate and which I am criticizing about Less Wrong. Allow me to illustrate:

LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn't rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That's a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well unless I actually made a mistake in applying the outside view I don't see why that's relevant? And if I did make a mistake in applying it it would be more helpful to say what it was I specifically did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yea but that post has theoretical limitations.
LW1: I don't care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were actually talking about.
LW6: I agree, people here use LW jargon as a form of applause light!
LW1: Uh...
LW7: You know, accusing others of using applause lights is a fully generalized counter argument!
LW6: Oh yea? Well fully generalized counter arguments are fully generalized counter arguments themselves, so there!

We're only at LW3 right now so maybe this conversation can still be saved from becoming typical Less Wrong-style meta screwery. Or to make my point more politely: Please tell me whether or not you think Less Wrong is rational and whether or not something should be done, because that's the thing we're actually talking about.

Comment author: Nornagest 04 July 2014 08:39:19PM *  1 point [-]

Dude, my post was precisely about how you're making a mistake in applying the outside view. Was I being too vague, too referential? Okay, here's the long version, stripped of jargon because I'm cool like that.

The point of the planning fallacy experiments is that we're bad at estimating the time we're going to spend on stuff, mainly because we tend to ignore time sinks that aren't explicitly part of our model. My boss asks me how long I'm going to spend on a task: I can either look at all the subtasks involved and add up the time they'll take (the inside view), or I can look at similar tasks I've done in the past and report how long they took me (the outside view). The latter is going to be larger, and it's usually going to be more accurate.

That's a pretty powerful practical rationality technique, but its domain is limited. We have no idea how far it generalizes, because no one (as far as I know) has rigorously tried to generalize it to things that don't have to do with time estimation. Using the outside view in its LW-jargon sense, to describe any old thing, therefore is almost completely meaningless; it's equivalent to saying "this looks to me like a $SCENARIO1". As long as there also exists a $SCENARIO2, invoking the outside view gives us no way to distinguish between them. Underfitting is a problem. Overfitting is also a problem. Which one's going to be more of a problem in a particular reference class? There are ways of figuring that out, like Yvain's centrality heuristic, but crying "outside view" is not one of them.
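To make the distinction concrete, here is a minimal sketch of the two estimation styles in the time-forecasting domain where the experiments were actually run; the task names and numbers are invented for illustration.

    # Toy illustration of inside view vs. outside view for time estimation.
    # All task names and numbers are made up for the example.

    # Inside view: enumerate the subtasks you can think of and add up their estimates.
    subtask_estimates_hours = {
        "write code": 4,
        "write tests": 2,
        "code review": 1,
    }
    inside_view = sum(subtask_estimates_hours.values())  # 7 hours

    # Outside view: ignore the decomposition and look at how long similar past
    # tasks actually took, unmodeled interruptions and all.
    past_similar_tasks_hours = [9, 12, 8, 15, 10]
    outside_view = sorted(past_similar_tasks_hours)[len(past_similar_tasks_hours) // 2]  # median = 10

    print("Inside view estimate: ", inside_view, "hours")
    print("Outside view estimate:", outside_view, "hours")
    # The outside-view number is usually larger, and in the planning fallacy
    # experiments it is usually closer to what actually happens.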

As to whether LW is rational, I got bored of that kind of hand-wringing years ago. If all you're really looking for is an up/down vote on that, I suggest a poll, which I will probably ignore because it's a boring question.

Comment author: dthunt 04 July 2014 09:47:09PM 0 points [-]

My guess is that the site is "probably helping people who are trying to improve", because I would expect some of the materials here to help. I have certainly found a number of materials useful.

But a personal judgement of "probably helping" isn't the kind of thing you'd want. It'd be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.

Comment author: dthunt 04 July 2014 07:46:21PM 0 points [-]

My apologies, I thought you were referring to how people who do not use this site perceive people using the site, which seemed more likely to be what you were trying to communicate than the alternative.

Yes, the site viewed as a machine does not look like a well-designed rational-people-factory to me, either, unless I've missed the part where it's comparing its output to its input to see how it is performing. People do, however, note cognitive biases and what efforts to work against them have produced, from time to time, and there are other signs that seem consistent with a well-intentioned rational-people-factory.

And, no, not every criticism does. I can only speak for myself, and acknowledge that I have a number of times in the past failed to understand what someone was saying and assumed they were being dumb or somewhat crazy as a result. I sincerely doubt that's a unique experience.

Comment author: dthunt 04 July 2014 08:04:48PM 0 points [-]

http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/, and other articles, I now read, because they are pertinent, and I want to know what sorts of work have been done to figure out how LW is perceived and why.

Comment author: somervta 01 March 2014 09:03:52AM 2 points [-]

sake-handling -> snake-handling

Anecdotally, I feel like I treat anyone on LW as someone to take much more seriously because of that, but it's just not different enough for any of the things-perfect-rationalists-should-do to start to apply.

Comment author: ChrisHallquist 04 March 2014 05:48:16AM 1 point [-]

Skimming the "disagreement" tag in Robin Hanson's archives, I found I few posts that I think are particularly relevant to this discussion:

Comment author: Sniffnoy 01 March 2014 10:06:03PM *  1 point [-]

For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.

Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.

Comment author: Jiro 02 March 2014 07:49:12AM *  -1 points [-]

it doesn't have some specialized meaning here that differs from its use outside

It doesn't have a use outside.

I mean, yeah, literally, the words do mean the same thing and you could find someone outside lesswrong who says it, but it's an unnecessarily complicated way to say things that generally is not used. It takes more mental effort to understand, it's outside most people's expectations for everyday speech, and it may as well be jargon, even if technically it isn't. Go ahead, go down the street, and the next time you ask someone for directions and they tell you something you can't understand, reply "I have a poor mental model of how to get to my destination". They will probably look at you like you're insane.

Comment author: VAuroch 02 March 2014 09:47:18AM 2 points [-]

"Outside" doesn't have to include a random guy on the street. Cognitive science as a field is "outside", and uses "mental model".

Also, "I have a poor mental model of how to get to my destination" is, descriptively speaking, wrong usage of 'poor mental model'; it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong. I don't "have a poor mental model" of the study of anthropology; I just don't know anything about it or have any motivation to learn. I do "have a poor mental model" of religious believers; my best attempts to place myself in the frame of reference of a believer do not explain their true behavior, so I know that my model is poor.

Comment author: Jiro 02 March 2014 04:09:11PM 0 points [-]

it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong

I suggested saying it in response to being given directions you don't understand. If so, then you did indeed attempt to understand and couldn't figure it out.

"Outside" doesn't have to include a random guy on the street.

But there's a gradation. Some phrases are used only by LWers. Some phrases are used by a slightly wider range of people, some by a slightly wider than that. Whether a phrase is jargon-like isn't a yes/no thing; using a phrase which is used by cognitive scientists but which would not be understood by the man on the street, when there is another way of saying the same thing that would be understood by the man on the street, is most of the way towards being jargon, even if technically it's not because cognitive scientists count as an outside group.

Furthermore, just because cognitive scientists know the phrase doesn't mean they use it in conversation about subjects that are not cognitive science. I suspect that even cognitive scientists would, when asking each other for directions, not reply to incomprehensible directions by saying they have a poor mental model, unless they are making a joke or unless they are a character from the Big Bang Theory (and the Big Bang Theory is funny because most people don't talk like that, and the few who do are considered socially inept.)

Comment author: trist 01 March 2014 04:16:05PM *  0 points [-]

I wonder how much people's interactions with other aspiring rationalists in real life has any effect on this problem. Specifically, I think people who have become/are used to being significantly better at forming true beliefs than everyone around them will tend to discount other people's opinions more.

Comment author: brazil84 02 March 2014 09:17:55PM 1 point [-]

By the way, I agree with you that there is a problem with rationalists who are a lot less rational than they realize.

What would be nice is if there were a test for rationality just like one can test for intelligence. It seems that it would be hard to make progress without such a test.

Unfortunately there would seem to be a lot of opportunity for a smart but irrational person to cheat on such a test without even realizing it. For example, if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudly announce his atheism and would tell himself and others that he is an atheist because he is a smart, rational person and that's how he has processed the evidence.

Another problem is that there is no practical way to assess the rationality of the person who is designing the rationality test.

Someone mentioned weight control as a rationality test. This is an intriguing idea -- I do think that self-deception plays an important role in obesity. I would like to think that in theory, a rational fat person could think about the way his brain and body work; create a reasonably accurate model; and then develop and implement a strategy for weight loss based on his model.

Perhaps some day you will be able to wear a mood-ring type device which beeps whenever you are starting to engage in self-deception.

Comment author: Viliam_Bur 03 March 2014 03:02:07PM *  2 points [-]

if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudly announce his atheism

Rationality tests shouldn't be about professing things; not even things correlated with rationality. Intelligence tests also aren't about professing intelligent things (whatever those would be), they are about solving problems. Analogically, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password.

there is no practical way to assess the rationality of the person who is designing the rationality test

If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong. Again, IQ tests are not made by finding the highest-IQ people on the planet and telling them: "Please use your superior rationality in ways incomprehensible to us mere mortals to design a good IQ test."

Both intelligence and rationality are necessary in designing an IQ test or a rationality test, but that's in a similar way that intelligence and rationality are necessary to design a new car. The act of designing requires brainpower; but it's not generally true that tests of X must be designed by people with high X.

Comment author: brazil84 03 March 2014 03:37:30PM 1 point [-]

Analogically, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password.

I agree with this. But I can't think of such a rationality test. I think part of the problem is that a smart but irrational person could use his intelligence to figure out the answers that a rational person would come up with and then choose those answers.

On an IQ test, if you are smart enough to figure out the answers that a smart person would choose, then you yourself must be pretty smart. But I don't think the same thing holds for rationality.

If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong.

Well yes, but it's hard to think of how to do it right. What's an example of a question you might put on a rationality test?

Comment author: Viliam_Bur 04 March 2014 08:34:27AM 1 point [-]

I agree that rationality tests will be much more difficult than IQ tests. First, we already have IQ tests, so anyone creating a new one already knows what to do and what to expect; for rationality tests there is no such template yet. Second, rationality tests may be inherently more difficult.

Still I think that if we look at the history of the IQ tests, we can take some lessons from there. I mean; imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections. Today we know that the first IQ tests got a few things wrong. And we also know that the "online IQ tests" are nonsense from the psychometrics point of view, but to people without a psychological education they seem right, because their intuitive idea of IQ is "being able to answer difficult questions invented by other intelligent people", when in fact the questions in Raven's progressive matrices are rather simple.

20 years later we may have analogical knowledge about the rationality tests, and some things may seem obvious in hindsight. At this moment, while respecting that intelligence is not the same thing as rationality, IQ tests are the outside-view equivalent I will use for making guesses, because I have no better analogy.

The IQ tests were first developed for small children. The original purpose of the early IQ tests was to tell whether a 6 years old child is ready to go to elementary school, or whether we should give them another year. They probably even weren't called IQ tests yet, but school readiness tests. Only later was the idea of some people being "smarter/dumber for their age" generalized to all ages.

Analogically, we could probably start measuring rationality where it is easiest; on children. I'm not saying it will be easy, just easier than with adults. Many of the small children's logical mistakes will be less politically controversial. And it is easier to reason about the mistakes that you are already not prone to making. Some of the things we learn from children may later also be useful for studying adults.

Within intelligence, there was a controversy (and some people still try to keep it alive) over whether "intelligence" is just one thing, or many different things (multiple intelligences). There will be analogical questions about "rationality". And the proper way to answer these questions is to create tests for individual hypothetical components, and then to gather the data and see how these abilities correlate. Measurement and math; not speculation. Despite making an analogy here, I am not saying the answer will be the same. Maybe "resisting peer pressure" and "updating on new evidence" and "thinking about multiple possibilities before choosing and defending one of them" and "not having a strong identity that dictates all answers" will strongly correlate with each other; maybe they will be independent or even contradictory; maybe some of them will correlate together and the others will not, so we get two or three clusters of traits. This is an empirical question and must be answered by measurement.

Some of the intelligence tests in the past were strongly culturally biased (e.g. they contained questions about history or literature, knowledge of proverbs or cultural norms), some of them required specific skills (e.g. mathematical). But some of them were not. Now that we have many different solutions, we can pick the less biased ones. But even the old ones were better than nothing; useful approximations within a given cultural group. If the first rationality tests are similarly flawed, that also will not mean the entire field is doomed; later the tests can be improved and the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.

I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: "What would HJPEV, the disgusting little heathen, do in this situation?" and running a rationality routine in a sandbox. In that case, the best we could achieve would be the tests measuring someone's capacity to think rationally if they choose to. Such a person could still later become an ugly surprise. Well... I suppose we just have to accept this, and add it to the list of warnings of what the rationality tests don't show.

As an example of the questions in tests: I would probably not try to test "rationality" as a whole with a single question, but make separate questions focused on each component. For example, a test of resisting peer pressure would describe a story where one person provides good evidence for X, but many people provide obviously bad reasoning for Y; and you have to choose which is more likely. For a test of updating, I would provide multiple pieces of evidence, where the first three point towards an answer X, but the following seven point towards an answer Y, and might even contain an explanation of why the first three pieces were misleading. The reader would be asked to write an answer after reading the first three pieces, and again after reading all of them. For seeing multiple solutions, I would present some puzzle with multiple solutions, and the task would be to find as many as possible within a time limit.

Each of these questions has some obvious flaws. But, analogically with the IQ tests, I believe the correct approach is to try dozens of flawed questions, gather data, and see how much they correlate with each other, make a factor analysis, gradually replace them with more pure versions, etc.
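As a minimal sketch of that last step, with entirely invented data: score each candidate question separately for each test-taker, then look at how the item scores correlate before deciding which items measure the same thing.

    import numpy as np

    # Hypothetical scores of five test-takers on four candidate rationality items
    # (peer pressure, updating, multiple solutions, calibration); the data is invented.
    scores = np.array([
        # peer  updating  multi  calibration
        [0.9,   0.80,     0.70,  0.85],
        [0.2,   0.30,     0.40,  0.25],
        [0.6,   0.70,     0.50,  0.65],
        [0.4,   0.35,     0.60,  0.50],
        [0.8,   0.90,     0.75,  0.80],
    ])

    # Pairwise correlations between items, computed across test-takers.
    # Strong positive correlations suggest the items tap one underlying trait;
    # weak or negative ones suggest several distinct components.
    item_correlations = np.corrcoef(scores, rowvar=False)
    print(np.round(item_correlations, 2))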

Comment author: brazil84 04 March 2014 10:05:27AM 1 point [-]

Still I think that if we look at the history of the IQ tests, we can take some lessons from there. I mean; imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections.

It's hard to say given that we have the benefit of hindsight, but at least we wouldn't have to deal with what I believe to be the killer objection -- that irrational people would subconsciously cheat if they know they are being tested.

If the first rationality tests are similarly flawed, that also will not mean the entire field is doomed; later the tests can be improved and the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.

I agree, but that still doesn't get you any closer to overcoming the problem I described.

I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: "What would HJPEV, the disgusting little heathen, do in this situation?" and running a rationality routine in a sandbox. In that case, the best we could achieve would be the tests measuring someone's capacity to think rationally if they choose to.

To my mind that's not very helpful because the irrational people I meet have been pretty good at thinking rationally if they choose to. Let me illustrate with a hypothetical: Suppose you meet a person with a fervent belief in X, where X is some ridiculous and irrational claim. Instead of trying to convince them that X is wrong, you offer them a bet, the outcome of which is closely tied to whether X is true or not. Generally they will not take the bet. And in general, when you watch them making high or medium stakes decisions, they seem to know perfectly well -- at some level -- that X is not true.

Of course not all beliefs are capable of being tested in this way, but when they can be tested the phenomenon I described seems pretty much universal. The reasonable inference is that irrational people are generally speaking capable of rational thought. I believe this is known as "standby rationality mode."

Comment author: Viliam_Bur 04 March 2014 10:53:47AM *  0 points [-]

Okay, I think there is a decent probability that you are right, but at this moment we need more data, which we will get by trying to create different kinds of rationality tests.

A possible outcome is that we won't get true rationality tests, but at least something partially useful, e.g. tests selecting the people capable of rational thought, which includes a lot of irrational people, but still not everyone. Which may still appear to be just another form of intelligence test (a sufficiently intelligent irrational person is able to make rational bets, and still believe they have an invisible dragon in the garage).

So... perhaps this is a moment where I should make a bet about my beliefs. Assuming that Stanovich does not give up, and other people will follow him (that is, assuming that enough psychologists will even try to create rationality tests), I'd guess... probability 20% within 5 years, 40% within 10 years, 80% ever (pre-Singularity) that there will be a test which predicts rationality significantly better than an IQ test. Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test, even if you had to pay more for it. (Which doesn't mean that employers actually will want to use it. Or will be legally allowed to.) And probability 10% within 10 years, 60% ever that a true "rationality test" will be invented, at least for values up to 130 (which still many compartmentalizing people will pass). These numbers are just a wild guess, tomorrow I would probably give different values; I just thought it would be proper to express my beliefs in this format, because it encourages rationality in general.

Comment author: brazil84 04 March 2014 12:41:40PM 1 point [-]

Which may still appear to be just another form of intelligence test

Yes, I have a feeling that "capability of rationality" would be highly correlated with IQ.

Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test

Your mention of employees raises another issue, which is who the test would be aimed at. When we first started discussing the issue, I had an (admittedly vague) idea in my head that the test could be for aspiring rationalists, i.e. that it could be used to bust irrational LessWrong posters who are far less rational than they realize. It's arguably more of a challenge to come up with a test to smoke out the self-proclaimed paragon of rationality who has the advantage of careful study and who knows exactly what he is being tested for.

By analogy, consider the Crowne-Marlowe Social Desirability Scale, which has been described as a test which measures "the respondent's desire to exaggerate his own moral excellence and to present a socially desirable facade". Here is a sample question from the test:

  1. T F I have never intensely disliked anyone

Probably the test works pretty well for your typical Joe or Jane Sixpack. But someone who is intelligent; who has studied up in this area; and who knows what's being tested will surely conceal his desire to exaggerate his moral excellence.

That said, having thought about it, I do think there is a decent chance that solid rationality tests will be developed. At least for subjects who are unprepared. One possibility is to measure reaction times as with "Project Implicit." Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it. But you still might run into the problem of subconscious cheating.

Comment author: Nornagest 06 March 2014 11:57:05PM *  2 points [-]

Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it.

If anything, I might expect the opposite to be true in this context. Neurotypical people have fast and frugal conformity heuristics to fall back on, while self-honesty on a lot of questions would probably take some reflection; at least, that's true for questions that require aggregating information or assessing personality characteristics rather than coming up with a single example of something.

It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.

Comment author: brazil84 07 March 2014 06:30:22AM 0 points [-]

If anything, I might expect the opposite to be true in this context.

Well consider the hypothetical I proposed:

suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.

See what I mean?

I do agree that in other contexts, self-deception might require less thought. e.g. spouting off the socially preferable answer to a question without really thinking about what the correct answer is.
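If someone wanted to try this, a minimal sketch of the crudest possible version, with invented reaction times and a plain two-sample t-test, might look like:

    from scipy import stats

    # Invented reaction times (in ms) for evaluating propositions that are neutral
    # versus ones that threaten the subject's cherished belief X.
    neutral_rt = [620, 580, 640, 610, 595, 630, 605]
    threatening_rt = [810, 760, 900, 830, 795, 870, 845]

    # If rationalization is more cognitively demanding, the threatening items
    # should be systematically slower.
    t_stat, p_value = stats.ttest_ind(threatening_rt, neutral_rt)
    print("t =", round(float(t_stat), 2), "p =", round(float(p_value), 4))

Of course, a real test would also have to worry about item difficulty, practice effects, and subjects figuring out what is being measured, which is the subconscious-cheating problem all over again.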

It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.

Yes.

Comment author: Viliam_Bur 04 March 2014 04:47:22PM 1 point [-]

That sample question reminds me of a "lie score", which is a hidden part of some personality tests. Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie. Most people will lie on one or two of ten such questions, but the rule of thumb is that if they lie on five or more, you just throw the questionnaire away and declare them a cheater. -- However, if they didn't lie on any of these questions, you do a background check on whether they have studied psychology. And you keep in mind that the test score may be manipulated.

Okay, I admit that this problem would be much worse for rationality tests, because if you want a person with a given personality, they most likely didn't study psychology. But if CFAR or similar organizations become very popular, then many candidates for highly rational people will be "tainted" by the explicit study of rationality, simply because studying rationality explicitly is probably a rational thing to do (this is just an assumption), but it's also what an irrational person self-identifying as a rationalist would do. Also, practicing for IQ tests is obvious cheating, but practicing to get better at rational tasks is the rational thing to do, and a wannabe rationalist would do it, too.

Well, seems like the rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits... maybe even the reaction times or lie detectors.

Comment author: PeterDonis 06 March 2014 11:43:06PM *  0 points [-]

Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie.

On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which according to the test, indicates that I am "more willing than most people to respond to tests truthfully"; but what it indicates to me is that, for all but 6 out of 33 questions, the "nice" answer was a lie, at least for me.

The 6 questions were the ones where the answer I gave was, according to the test, the "nice" one, but just happened to be the truth in my case: for example, one of the 6 was "T F I like to gossip at times"; I answered "F", which is the "nice" answer according to the test--presumably on the assumption that most people do like to gossip but don't want to admit it--but I genuinely don't like to gossip at all, and can't stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-)

Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.

Comment author: brazil84 04 March 2014 08:28:37PM 0 points [-]

Well, seems like the rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits... maybe even the reaction times or lie detectors.

Yes, reaction times seem like an interesting possibility. There is an online test for racism which uses this principle. But it would be pretty easy to beat the test if the results counted for anything. Actually lie detectors can be beaten too.

Perhaps brain imaging will eventually advance to the point where you can cheaply and accurately determine if someone is engaged in deception or self-deception :)

Comment author: hairyfigment 01 March 2014 07:19:44PM 1 point [-]

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

When? The closest case I can recall came from someone defending religion or theology - which brought roughly the response you'd expect - and even that was a weaker claim.

If you mean people saying you should try to slightly adjust your probabilities upon meeting intelligent and somewhat rational disagreement, this seems clearly true. Worst case scenario, you waste some time putting a refutation together (coughWLC).

Comment author: orbenn 01 March 2014 04:59:07PM 1 point [-]

"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme

Perhaps a better branding would be "effective decision making", or "effective thought"?

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

I think this is the core of what you are disliking. Almost all of my reading on LW is in the Sequences rather than the discussion areas, so I haven't been placed to notice anyone's arrogance. But I'm a little sadly surprised by your experience because for me, the result of reading the sequences has been to have less trust that my own level of sanity is high. I'm significantly less certain of my correctness in any argument.

We know that knowing about biases doesn't remove them, so instead of increasing our estimate of our own rationality, it should correct our estimate downwards. This shouldn't even require pride as an expense since we're also adjusting our estimates of everyone else's sanity down a similar amount. As a check to see if we're doing things right, the result should be less time spent arguing and more time spent thinking about how we might be wrong and how to check our answers. Basically it should remind us to use type 2 thinking more whenever possible, and to seek effectiveness training for our type 1 thinking whenever available.

Comment author: Vaniver 02 March 2014 10:06:31AM 0 points [-]

I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

So, respond with something like "I don't think sanity is a single personal variable which extends to all held beliefs." It conveys the same information - "I don't trust conclusions solely because you reached them" - but it doesn't convey the implication that this is a personal failing on their part.

I've said this before when you've brought up the principle of charity, but I think it bears repeating. The primary benefit of the principle of charity is to help you, the person using it, and you seem to be talking mostly about how it affects discourse, and that you don't like it when other people expect that you'll use the principle of charity when reading them. I agree with you that they shouldn't expect that - but I find it more likely that this is a few isolated incidents (and I can visualize a few examples) than that this is a general tendency.

Comment author: cousin_it 01 March 2014 09:29:25AM *  0 points [-]

Just curious, how does Plantinga's argument prove that pigs fly? I only know how it proves that the perfect cheeseburger exists...

Comment author: Alejandro1 01 March 2014 04:06:06PM 3 points [-]

Copying the description of the argument from the Stanford Encyclopedia of Philosophy, with just one bolded replacement of a definition irrelevant to the formal validity of the argument:

Say that an entity possesses “maximal excellence” if and only if it is a flying pig. Say, further, that an entity possesses “maximal greatness” if and only if it possesses maximal excellence in every possible world—that is, if and only if it is necessarily existent and necessarily maximally excellent. Then consider the following argument:

  • There is a possible world in which there is an entity which possesses maximal greatness.

  • (Hence) There is an entity which possesses maximal greatness.

Comment author: TheOtherDave 01 March 2014 04:49:27PM 2 points [-]

This argument proves that at least one pig can fly. I understand "pigs fly" to mean something more like "for all X, if X is a typical pig, X can fly."

Comment author: Alejandro1 01 March 2014 05:28:02PM 4 points [-]

You are right. Perhaps the argument could be modified by replacing "is a flying pig" by "is a typical pig in all respects, and flies"?

Comment author: TheOtherDave 01 March 2014 09:24:25PM 1 point [-]

Perhaps. It's not clear to me that this is irrelevant to the formal validity of the argument, since "is a typical pig in all respects, and flies" seems to be a contradiction, and replacing a term in an argument with a contradiction isn't necessarily truth-preserving. But perhaps it is, I don't know... common sense would reject it, but we're clearly not operating in the realms of common sense here.

Comment author: ChrisHallquist 03 March 2014 08:05:06AM 3 points [-]

Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths, if true, are necessarily true, therefore the Goldbach Conjecture is true." Or even "Possibly it's a necessary truth that pigs fly, therefore pigs fly."

(This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)
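For the curious, here is the bare skeleton without the modal logic lesson; this is only a sketch of the usual presentation, not a formal derivation in any particular proof system, and P stands for "a maximally great being exists", or "pigs fly", or whatever you plug in:

    % Schematic form of the modal ontological argument and its parodies.
    \begin{align*}
    &1.\ \Diamond\Box P && \text{premise: possibly, $P$ is necessarily true} \\
    &2.\ \Diamond\Box P \rightarrow \Box P && \text{theorem of S5 (the dual form of axiom 5)} \\
    &3.\ \Box P && \text{from 1 and 2} \\
    &4.\ \Box P \rightarrow P && \text{axiom T: whatever is necessary is true} \\
    &5.\ P && \text{from 3 and 4}
    \end{align*}

All the work is done by the innocuous-looking premise 1, which is why the same template "proves" the Goldbach Conjecture or flying pigs.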

Comment author: cousin_it 03 March 2014 10:18:20AM 1 point [-]

"Possibly it's a necessary truth that pigs fly, therefore pigs fly."

That's nice, thanks!

Comment author: TheAncientGeek 24 April 2014 09:06:35AM *  0 points [-]

If you persistently misinterpret people as saying stupid things, then your evidence that people say a lot of stupid things is false evidence; you're in a sort of echo chamber. The PoC is correct because an actually stupid comment is a comment that can't be interpreted as smart no matter how hard you try.

The fact that some people misapply the PoC is not the PoC's fault.

The PoC is not in any way a guideline about what is worth spending time on. It is only about efficient communication, in the sense of interpreting people correctly. If you haven't got time to interpret someone charitably, you should default to some average or noncommittal appraisal of their intelligence, rather than accumulate false data that they are stupid.

Comment author: Wes_W 03 March 2014 07:44:05PM 0 points [-]

Excellent post. I don't have anything useful to add at the moment, but I am wondering if the second-to-last paragraph:

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that

is just missing a period at the end, or has a fragmented sentence.

Comment author: devas 01 March 2014 01:29:45PM 0 points [-]

I am surprised by the fact that this post has so little karma. Since one of the...let's call them "tenets" of the rationalism community is the drive to improve one's own self, I would have imagined that this kind of criticism would have been welcomed.

Can anyone explain this to me, please? :-/

Comment author: TheOtherDave 01 March 2014 04:53:10PM 7 points [-]

I'm not sure what the number you were seeing when you wrote this was, and for my own part I didn't upvote it because I found it lacked enough focus to retain my interest, but now I'm curious: how much karma would you expect a welcomed post to have received between the "08:52AM" and "01:29:45PM" timestamps?

Comment author: devas 02 March 2014 12:08:33PM 3 points [-]

I actually hadn't considered the time; in retrospect, though, it does make a lot of sense. Thank you! :-)