Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.

What Disagreement Signifies

Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.

This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement as meaning that "the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue disagreements aren't honest in this sense.

I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown in the fact that they don't like being told they're wrong—the reason why Dale Carnegie says you can't win an argument.

On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.

And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.

The situation is similar to Moore's paradox in philosophy—the impossibility of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

We can find some room for humility in an analog of the preface paradox, the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
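To put rough numbers on the preface-paradox point (the figures here are purely illustrative assumptions, nothing more):

```python
# A toy version of the preface paradox: 200 individually well-supported
# beliefs, each 99% likely to be right, with the errors treated as independent.
n_claims = 200
p_each = 0.99

p_all_correct = p_each ** n_claims
print(f"P(every single belief is right) = {p_all_correct:.2f}")      # ~0.13
print(f"P(at least one error)           = {1 - p_all_correct:.2f}")  # ~0.87
```

Each belief looks fine on its own, yet it's long odds against the whole set being error-free.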

I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.

The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.
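To make the "opinions summarize evidence" idea concrete, here is a toy model of my own (not Cowen and Hanson's): when two people share a prior and know the likelihoods, a stated posterior can be enough to reconstruct the underlying evidence.

```python
import math

# Toy setup (illustrative assumptions): a coin is either fair or two-headed,
# prior 50/50. Alice privately flips it some number of times, sees only heads,
# and reports nothing but her posterior probability that it's two-headed.

def posterior_two_headed(k_heads: int) -> float:
    # P(two-headed | k heads in a row) = 1 / (1 + 0.5**k)
    return 1.0 / (1.0 + 0.5 ** k_heads)

def evidence_from_posterior(p: float) -> int:
    # Bob inverts the report: k = log2(odds), where odds = p / (1 - p).
    return round(math.log2(p / (1.0 - p)))

report = posterior_two_headed(4)
print(report)                           # 0.941...
print(evidence_from_posterior(report))  # 4 -- her opinion encodes what she saw
```

Real disagreements aren't this tidy, of course; the point is only that between genuinely trusted rationalists with common priors, a stated opinion does much of the work that evidence-sharing normally does.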

Intelligence and Rationality

Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.

When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn't a good thing, but on re-reading the post I noticed what seems like an important oversight:

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.

Robin Hanson has argued that this leads to biases in academia:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the problem is that in philosophy, philosophers never agree on whether a philosopher has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.

Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general-purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.
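Without unpacking Plantinga's machinery of maximal greatness and maximal excellence, the bare skeleton of the standard S5 reconstruction is enough to see how the parody works (a simplified sketch, not Plantinga's own wording):

```latex
% Sketch of the standard S5 skeleton (simplified; not Plantinga's own formulation).
\begin{align*}
1.\quad & \Diamond\Box G && \text{premise: possibly, $G$ is necessarily true}\\
2.\quad & \Diamond\Box\varphi \rightarrow \Box\varphi && \text{theorem schema of S5}\\
3.\quad & \Box G && \text{from 1 and 2}
\end{align*}
```

Here G is, roughly, "a maximally excellent being exists." Line 2 accepts any φ whatsoever, so anyone who grants the analogous "possibly necessary" premise for flying pigs, or for the nonexistence of Plantinga's god, gets the matching conclusion by the same two steps; the whole argument rides on that innocent-looking "possibly" in the premise.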

The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

The Principle of Charity

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable reading is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments, the whole reason we demand other people tell us their reasons for their opinions in the first place is we fear their reasons might be bad ones.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

Beware Weirdness for Weirdness' Sake

There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't find by who—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.

Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.

This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.

The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.

A More Humble Rationalism?

I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.

Comments

So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeated

... (read more)
8JWP10y
Yes. There are reasons to ask for evidence that have nothing to do with disrespect.

* Even assuming that all parties are perfectly rational and that any disagreement must stem from differing information, it is not always obvious which party has better relevant information. Sharing evidence can clarify whether you know something that I don't, or vice versa.
* Information is a good thing; it refines one's model of the world. Even if you are correct and I am wrong, asking for evidence has the potential to add your information to my model of the world. This is preferable to just taking your word for the conclusion, because that information may well be relevant to more decisions than the topic at hand.
7paulfchristiano10y
There is truth to this sentiment, but you should keep in mind results like this one by Scott Aaronson, that the amount of info that people actually have to transmit is independent of the amount of evidence that they have (even given computational limitations).

It seems like doubting each other's rationality is a perfectly fine explanation. I don't think most people around here are perfectly rational, nor that they think I'm perfectly rational, and definitely not that they all think that I think they are perfectly rational. So I doubt that they've updated enough on the fact that my views haven't converged towards theirs, and they may be right that I haven't updated enough on the fact that their views haven't converged towards mine.

In practice we live in a world where many pairs of people disagree, and you have to disagree with a lot of people. I don't think the failure to have common knowledge is much of a vice, either of me or my interlocutor. It's just a really hard condition.
1Wei Dai10y
The point I wanted to make was that AFAIK there is currently no practical method for two humans to reliably reach agreement on some topic besides exchanging all the evidence they have, even if they trust each other to be as rational as humanly possible. The result by Scott Aaronson may be of theoretical interest (and maybe even of practical use by future AIs that can perform exact computations with the information in their minds), but seems to have no relevance to humans faced with real-world disagreements (as opposed to toy examples). I don't understand this. Can you expand?
1Lumifer10y
Huh? There is currently no practical method for two humans to reliably reach agreement on some topic, full stop. Exchanging all evidence might help, but given that we are talking about humans and not straw Vulcans, it is still not a reliable method.
1ChrisHallquist10y
I won't try to comment on the formal argument (my understanding of that literature is mostly just what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn't need to deduce exactly what the other has observed; they just need to make inferences along the lines of, "wow, she wasn't swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence." At least that's the inference you would make if you both knew you trust each other's rationality. More realistically, of course, the correct inference is usually "she wasn't swayed by me telling her my opinion, she doesn't just trust me to be rational."

Consider what would have to happen for two rationalists who knowingly trust each other's rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob's evidence, and Bob must think the same about hearing Alice's evidence. That seems to suggest they both must think they have better, more relevant evidence to the question at hand. And it might be perfectly reasonable for them to think that at first.

But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he's better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other's rationality, Alice will have to think, "I thought I was better informed than Bob about this, but it looks like Bob thinks he's the one who's better informed, so maybe I'm wrong about being better informed." And Bob will have to have the parallel thought. Eventually, they should converge.
1Eugine_Nier10y
Wei Dai's description is correct, see here for an example where the final estimate is outside the range of the initial two. And yes, the Aumann agreement theorem does not say what nearly everyone (including Eliezer) seems to intuitively think it says.
0Will_Newsome10y
Wonder if a list of such things can be constructed. Algorithmic information theory is an example where Eliezer drew the wrong implications from the math and unfortunately much of LessWrong inherited that. Group selection (multi-level selection) might be another example, but less clear cut, as that requires computational modeling and not just interpretation of mathematics. I'm sure there are more and better examples.
0RobinZ10y
The argument can even be made more general than that: under many circumstances, it is cheaper for us to discuss the evidence we have than it is for us to try to deduce it from our respective probability estimates.
0PeterDonis10y
I'm not sure this qualifier is necessary. Your argument is sufficient to establish your point (which I agree with) even if you do trust the other's rationality.
4ChrisHallquist10y
Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.
1PeterDonis10y
Is that because you think it's necessary to Wei_Dai's argument, or just because you would like people to be up front about what they think?
-1Gunnar_Zarncke10y
Yes. But it entirely depends on how the request for supportive references is phrased. Good: Bad: The neutral leaves the interpretation of the attitude to the reader/addressee and is bound to be misinterpreted (people misinterpreting tone or meaning of email).
1ChrisHallquist10y
Saying sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).
-2Gunnar_Zarncke10y
But shouldn't you always update toward the others position? And if the argument isn't convincing, you can truthfully say that you updated only slightly.

But shouldn't you always update toward the others position?

That's not how Aumann's theorem works. For example, if Alice mildly believes X and Bob strongly believes X, it may be that Alice has weak evidence for X, and Bob has much stronger independent evidence for X. Thus, after exchanging evidence they'll both believe X even more strongly than Bob did initially.

Yup!

One related use case is when everyone in a meeting prefers policy X to policy Y, although each are a little concerned about one possible problem. Going around the room and asking everyone how likely they think X is to succeed produces estimates of 80%, so, having achieved consensus, they adopt X.

But, if people had mentioned their particular reservations, they would have noticed they were all different, and that, once they'd been acknowledged, Y was preferred.
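A minimal sketch of the arithmetic behind both points in this exchange, with made-up numbers:

```python
# (1) Independent evidence combines multiplicatively in odds form, so two
# people who both already favor X can rationally end up *more* confident
# than either was alone (the Alice-weak / Bob-strong case above).
def pooled_probability(prior_odds, *likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

alice_lr, bob_lr = 3.0, 9.0                       # illustrative evidence strengths
print(pooled_probability(1.0, alice_lr))          # Alice alone:  0.75
print(pooled_probability(1.0, bob_lr))            # Bob alone:    0.90
print(pooled_probability(1.0, alice_lr, bob_lr))  # pooled:       ~0.96

# (2) The meeting: five people each say "80% chance X succeeds" because each
# privately knows a different, independent 20%-chance failure mode. The
# apparent consensus hides a much worse joint estimate.
print(0.8 ** 5)                                   # ~0.33
```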

7Viliam_Bur10y
Even if they both equally strongly believe X, it makes sense for them to talk about whether they both used the same evidence or different evidence.
2Will_Newsome10y
Obligatory link.
0Gunnar_Zarncke10y
Of course. I agree that doesn't make clear that the other holds another position and that the reply may just address the validity of the evidence. But even then shouldn't you see it at least as weak evidence and thus believe X at least a bit more strongly?

I interpret you as making the following criticisms:

1. People disagree with each other, rather than use Aumann agreement, which proves we don't really believe we're rational

Aside from Wei's comment, I think we also need to keep track of what we're doing.

If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.

But this doesn't preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:

  • We can both increase our understanding of the issue.

  • We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are

... (read more)

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm

... (read more)

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.

I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't ag... (read more)

2ChrisHallquist10y
I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, the actual heuristics I use to determine who's worth paying attention to are:

* What I know of an individual's track record of saying reasonable things.
* Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
* Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.

Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet). It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.

Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain ... (read more)

2TheAncientGeek10y
Has anyone noticed that, given the fact that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?
4ChrisHallquist10y
So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)
9Protagoras10y
Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).
1TheAncientGeek9y
Since there is no consensus among philosophers, respecting philosophy is about respecting the process. The negative claims LW makes about philosophy are indeed similar to the negative claims philosophy makes about itself. LW also makes the positive claim that it has a better, faster method than philosophy, but in fact just has a truncated version of the same method. As Hallquist notes elsewhere:

But Alexander misunderstands me when he says I accuse Yudkowsky "of being against publicizing his work for review or criticism." He's willing to publish it–but only to enlighten us lesser rationalists. He doesn't view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That's a problem.

Or, as I like to put it, if you half bake your bread, you get your bread quicker... but it's half baked.
0TheAncientGeek10y
If what philosophers specialise in is clarifying questions, they can be trusted to get the question right. A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.
0Vaniver10y
You might be interested in this article and this sequence (in particular, the first post of that sequence). "Academic philosophy sucks" is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.
-1TheAncientGeek10y
Read them, not generally impressed.
1torekp10y
Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer's bacterium hypothesis. I'd say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.
0ChrisHallquist10y
With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice few other philosophers think the modal ontological argument accomplishes anything.

Examples? I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

Proper logical form comes cheap: just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."
2Scott Alexander10y
I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions. But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age."

So Marshall decided since he couldn't get anyone to fund a study, he would study it on himself, drank a serum of bacteria, and got really sick. Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I think it is much much less than the general public, but I don't
7Jiro10y
The extent to which science rejected the ulcer bacterium theory has been exaggerated. (And that article also addresses some quotes from Marshall himself which don't exactly match up with the facts.)
2ChrisHallquist10y
What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that. That leaves us with "proper logical form," about which you said: In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.
2Kawoomba10y
If they were, uFAI would be a non-issue. (They are not.)
-5pcm10y
-11TheAncientGeek10y
0TheAncientGeek10y
Not being charitable to people isn't a problem, providing you don't mistake your lack of charity for evidence that they are stupid or irrational.
4Solvent10y
That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Three somewhat disconnected responses —

For a moral realist, moral disagreements are factual disagreements.

I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.

It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Moral responses" are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)

2blacktrance10y
The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That's a reason to be more charitable.
8[anonymous]10y
Besides which, we're human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.
6elharo10y
FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic see Keith Stanovich's What Intelligence Tests Miss.

In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving one's goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.

For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc. For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents' basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.
2Kaj_Sotala10y
LW discussion.

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

SlateStarCodex

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

Identifying as a "rationalist" is encouraged by the welcome post.

We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist

Edited the most recent welcome post and the post of mine that it linked to.

Does anyone have a 1-syllable synonym for 'aspiring'? It seems like we need to impose better discipline on this for official posts.

8somervta10y
Consider "how you came to aspire to rationality/be a rationalist" instead of "identify as an aspiring rationalist". Or, can the identity language and switch to "how you came to be interested in rationality".
3CCC10y
Looking at a thesaurus, "would-be" may be a suitable synonym. Other alternatives include 'budding', or maybe 'keen'.
2Bugmaster10y
FWIW, "aspiring rationalist" always sounded quite similar to "Aspiring Champion" to my ears. That said, why do we need to use any syllables at all to say "aspiring rationalist" ? Do we have some sort of a secret rite or a trial that an aspiring rationalist must pass in order to become a true rationalist ? If I have to ask, does that mean, I'm not a rationalist ? :-/
2wwa10y
demirationalist - on one hand, something already above average, like in demigod. On the other, leaves the "not quite there" feeling. My second best was epirationalist. Didn't find anything better in my opinion, but in case you want to give it a (somewhat cheap) shot yourself... I just looped over this
0brazil8410y
The only thing I can think of is "na" e.g. in Dune, Feyd Rauthah was the "na-baron," meaning that he had been nominated to succeed the baron. (And in the story he certainly was aspiring to be Baron.) Not quite what you are asking for but not too far either.
4Oscar_Cunningham10y
And the phrase "how you came to identify as a rationalist" links to the very page where in the comments Robin Hanson suggests not using the term "rationalist", and the alternative "aspiring rationalist" is suggested!

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."

This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalists" or "perfect rationalists". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).

8Nornagest10y
The -ist suffix can mean several things in English. There's the sense of "practitioner of [an art or science, or the use of a tool]" (dentist, cellist). There's "[habitual?] perpetrator of" or "participant in [an act]" (duelist, arsonist). And then there's "adherent of [an ideology, doctrine, or teacher]" (theist, Marxist). Seems to me that the problem has to do with equivocation between these senses as much as with the lack of an "aspiring". And personally, I'm a lot more comfortable with the first sense than the others; you can after all be a bad dentist. Perhaps we should distinguish between rationaledores and rationalistas? Spanglish, but you get the picture.
2polymathwannabe10y
The -dor suffix is only added to verbs. The Spanish word would be razonadores ("ratiocinators").
0Vaniver10y
"Reasoner" captures this sense of "someone who does an act," but not quite the "practitioner" sense, and it does a poor job of pointing at the cluster we want to point at.
6A1987dM10y
Recency illusion?

I've recently had to go on (for a few months) some medication which had the side effect of significant cognitive impairment. Let's hand-wavingly equate this side effect to shaving thirty points off my IQ. That's what it felt like from the inside.

While on the medication, I constantly felt the need to idiot-proof my own life, to protect myself from the mistakes that my future self would certainly make. My ability to just trust myself to make good decisions in the future was removed.

This had far more ramifications than I can go into in a brief comment, but I can generalize by saying that I was forced to plan more carefully, to slow down, to double-check my work. Unable to think as deeply into problems in a freewheeling cognitive fashion, I was forced to break them down carefully on paper and understand that anything I didn't write down would be forgotten.

Basically what I'm trying to say is that being stupider probably forced me to be more rational.

When I went off the medication, I felt my old self waking up again, the size of concepts I could manipulate growing until I could once again comprehend and work on programs I had written before starting the drugs in the first place. I... (read more)

7John_Maxwell10y
What medication?

One thing I hear you saying here is, "We shouldn't build social institutions and norms on the assumption that members of our in-group are unusually rational." This seems right, and obviously so. We should expect people here to be humans and to have the usual human needs for community, assurance, social pleasantries, and so on; as well as the usual human flaws of defensiveness, in-group biases, self-serving biases, motivated skepticism, and so on.

Putting on the "defensive LW phyggist" hat: Eliezer pointed out a long time ago that knowing about biases can hurt people, and the "clever arguer" is a negative trope throughout that swath of the sequences. The concerns you're raising aren't really news here ...

Taking the hat off again: ... but it's a good idea to remind people of them, anyway!


Regarding jargon: I don't think the "jargon as membership signaling" approach can be taken very far. Sure, signaling is one factor, but there are others, such as —

  • Jargon as context marker. By using jargon that we share, I indicate that I will understand references to concepts that we also share. This is distinct from signaling that we are social allies; it
... (read more)

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses.

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding other's ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

3alicey10y
i tend to express ideas tersely, which counts as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)
7shokwave10y
I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those. For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".
0RobinZ10y
I like it - do you know if it works in face-to-face conversations?
6TheOtherDave10y
Well, you describe the problem as terseness. If that's true, it suggests that one set of improvements might involve explaining your ideas more fully and providing more of your reasons for considering those ideas true and relevant and important. Have you tried that? If so, what has the result been?
-2alicey10y
-
6TheOtherDave10y
I understand this to mean that the only value you see to non-brevity is its higher success at manipulation. Is that in fact what you meant?
0alicey10y
-
4elharo10y
In other words, you prefer brevity to clarity and being understood? Something's a little skewed here. It sounds like you and TheOtherDave have both identified the problem. Assuming you know what the problem is, why not fix it? It may be that you are incorrect about the cause of the problem, but it's easy enough to test your hypothesis. The cost is low and the value of the information gained would be high. Either you're right and brevity is your problem, in which case you should be more verbose when you wish to be understood. Or you're wrong and added verbosity would not make people less inclined to "round you off to the nearest cliche", in which case you could look for other changes to your writing that would help readers understand you better.
7philh10y
Well, I think that "be more verbose" is a little like "sell nonapples". A brief post can be expanded in many different directions, and it might not be obvious which directions would be helpful and which would be boring.
2jamesf10y
What does brevity offer you that makes it worthwhile, even when it impedes communication? Predicting how communication will fail is generally Really Hard, but it's a good opportunity to refine your models of specific people and groups of people.
0alicey10y
improving signal to noise, holding the signal constant, is brevity

when brevity impedes communication, but only with a subset of people, then the reduced signal is because they're not good at understanding brief things, so it is worth not being brief with them, but it's not fun
1ThrustVectoring10y
I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.
0alicey10y
revisiting this, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is
2RobinZ10y
This understates the case, even. At different times, an individual can be more or less prone to haste, laziness, or any of several possible sources of error, and at times, you yourself can commit any of these errors. I think the greatest value of a well-formulated principle of charity is that it leads to a general trend of "failure of communication -> correction of failure of communication -> valuable communication" instead of "failure of communication -> termination of communication".

Actually, there's another point you could make along the lines of Jay Smooth's advice about racist remarks, particularly the part starting at 1:23, when you are discussing something in 'public' (e.g. anywhere on the Internet). If I think my opposite number is making bad arguments (e.g. when she is proposing an a priori proof of the existence of a god), I can think of few more convincing avenues to demonstrate to all the spectators that she's full of it than by giving her every possible opportunity to reveal that her argument is not wrong.

Regardless of what benefit you are balancing against a cost, though, a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution. Saying "I don't care what you think" will burn bridges with many non-LessWrongian folk; saying, "This argument seems like a huge time sink" is much less likely to.
2Lumifer10y
So if I believe that someone is stupid, mindkilled, etc. and is not capable (at least at the moment) of contributing anything valuable, does this principle emphasize that I should not believe that, or that I should not say so to that person?
4Vaniver10y
It's not obvious to me that's the right distinction to make, but I do think that the principle of charity does actually result in a map shift relative to the default. That is, an epistemic principle of charity is a correction like one would make with the fundamental attribution error: "I have only seen one example of this person doing X, I should restrain my natural tendency to overestimate the resulting update I should make." That is, if you have not used the principle of charity in reaching the belief that someone else is stupid or mindkilled, then you should not use that belief as reason to not apply the principle of charity.
0Lumifer10y
What is the default? And is it everyone's default, or only the unenlightened ones', or whose? This implies that the "default" map is wrong -- correct? I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard -- that I should apply it anyway even if I don't think it's needed? An attribution error is an attribution error -- if you recognize it you should fix it, and not apply global corrections regardless.
5Vaniver10y
I am pretty sure that most humans are uncharitable in interpreting the skills, motives, and understanding of someone they see as a debate opponent, yes. This observation is basically the complement of the principle of charity -- the PoC exists because "most people are too unkind here; you should be kinder to try to correct," and if you have somehow hit the correct level of kindness, then no further change is necessary.

I think that the principle of charity is like other counter-biasing techniques. This question seems just weird to me: how do you know you can trust your cognitive system when it says "nah, I'm not being biased right now"? This calls to mind the statistical prediction rule results, where people would come up with all sorts of stories for why their impression was more accurate than linear fits to the accumulated data -- but, of course, those were precisely the times when they should have silenced their inner argument and gone with the more accurate rule. The point of these sorts of things is that you take them seriously, even when you generate rationalizations for why you shouldn't take them seriously! (There are, of course, times when the rules do not apply, and not every argument against a counterbiasing technique is a rationalization. But you should be doubly suspicious of such arguments.)
0Lumifer10y
It's weird to me that the question is weird to you X-/ You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception, or whether there is a persistent bias. If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking, or even which corrections to apply.
4Vaniver10y
To me, a fundamental premise of the bias-correction project is "you are running on untrustworthy hardware." That is, biases are not just of academic interest, and not just ways that other people make mistakes, but known flaws that you personally should attend to with regard to your own mind.

There's more, but I think in order to explain that better I should jump to this first: you can ascribe different parts of your cognitive system different levels of trust, and build a hierarchy out of them. To illustrate with a simple example, I can model myself as having a 'motive-detection system,' which is normally rather accurate but loses accuracy when used on opponents. Then there's a higher-level system, a 'bias-detection system,' which detects how much accuracy is lost when I use my motive-detection system on opponents. Because this is hierarchical, I think it bottoms out in a finite number of steps; I can use my trusted 'statistical inference' system to verify the results from my 'bias-detection' system, which then informs how I use the results from my 'motive-detection' system.

Suppose I just had the motive-detection system, and learned of the PoC. The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased: "All my opponents are malevolent or idiots, because I think they are." The right thing to do would be to construct the bias-detection system, and actively behave in such a way as to generate more data to determine whether or not my motive-detection system is inaccurate, and if so, where and by how much. Only after a while of doing this can I begin to trust myself to know whether or not the PoC is needed, because by then I've developed a good sense of how unkind I become when considering my opponents. If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by either severing the link between my belief in their evil stupidity and my actions when discussing with them
0Lumifer10y
Before I get into the response, let me make a couple of clarifying points.

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument". I am not primarily talking about arguments; I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Second, my evaluation of stupidity is based more on how a person argues than on what position he holds. To give an example, I know some smart people who have argued against evolution (not in the sense that it doesn't exist, but rather in the sense that the current evolutionary theory is not a good explanation for a bunch of observables). On the other hand, if someone comes in and goes "ha ha duh of course evolution is correct my textbook says so what u dumb?", well then...

I don't like this approach. Mainly this has to do with the fact that unrolling "untrustworthy" makes it very messy. As you yourself point out, a mind is not a single entity. It is useful to treat it as a set or an ecology of different agents which have different capabilities, often different goals, and typically pull in different directions. Given this, who is doing the trusting or distrusting? And given the major differences between the agents, what does "trust" even mean?

I find this expression is usually used to mean that the human mind is not a simple-enough logical calculating machine. My first response to this is duh! and the second one is that this is a good thing.

Consider an example. Alice, a hetero girl, meets Bob at a party. Bob looks fine, speaks the right words, etc. and Alice's conscious mind finds absolutely nothing wrong with the idea of dragging him into her bed. However her gut instincts scream at her to run away fast -- for no good reason that her consciousness can discern. Basically she has a really bad feeling
2Vaniver10y
I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students' answers, and instead be worried about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it's not obvious to me that you wanted to switch contexts -- 7EE1D988 and RobinZ both look like they're discussing conversations or arguments -- and you may want to be clearer in the future about context changes.)

Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).

Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).

From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition -- perhaps she has social anxiety or paranoia she's trying to overcome, and a trusted (probably female) friend doesn't get the same threatening vibe from Bob.

You don't directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you? As a more mathematical example, in the iterated prisoner's dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.

This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people's motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe "If I were to greet Mallory, he would snub me," and thus in order to avoid the status hit I don't say
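To make the noisy iterated-prisoner's-dilemma aside concrete, here is a minimal simulation sketch, assuming an illustrative payoff matrix, a 1% noise rate, and a 20% forgiveness rate (none of these numbers come from the comment); the point is only the direction of the effect.

```python
# Sketch: noisy iterated prisoner's dilemma, TitForTat vs a forgiving variant.
# All parameters below (payoffs, noise, forgiveness) are illustrative assumptions.
import random

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_last):
    # Cooperate first, then copy the opponent's last observed move.
    return 'C' if opponent_last is None else opponent_last

def generous_tit_for_tat(opponent_last, forgiveness=0.2):
    # Like TitForTat, but forgive a defection 20% of the time.
    if opponent_last == 'D' and random.random() > forgiveness:
        return 'D'
    return 'C'

def play(strategy_a, strategy_b, rounds=1000, noise=0.01):
    """Average per-round payoffs when each intended move is flipped with probability `noise`."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        if random.random() < noise:
            move_a = 'D' if move_a == 'C' else 'C'
        if random.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a / rounds, score_b / rounds

random.seed(0)
print("TFT  vs TFT :", play(tit_for_tat, tit_for_tat))
print("GTFT vs GTFT:", play(generous_tit_for_tat, generous_tit_for_tat))
```

With these assumptions, the TFT-vs-TFT average payoff should typically fall well below the mutual-cooperation value of 3, because a single flipped move locks the pair into cycles of retaliation, while the forgiving pairing recovers and stays close to 3. That is the sense in which the principle of charity plays the role of the forgiveness that compensates for noise.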
2RobinZ10y
In Lumifer's defense, this thread demonstrates pretty conclusively that "the principle of charity" is also far too terse a package. (:
2Vaniver10y
For an explanation, agreed; for a label, disagreed. That is, I think it's important to reduce "don't be an idiot" into its many subcomponents, and identify them separately whenever possible,
0RobinZ10y
Mm - that makes sense.
0Lumifer10y
Well, not quite, I think the case here was/is that we just assign different meanings to these words. P.S. And here is yet another meaning...
0Lumifer10y
That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it's worth it, sometimes not.

Really? For a randomly sampled person, my prior already is that talking to him/her will be wasted effort. And if in addition to that he offers evidence of stupidity, well... I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

Yes, but properly unpacking it will take between one and several books at best :-/
0Vaniver10y
For people, is there a meaningful difference between the two? The primary difference between "your software is buggy" and "your hardware is untrustworthy" that I see is that the first suggests the solution is easier: just patch the bug! It is rarely enough to just know that the problem exists, or what steps you should take to overcome the problem; generally one must train oneself into being someone who copes effectively with the problem (or, rarely, into someone who does not have the problem).

I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn't mean you're not overestimating that value, and we're back to my root issue: your response to "your judgment of other people might be flawed" seems to be "but I've judged them already, why should I do it twice?"

Indeed; I have at least a shelf, and growing, devoted to decision-making and ameliorative psychology.
0Lumifer10y
Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

I said I will update on the evidence. The difference seems to be that you consider that insufficient -- you want me to actively seek new evidence, and I think it's rarely worthwhile.
4EHeller10y
I don't think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of 'mind') after a stroke.
0Lumifer10y
You don't think it's meaningful to model people as having a hardware layer and a software layer? Why? Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.
2Vaniver10y
Echoing the others, this is more dualistic than I'm comfortable with. It looks to me that in people you just have 'wetware' that is both hardware and software simultaneously, rather than the crisp distinction that exists between them in silicon.

Correct. I do hope that you noticed that this still relies on a potentially biased judgment ("I think it's rarely worthwhile" is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we're at mutual understanding.
0Lumifer10y
To quote myself, we're talking about "model[ing] people as having a hardware layer and a software layer". And to quote Monty Python, it's only a model. It is appropriate for some uses and inappropriate for others. For example, I think it's quite appropriate for a neurosurgeon. But it's probably not as useful for thinking about biofeedback, to give another example.

Of course, but potentially biased judgments are all I have. They are still all I have even if I were to diligently apply the PoC everywhere.
0[anonymous]10y
Huh, I don't think I've ever understood that metaphor before. Thanks. It's oddly dualist.
-4TheAncientGeek10y
I'll say it again: the PoC isn't at all about when it's worth investing effort in talking to someone.
0TheAncientGeek10y
What is the reality about whether you interpreted someone correctly? When do you hit the bedrock of Real Meaning?
-2TheAncientGeek10y
tl;dr: The principle of charity corrects biases you're not aware of.
2RobinZ10y
I see that my conception of the "principle of charity" is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind: The principle of charity isn't a propositional thesis, it's a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning. My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker's reasoning processes was more than you are willing to spend. In such a case, what the principle of charity implies are two things: * You should consider the nuttiness of the speaker as being an open question with a large prior probability, akin to your belief prior to lifting a dice cup that you have not rolled double-sixes, rather than a closed question with a large posterior probability, akin to your belief that the modern evolutionary synthesis is largely correct. * You should withdraw from the conversation in such a fashion as to emphasize that you are in general willing to put forth the effort to understand what they are saying, but that the moment is not opportune. Minor tyop fix T1503-4.
2Lumifer10y
I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.

You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn't seem to me that the fashion of withdrawing from a conversation will help me "reliably distinguish" anything.

As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either. In fact, I don't see why there should be a particular exception here ("a procedural rule") to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a "closed question" or a "large posterior" -- it all depends on the particulars.
8TheAncientGeek10y
I'll say it again: POC doesn't mean "believe everyone is sane and intelligent", it means "treat everyone's comments as though they were made by a sane, intelligent person".
1TheAncientGeek9y
I.e., it's a defeasible assumption. If you fail, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn't. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people's dumbness go forever untested because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.
0Lumifer9y
That's fine. I have limited information processing capacity -- my opportunity costs for testing other people's dumbness are fairly high. In the information age I don't see how anyone can operate without the "this is too stupid to waste time on" pre-filter.
0TheAncientGeek9y
The PoC tends to be advised in the context of philosophy, where there is a background assumption of infinite amounts of time to consider things. The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion... with the corollary of reserving some space for "I might be wrong" where you haven't had the resources to test the hypothesis.
0Lumifer9y
LOL. While ars may be longa, vita is certainly brevis. This is a silly assumption, better suited for theology, perhaps -- it, at least, promises infinite time. :-) If I were living in the English countryside around the XVIII century I might have had a different opinion on the matter, but I do not.

It's not a binary either-or situation. I am willing to interpret comments charitably according to my (updateable) prior of how knowledgeable, competent, and reasonable the writer is. In some situations I would stop and ponder, in others I would roll my eyes and move on.
0TheAncientGeek9y
Users report that charitable interpretation gives you more evidence for updating than you would have otherwise.
0TheAncientGeek9y
Are you already optimal? How do you know?
0satt9y
As I operationalize it, that definition effectively waters down the POC to a degree I suspect most POC proponents would be unhappy with. Sane, intelligent people occasionally say wrong things; in fact, because of selection effects, it might even be that most of the wrong things I see & hear in real life come from sane, intelligent people. So even if I were to decide that someone who's just made a wrong-sounding assertion were sane & intelligent, that wouldn't lead me to treat the assertion substantially more charitably than I otherwise would (and I suspect that the kind of person who likes the(ir conception of the) POC might well say I were being "uncharitable").

Edit: I changed "To my mind" to "As I operationalize it". Also, I guess a shorter form of this comment would be: operationalized like that, I think I effectively am applying the POC already, but it doesn't feel like it from the inside, and I doubt it looks like it from the outside.
0TheAncientGeek9y
You have uncharitably interpreted my formulation to mean "treat everyone's comments as though they were made by a sane, intelligent person who may or may not have been having an off day". What kind of guideline is that? The charitable version would have been "treat everyone's comments as though they were made by someone sane and intelligent at the time".
0satt9y
(I'm giving myself half a point for anticipating that someone might reckon I was being uncharitable.)

A realistic one.

The thing is, that version actually sounds less charitable to me than my interpretation. Why? Well, I see two reasonable ways to interpret your latest formulation.

The first is to interpret "sane and intelligent" as I normally would, as a property of the person, in which case I don't understand how appending "at the time" makes a meaningful difference. My earlier point that sane, intelligent people say wrong things still applies. Whispering in my ear, "no, seriously, that person who just said the dumb-sounding thing is sane and intelligent right now" is just going to make me say, "right, I'm not denying that; as I said, sanity & intelligence aren't inconsistent with saying something dumb".

The second is to insist that "at the time" really is doing some semantic work here, indicating that I need to interpret "sane and intelligent" differently. But what alternative interpretation makes sense in this context? The obvious alternative is that "at the time" is drawing my attention to whatever wrong-sounding comment was just made. But then "sane and intelligent" is really just a camouflaged assertion of the comment's worthiness, rather than the claimant's, which reduces this formulation of the POC to "treat everyone's comments as though the comments are cogent".

The first interpretation is surely not your intended one because it's equivalent to one you've ruled out. So presumably I have to go with the second interpretation, but it strikes me as transparently uncharitable, because it sounds like a straw version of the POC ("oh, so I'm supposed to treat all comments as cogent, even if they sound idiotic?"). The third alternative, of course, is that I'm overlooking some third sensible interpretation of your latest formulation, but I don't see what it is; your comment's too pithy to point me in the right direction.
0TheAncientGeek9y
Yep. You have assumed that cannot be the correct interpretation of the PoC, without saying why.

In light of your other comments, it could well be that you are assuming that the PoC can only be true by correspondence to reality, or false by lack of correspondence. But norms, guidelines, heuristics, and advice lie on an axis orthogonal to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people, and of calibrating your confidence levels? That is the question.
0satt9y
OK, but that's not an adequate basis for recommending a given norm/guideline/heuristic. One has to at least sketch an answer to the question, drawing on evidence and/or argument (as RobinZ sought to).

Well, because it's hard for me to believe you really believe that interpretation and understand it in the same way I would naturally operationalize it: namely, noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way. As soon as I imagine applying that procedure to a concrete case, I cringe at how patently silly & unhelpful it seems.

Here's a recent-ish, specific example of me expressing disagreement with a statement I immediately suspected was incorrect. What specifically would I have done if I'd treated the seemingly patently wrong comment as cogent instead? Read the comment, thought "that can't be right", then shaken my head and decided, "no, let's say that is right", and then...? Upvoted the comment? Trusted but verified (i.e. not actually treated the comment as cogent)? Replied with "I presume this comment is correct, great job"?

Surely these are not courses of action you mean to recommend (the first & third because they actively support misinformation, the second because I expect you'd find it insufficiently charitable). Surely I am being uncharitable in operationalizing your recommendation this way... even though that does seem to me the most literal, straightforward operationalization open to me. Surely I misunderstand you. That's why I assumed "that cannot be the correct interpretation" of your POC.
1CCC9y
If I may step in at this point: "cogent" does not mean "true". The principle of charity (as I understand it) merely recommends treating any commenter as reasonably sane and intelligent. This does not mean he can't be wrong - he may be misinformed, he may have made a minor error in reasoning, he may simply not know as much about the subject as you do. Alternatively, you may be misinformed, or have made a minor error in reasoning, or not know as much about the subject as the other commenter...

So the correct course of action then, in my opinion, is to find the source of error and to be polite about it. The example post you linked to was a great example - you provided statistics, backed them up, and linked to your sources. You weren't rude about it, you simply stated facts. As far as I could see, you treated RomeoStevens as sane, intelligent, and simply lacking in certain pieces of pertinent historical knowledge - which you have now provided.

(As to what RomeoStevens said - it was cogent. That is to say, it was pertinent and relevant to the conversation at the time. That it was wrong does not change the fact that it was cogent; if it had been right it would have been a worthwhile point to make.)
6satt9y
Yes, and were I asked to give synonyms for "cogent", I'd probably say "compelling" or "convincing" [edit: rather than "true"]. But an empirical claim is only compelling or convincing (and hence may only be cogent) if I have grounds for believing it very likely true. Hence "treat all comments as cogent, even if they sound idiotic" translates [edit: for empirical comments, at least] to "treat all comments as if very likely true, even if they sound idiotic".

Now that you mention the issue of relevance: yeah, I agree that relevance is part of the definition of "cogent", but I also reckon that relevance is only a necessary condition for cogency, not a sufficient one. And so I have to push back here. While pertinent, the comment was not only wrong but (to me) obviously very likely wrong, and RomeoStevens gave no evidence for it. So I found it unreasonable, unconvincing, and unpersuasive — the opposite of dictionary definitions of "cogent". Pertinence & relevance are only a subset of cogency.

That's why I wrote that that version of the POC strikes me as watered down; someone being "reasonably sane and intelligent" is totally consistent with their just having made a trivial blunder, and is (in my experience) only weak evidence that they haven't just made a trivial blunder, so "treat commenters as reasonably sane and intelligent" dissolves into "treat commenters pretty much as I'd treat anyone".
0CCC9y
Then "cogent" was probably the wrong word to use. I'd need a word that means pertinent, relevant, and believed to have been most likely true (or at least useful to say) by the person who said it; but not necessarily actually true. Okay, I appear to have been using a different definition (see definition two). I think at this point, so as not to get stuck on semantics, we should probably taboo the word 'cogent'. (Having said that, I do agree anyone with access to the statistics you quoted would most likely find RomeoSteven's comments unreasonable, unconvincing and unpersuasive). Then you may very well be effectively applying the principle already. Looking at your reply to RomeoStevens supports this assertion.
2satt9y
TheAncientGeek assented to that choice of word, so I stuck with it. His conception of the POC might well be different from yours and everyone else's (which is a reason I'm trying to pin down precisely what TheAncientGeek means).

Fair enough, I was checking different dictionaries (and I've hitherto never noticed other people using "cogent" for "pertinent").

Maybe, though I'm confused here by TheAncientGeek saying in one breath that I applied the POC to RomeoStevens, but then agreeing ("Thats exactly what I mean.") in the next breath with a definition of the POC that implies I didn't apply the POC to RomeoStevens.
2CCC9y
I think that you and I are almost entirely in agreement, then. (Not sure about TheAncientGeek). I think you're dealing with double-illusion-of-transparency issues here. He gave you a definition ("treat everyone's comments as though they were made by someone sane and intelligent at the time") by which he meant some very specific concept which he best approximated by that phrase (call this Concept A). You then considered this phrase, and mapped it to a similar-but-not-the-same concept (Concept B) which you defined and tried to point out a shortcoming in ("namely, noticing and throwing away any initial suspicion I have that a comment's wrong, and then forcing myself to pretend the comment must be correct in some obscure way."). Now, TheAncientGeek is looking at your words (describing Concept B) and reading into them the very similar Concept A; where your post in response to RomeoStevens satisfies Concept A but not Concept B. Nailing down the difference between A and B will be extremely tricky and will probably require both of you to describe your concepts in different words several times. (The English language is a remarkably lossy means of communication).
0satt9y
Your diagnosis sounds all too likely. I'd hoped to minimize the risk of this kind of thing by concretizing and focusing on a specific, publicly-observable example, but that might not have helped.
2TheAncientGeek9y
Yes, that was an example of PoC, because satt assumed RomeoStevens had failed to look up the figures, rather than insanely believing that 120,000ish < 500ish.
0TheAncientGeek9y
Yes, but that's beside the original point. What you call a realistic guideline doesn't work as a guideline at all, and therefore isn't a charitable interpretation of the PoC. Justifying the PoC as something that works at what it is supposed to do is a question that can be answered, but it is a separate question.

That's exactly what I mean. Cogent doesn't mean right. You actually succeeded in treating it as wrong for sane reasons, i.e. failure to check data.
0satt9y
You brought it up! I continue to think that the version I called realistic is no less workable than your version.

Again, it's a question you introduced. (And labelled "the question".) But I'm content to put it aside.

But surely it isn't. Just 8 minutes earlier you wrote that a case where I did the opposite was an "example of PoC". See my response to CCC.
0TheAncientGeek9y
But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.

There's a lot of complaint about this heuristic along the lines that it doesn't guarantee perfect results... i.e., it's a heuristic. And now there is the complaint that it's not realistic, that it doesn't reflect reality.

Ideal rationalists can stop reading now. Everybody else: you're biased. Specifically, overconfident. Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects for those. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting. Overshooting would be a problem if there were some goldilocks alternative, some way of getting things exactly right. There isn't. The voice in your head that tells you you are doing just fine is the voice of your bias.
0satt9y
I don't see how this applies any more to the "may or may not have been having an off day" version than it does to your original. They're about as vague as each other.

Understood. But it's not obvious to me that "the principle" is correct, nor is it obvious that a sufficiently strong POC is better than my more usual approach of expressing disagreement and/or asking sceptical questions (if I care enough to respond in the first place).
0TheAncientGeek9y
Mine implies a heuristic of "make repeated attempts at re-interpreting the comment using different background assumptions". What does yours imply?

As I have explained, it provides its own evidence.

Neither of those is much good if you are interpreting someone who died 100 years ago.
0satt9y
I don't see how "treat everyone's comments as though they were made by a sane , intelligent, person" entails that without extra background assumptions. And I expect that once those extra assumptions are spelled out, the "may .or may have been having an off day" version will imply the same action(s) as your original version. Well, when I've disagreed with people in discussions, my own experience has been that behaving according to my baseline impression of how much sense they're making gets me closer to understanding than consciously inflating my impression of how much sense they're making. A fair point, but one of minimal practical import. Almost all of the disagreements which confront me in my life are disagreements with live people.
0Lumifer10y
I don't like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.
3TheAncientGeek10y
How do you know how many mistakes you are or aren't making?
-2TheAncientGeek9y
The PoC is a way of breaking down "understand what the other person says" into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.
5RobinZ10y
The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much of the following lacks citations, but:

* Many people raised to believe that evolution is false because it contradicts their religion change their minds in their first college biology class. (I can't attest to this from personal experience - this is something I've seen frequently reported or alluded to via blogs like Slacktivist.)

* An intelligent, well-meaning, LessWrongian fellow was (hopefully-)almost driven out of my local Less Wrong meetup in no small part because a number of prominent members accused him of (essentially) being a troll. In the course of a few hours' conversation between myself and a couple of others focused on figuring out what he actually meant, I was able to determine that (a) he misunderstood the subject of conversation he had entered, (b) he was unskilled at elaborating in a way that clarified his meaning when confusion occurred, and (c) he was an intelligent, well-meaning, LessWrongian fellow whose participation in future meetups I would value.

* I am unable to provide the details of this particular example (it was relayed to me in confidence), but an acquaintance of mine was a member of a group which was attempting to resolve an elementary technical challenge - roughly the equivalent of setting up a target-shooting range with a safe backstop in terms of training required. A proposal was made that was obviously unsatisfactory - the equivalent of proposing that the targets be laid on the ground and everyone shoot straight down from a second-story window - and my acquaintance's objection to it on common-sense grounds was treated with a response equivalent to, "You're Japanese, what would you know about firearms?" (In point of fact, while no metaphorical gunsmith, my acquaintance
0Lumifer10y
What you are talking about doesn't fall under the principle of charity (in my interpretation of it). It falls under the very general rubric of "don't be stupid yourself". In particular, considering that the speaker expresses his view within a framework which is different from your default framework is not an application of the principle of charity -- it's an application of the principle "don't be stupid, of course people talk within their frameworks, not within your framework".
3RobinZ10y
I might be arguing for something different than your principle of charity. What I am arguing for - and I realize now that I haven't actually explained a procedure, just motivations for one - is along the following lines:

When somebody says something prima facie wrong, there are several possibilities, both regarding their intended meaning:

* They may have meant exactly what you heard.
* They may have meant something else, but worded it poorly.
* They may have been engaging in some rhetorical maneuver or joke.
* They may have been deceiving themselves.
* They may have been intentionally trolling.
* They may have been lying.

...and your ability to infer such:

* Their remark may resemble some reasonable assertion, worded badly.
* Their remark may be explicable as ironic or joking in some sense.
* Their remark may conform to some plausible bias of reasoning.
* Their remark may seem like a lie they would find useful.*
* Their remark may represent an attempt to irritate you for their own pleasure.*
* Their remark may simply be stupid.
* Their remark may allow more than one of the above interpretations.

What my interpretation of the principle of charity suggests as an elementary course of action in this situation is, with an appropriate degree of polite confusion, to ask for clarification or elaboration, and to accompany this request with paraphrases of the most likely interpretations you can identify of their remarks excluding the ones I marked with asterisks. Depending on their actual intent, this has a good chance of making them:

* Elucidate their reasoning behind the unbelievable remark (or admit to being unable to do so);
* Correct their misstatement (or your misinterpretation - the difference is irrelevant);
* Admit to their failed humor;
* Admit to their being unable to support their assertion, back off from it, or sputter incoherently;
* Grow impatient at your failure to rise to their goading and give up; or
* Back off from (or admit to, or
0RobinZ10y
Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel - this is MichaelBishop's summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel's essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.
2RobinZ10y
A small addendum that I realized I omitted from my prior arguments in favor of the principle of charity: because I make a habit of asking for clarification when I don't understand, offering clarification when not understood, and preferring "I don't agree with your assertion" to "you are being stupid", people are happier to talk to me.

Among the costs of always responding to what people say instead of your best understanding of what they mean - especially if you are quick to dismiss people when their statements are flawed - is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made claims you will demand support for without support. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up. Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.
0Lumifer10y
Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity -- it sounds like SOP and "just don't be an idiot yourself" to me.
2RobinZ10y
You're right: it's not self-evident. I'll go ahead and post a followup comment discussing what sort of evidential support the assertion has.

My usage of the terms "prior" and "posterior" was obviously mistaken. What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability. I have high confidence that an inexpensive test - lifting the dice cup - will change my beliefs about the value of the die roll by many orders of magnitude, and low confidence that any comparable test exists to affect my confidence regarding the scientific theory.
4Lumifer10y
I think you are talking about what in local parlance is called a "weak prior" vs a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.

In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior -- it will take much convincing evidence to persuade you that the theory is not correct after all. Of course, the posterior of a previous update becomes the prior of the next update.

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being. And I don't see why this should be so.
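The weak-vs-strong prior distinction can be made concrete with a toy conjugate update. The following is a minimal sketch, assuming a Beta-Bernoulli model and arbitrary example numbers (none of which come from the thread): the same ten observations move a diffuse Beta(1, 1) prior a great deal while barely budging a concentrated Beta(500, 500) prior.

```python
# Sketch: the same evidence moves a weak (diffuse) prior far more than a
# strong (concentrated) one. Beta-Bernoulli model; numbers are illustrative.

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

def update(a, b, successes, failures):
    """Conjugate Bayesian update of a Beta prior on Bernoulli observations."""
    return a + successes, b + failures

evidence = (8, 2)  # e.g. 8 "that looked stupid" observations out of 10

for label, (a, b) in [("weak prior   Beta(1, 1)   ", (1, 1)),
                      ("strong prior Beta(500, 500)", (500, 500))]:
    post_a, post_b = update(a, b, *evidence)
    print(f"{label}: prior mean {beta_mean(a, b):.3f}"
          f" -> posterior mean {beta_mean(post_a, post_b):.3f}")

# Output:
#   weak prior   Beta(1, 1)   : prior mean 0.500 -> posterior mean 0.750
#   strong prior Beta(500, 500): prior mean 0.500 -> posterior mean 0.503
```

On this toy model, the question amounts to asking why the prior on "this person is smart and reasonable" should be treated as the concentrated one rather than the diffuse one.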
4RobinZ10y
Oh, dear - that's not what I meant at all. I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It's entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one - there's someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers' "hard problem of consciousness", and it took less than ten posts to establish pretty confidently that the same refutations would apply - but as the history of DIPS (defense-independent pitching statistics) shows, it's entirely possible for an idea to be as correct as "the earth is a sphere, not a plane" and nevertheless be taken as prima facie absurd. (As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as "fixing DIPS" than as "showing that DIPS was completely wrongheaded".)
2Lumifer10y
Oh, I agree with that. What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence. As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
2RobinZ10y
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is - I am glad to say - so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.

Come to think of it, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating "never assume that someone is arguing in bad faith" with "never assert that someone is arguing in bad faith". (The author also posted a sequel, if you enjoy the first.)

I'm afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
0V_V10y
People tend to update too much in these circumstances: Fundamental attribution error
0Lumifer10y
The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions. If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has -- it's how she does it. And an inability to, e.g., follow basic logic is hard to attribute to external factors.
4TheAncientGeek10y
This discussion has got badly derailed. You are taking it that there is some robust fact about someone's lack of rationality or intelligence which may or may not be explained by internal or external factors. The point is that you cannot make a reliable judgement about someone's rationality or intelligence unless you have understood what they are saying... and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to "stupid" when all attempts have failed, but not before.
0Lumifer10y
I disagree, I don't think this is true.
0[anonymous]10y
I think it's true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean 'belief' in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person's (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if 'generally understanding what someone is telling you' means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn't mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.

I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said 'pass me the hammer'. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don't know what a hammer is or what 'passing' involves. They don't know anything about what's in the room, or who you are, or what you are, or even whether they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that's ever been had by anyone. We can say things like 'they may have thought they were talking about cats or black holes or triangles', but even that assumes vastly more truth and reason in the person than we've assumed we can anticipate.
0Lumifer10y
Generally speaking, understanding what a person means implies reconstructing the framework of meaning and reference that exists in their mind as the context to what they said. Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
0[anonymous]10y
Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I'm not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they're broadly sane, rational, and have mostly true beliefs. I'd be interested to know which of these you're arguing about.

Also, we should probably taboo 'sane' and 'rational'. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.
4Lumifer10y
The answers to your questions are no and no. I don't think so. Two counter-examples:

I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase "Jesus' self-sacrifice washes away the original sin" without accepting that Christianity is "mostly true" or "rational".

Consider a psychotherapist talking to a patient, let's say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
0[anonymous]10y
You're not being imaginative enough: you're thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of stand-out false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can't assume that by 'Jesus' self-sacrifice washes away the original sin' they're talking about anything you know anything about, because you can't assume they are connecting with any theology you've ever heard of. Or even that they're talking about theology. Or even objects or events in any sense you're familiar with.

I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn't be recognizable even as being conscious or aware of their surroundings (because they're not!).
0Lumifer10y
So why are we talking about them, then?
0[anonymous]10y
Well, my point is that as a matter of course, you assume everyone you talk to has mostly true beliefs, and for the most part thinks rationally. We're talking about 'people' with mostly or all false beliefs just to show that we don't have any experience with such creatures. Bigger picture: the principle of charity, that is, the assumption that whoever you are talking to is mostly right and mostly rational, isn't something you ought to hold; it's something you have no choice but to hold. The principle of charity is a precondition for understanding anyone at all, even for recognizing that they have a mind.
0Jiro10y
People will have mostly true beliefs, but they might not have true beliefs in the areas under concern. For obvious reasons, people's irrationality is likely to be disproportionately present in the beliefs on which they disagree with others. So the fact that you need to be charitable in assuming people have mostly true beliefs may not be practically useful--I'm sure a creationist rationally thinks water is wet, but if I'm arguing with him, that subject probably won't come up as much as creationism.
0[anonymous]10y
That's true, but I feel like a classic LW point can be made here: suppose it turns out some people can do magic. That might seem like a big change, but in fact magic will then just be subject to the same empirical investigation as everything else, and ultimately to the same integration into physical theory as everything else. So while I agree with you that when we specify a topic, we can have broader disagreement, that disagreement is built on and made possible by very general agreement about everything else. Beliefs are holistic, not atomic, and we can't partition them off while making any sense of them. We're never just talking about some specific subject matter, but rather emphasizing some subject matter on the background of all our other beliefs (most of which must be true).

The thought, in short, is that beliefs are of a nature to be true, in the way dogs naturally have four legs. Some don't, because something went wrong, but we can only understand the defect in these by having the basic nature of beliefs, namely truth, in the background.
0ChristianKl10y
That could be true but doesn't have to be true. Our ontological assumptions might also turn out to be mistaken. To quote Eliezer:
2[anonymous]10y
True, and a discovery like that might require us to make some pretty fundamental changes. But I don't think Morpheus could be right about the universe's relation to math. No universe, I take it, 'runs' on math in anything but the loosest figurative sense. The universe we live in is subject to mathematical analysis, and what reason could we have for thinking any universe could fail to be so? I can't say for certain, of course, that every possible universe must run on math, but I feel safe in claiming that we've never imagined a universe, in fiction or through something like religion, which would fail to run on math.

More broadly speaking, anything that is going to be knowable at all is going to be rational and subject to rational understanding. Even if someone has some very false beliefs, their beliefs are false, not just jibber-jabber (and if they are just jibber-jabber then you're not talking to a person). Even false beliefs are going to have a rational structure.
0Eugine_Nier10y
That is a fact about you, not a fact about the universe. Nobody could imagine light being both a particle and a wave, for example, until their study of nature forced them to.
1Jiro10y
People could imagine such a thing before studying nature showed they needed to; they just didn't. I think there's a difference between a concept that people merely don't imagine, and a concept that people can't imagine. The latter may mean that the concept is incoherent or has an intrinsic flaw, which the former doesn't.
-1Eugine_Nier10y
In the interest of not having this discussion degenerate into an argument about what "could" means, I would like to point out that your and hen's only evidence that you couldn't imagine a world that doesn't run on math is that you haven't.
2private_messaging10y
For one thing, "math" trivially happens to run on world, and corresponds to what happens when you have a chain of interactions. Specifically to how one chain of physical interactions (apples being eaten for example) combined with another that looks dissimilar (a binary adder) ends up with conclusion that apples were counted correctly, or how the difference in count between the two processes of counting (none) corresponds to another dissimilar process (the reasoning behind binary arithmetic). As long as there's any correspondences at all between different physical processes, you'll be able to kind of imagine that world runs on world arranged differently, and so it would appear that world "runs on math". If we were to discover some new laws of physics that were producing incalculable outcomes, we would just utilize those laws in some sort of computer and co-opt them as part of "math", substituting processes for equivalent processes. That's how we came up with math in the first place. edit: to summarize, I think "the world runs on math" is a really confused way to look at how world relates to the practice of mathematics inside of it. I can perfectly well say that the world doesn't run on math any more than the radio waves are transmitted by mechanical aether made of gears, springs, and weights, and have exact same expectations about everything.
0TheAncientGeek10y
"There is non trivial subset of maths whish describes physical law" might be better way of stating it
1private_messaging10y
It seems to me that as long as there's anything that is describable in the loosest sense, that would be taken to be true. I mean, look at this: some people believe literally that our universe is a "mathematical object", whatever that means (Tegmarkery), and we haven't even got a candidate TOE that works. edit: I think the issue is that Morpheus confuses "made of gears" with "predictable by gears". Time is not made of gears, and neither are astronomical objects, but a clock is very useful nonetheless.
-1TheAncientGeek10y
I don't see why "describable" would necessarily imply "describable mathematically". I can imagine a qualia only universe,and I can imagine the ability describe qualia. As things stand, there are a number of things that can't be described mathematically
0dthunt10y
Example?
-1TheAncientGeek10y
Qualia, the passage of time, symbol grounding..
0[anonymous]10y
Absolutely, it's a fact about me, that's my point. I just also think it's a necessary fact.
0Eugine_Nier10y
What's your evidence for this? Keep in mind that the history of science is full of people asserting that X has to be the case because they couldn't imagine the world being otherwise, only for subsequent discoveries to show that X is not in fact the case.
3A1987dM10y
Name three (as people often say around here).
0Eugine_Nier10y
Well, the most famous (or infamous) is Kant's argument that space must be flat (in the Euclidean sense) because the human mind is incapable of imagining it to be otherwise. Another example was Lucretius's argument against the theory that the earth is round: if the earth were round and things fell towards its center, then in which direction would an object at the center fall? Not to mention the standard argument against the universe having a beginning: "what happened before it?"
2[anonymous]10y
I don't intend to bicker, I think your point is a good one independently of these examples. In any case, I don't think at least the first two of these are examples of the phenomenon you're talking about. I think this comes up in the sequences as an example of the mind-projection fallacy, but that's not right. Kant did not take himself to be saying anything about the world outside the mind when he said that space was flat. He only took himself to be talking about the world as it appears to us. Space, so far as Kant was concerned, was part of the structure of perception, not the universe. So in the Critique of Pure Reason, he says: So Kant is pretty explicit that he's not making a claim about the world, but about the way we perceive it. Kant would very likely poke you in the chest and say "No, you're committing the mind-projection fallacy for thinking that space is even in the world, rather than just a form of perception. And don't tell me about the mind-projection fallacy anyway, I invented that whole move." This also isn't an example, because the idea of a spherical world had in fact been imagined in detail by Plato (with whom Lucretius seems to be arguing), Aristotle, and many of Lucretius' contemporaries and predecessors. Lucretius' point couldn't have been that a round earth is unimaginable, but that it was inconsistent with an analysis of the motions of simple bodies in terms of up and down: you can't say that fire is of a nature to go up if up is entirely relative. Or I suppose, you can say that, but you'd have to come up with a more complicated account of natures.
-4Eugine_Nier10y
And in particular he claimed that this showed it had to be Euclidean because humans couldn't imagine it otherwise. Well, we now know it's not Euclidean and people can imagine it that way (I suppose you could dispute this, but that gets into exactly what we mean by "imagine" and attempting to argue about other people's qualia).
4[anonymous]10y
No, he never says that. Feel free to cite something from Kant's writing, or the SEP or something. I may be wrong, but I just read through the Aesthetic again, and I couldn't find anything that would support your claim. EDIT: I did find one passage that mentions imagination: I've edited my post accordingly, but my point remains the same. Notice that Kant does not mention the flatness of space, nor is it at all obvious that he's inferring anything from our inability to imagine the non-existence of space. END EDIT. You gave Kant's views about space as an example of someone saying 'because we can't imagine it otherwise, the world must be such and such'. Kant never says this. What he says is that the principles of geometry are not derived simply from the analysis of terms, nor are they empirical. Kant is very, very explicit...almost annoyingly repetitive, that he is not talking about the world, but about our perceptive faculties. And if indeed we cannot imagine x, that does seem to me to be a good basis from which to draw some conclusions about our perceptive faculties. I have no idea what Kant would say about whether or not we can imagine non-Euclidian space (I have no idea myself if we can), but the matter is complicated because 'imagination' is a technical term in his philosophy. He thought space was an infinite Euclidian magnitude, but Euclidian geometry was the only game in town at the time. Anyway he's not a good example. As I said before, I don't mean to dispute the point the example was meant to illustrate. I just wanted to point out that this is an incorrect view of Kant's claims about space. It's not really very important what he thought about space though.
0Jiro10y
There's a difference between "can't imagine" in a colloquial sense, and actual inability to imagine. There's also a difference between not being able to think of how something fits into our knowledge about the universe (for instance, not being able to come up with a mechanism or not being able to see how the evidence supports it) and not being able to imagine the thing itself. There also aren't as many examples of this in the history of science as you probably think. Most of the examples that come to people's mind involve scientists versus noscientists.
0Eugine_Nier10y
See my reply to army above.
0[anonymous]10y
Hold on now, you're pattern matching me. I said: To which you replied that this is a fact about me, not the universe. But I explicitly say that it's not a fact about the universe! My evidence for this is the only evidence that could be relevant: my experience with literature, science fiction, talking to people, etc. Nor is it relevant that science is full of people who say that something has to be true because they can't imagine the world otherwise. Again, I'm not making a claim about the world, I'm making a claim about the way we have imagined, or now imagine, the world to be. I would be very happy to be pointed toward a hypothetical universe that isn't subject to mathematical analysis and which contains thinking animals. So before we go on, please tell me what you think I'm claiming? I don't wish to defend any opinions but my own.
0TheAncientGeek10y
Hen, I told you how I imagine such a universe, and you told me I couldn't be imagining it! Maybe you could undertake not to gainsay further hypotheses.
0[anonymous]10y
I found your suggestion to be implausible for two reasons: first, I don't think the idea of epistemically significant qualia is defensible, and second, even on the condition that it is, I don't think the idea of a universe of nothing but a single quale (one having epistemic significance) is defensible. Both of these points would take some time to work out, and it struck me in our last exchange that you had neither the patience nor the good will to do so, at least not with me. But I'd be happy to discuss the matter if you're interested in hearing what I have to say.
0Eugine_Nier10y
You said: I'm not sure what you mean by "necessary", but the most obvious interpretation is that you think it's necessarily impossible for the world to not be run by math or at least for humans to understand a world that doesn't.
0[anonymous]10y
This is my claim, and here's the thought: thinking things are natural, physical objects and they necessarily have some internal complexity. Further, thoughts have some basic complexity: I can't engage in an inference with a single term. Any universe which would not in principle be subject to mathematical analysis is a universe in which there is no quantity of anything. So it can't, for example, involve any space or time, no energy or mass, no plurality of bodies, no forces, nothing like that. It admits of no analysis in terms of propositional logic, so Bayes is right out, as is any understanding of causality. This, it seems to me, would preclude the possibility of thought altogether. It may be that the world we live in is actually like that, and all its multiplicity is merely the contribution of our minds, so I won't venture a claim about the world as such. So far as I know, the fact that worlds admit of mathematical analysis is a fact about thinking things, not worlds.
0Eugine_Nier10y
What do you mean by "complexity"? I realize you have an intuitive idea, but it could very well be that your idea doesn't make sense when applied to whatever the real universe is. Um, that seems like a stretch. Just because some aspects of the universe are subject to mathematical analysis doesn't necessarily mean the whole universe is.
0[anonymous]10y
For my purposes, complexity is: involving (in the broadest sense of that word) more than one (in the broadest sense of that word) thing (in the broadest sense of that word). And remember, I'm not talking about the real universe, but about the universe as it appears to creatures capable of thinking. I think it does, if you're granting me that such a world could be distinguished into parts. It doesn't mean we could have the rich mathematical understanding of laws we do now, but that's a higher bar than I'm talking about.
0ChristianKl10y
You can always "use" analysis the issue is whether it gives you correct answers. It only gives you the correct answer if the universe obeys certain axioms.
0[anonymous]10y
Well, this gets us back to the topic that spawned this whole discussion: I'm not sure we can separate the question 'can we use it' from 'does it give us true results' with something like math. If I'm right that people always have mostly true beliefs, then when we're talking about the more basic ways of thinking (not Aristotelian dynamics, but counting, arithmetic, etc.) the fact that we can use them is very good evidence that they mostly return true results. So if you're right that you can always use, say, arithmetic, then I think we should conclude that a universe is always subject to analysis by arithmetic. You may be totally wrong that you can always use these things, of course. But I think you're probably right and I can't make sense of any suggestion to the contrary that I've heard yet.
0private_messaging10y
One could mathematically describe things not analysable by arithmetic, though...
0[anonymous]10y
Fair point, arithmetic's not a good example of a minimum for mathematical description.
0ChristianKl10y
The idea of rational understanding rests on the fact that you are separated from the object that you are trying to understand and the object itself doesn't change if you change your understanding of it. Then there's the halting problem. There are a variety of problems that are NP. Those problems can't be understood by doing a few experiments and then extrapolating general rules from your experiments. I'm not quite firm with the mathematical terminology, but I think NP problems are not subject to things like calculus that are covered in what Wikipedia describes as mathematical analysis. Heinz von Förster makes the point that children have to be taught that "green" is no valid answer to the question: "What's 2+2?". I personally like his German book titled "Truth is the invention of a liar". Heinz von Förster founded the Biological Computer Laboratory in 1958 and came up with concepts like second-order cybernetics. As far as fictional worlds go, Terry Pratchett's Discworld runs on narrativium instead of math. That's true as long as there are no revelations of truth by Gods or other magical processes. In a universe where you can get the truth through magical tarot reading, that assumption is false.
0[anonymous]10y
That's not obvious to me. Why do you think this? I also don't understand this inference. Why do you think revelations of truth by Gods or other magical processes, or tarot readings, mean that such a universe would a) be knowable, and b) not be subject to rational analysis?
0ChristianKl10y
It might depend a bit on what you mean by rationality. You lose objectivity. Let's say I hypnotize someone. I'm in a deep state of rapport. That means my emotional state matters a great deal. If I label something that the person I'm talking to does as unsuccessful, anxiety rises in myself. That anxiety will screw with the result I want to achieve. I'm better off if I blank my mind instead of engaging in rational analysis of what I'm doing. Logically A -> B is not the same thing as B -> A. I said that it's possible for there to be knowledge that you can only get through a process besides rational analysis if you allow "magic".
0[anonymous]10y
I'm a little lost. So do you think these observations challenge the idea that in order to understand anyone, we need to assume they've got mostly true beliefs, and make mostly rational inferences?
0TheAncientGeek10y
I don't know what you mean by "run on math". Do qualia run in math?
0[anonymous]10y
It's not my phrase, and I don't particularly like it myself. If you're asking whether or not qualia are quanta, then I guess the answer is no, but in the sense that the measured is not the measure. It's a triviality that I can ask you how much pain you feel on a scale of 1-10, and get back a useful answer. I can't get at what the experience of pain itself is with a number or whatever, but then, I can't get at what the reality of a block of wood is with a ruler either.
0TheAncientGeek10y
Then by imagining an all-qualia universe, I can easily imagine a universe that doesn't run on math, for some values of "runs on math".
0[anonymous]10y
I don't think you can imagine, or conceive of, an all qualia universe though.
-4TheAncientGeek10y
You don't get to tell me what I can imagine, though. All I have to do is imagine away the quantitative and structural aspects of my experience.
0[anonymous]10y
I rather think I do. If you told me you could imagine a euclidian triangle with more or less than 180 internal degrees, I would rightly say 'No you can't'. It's simply not true that we can imagine or conceive of anything we can put into (or appear to put into) words. And I don't think it's possible to imagine away things like space and time and keep hold of the idea that you're imagining a universe, or an experience, or anything like that. Time especially, and so long as I have time, I have quantity.
0TheAncientGeek10y
That looks like the typical mind fallacy. I don't know where you are getting your facts from, but it is well known that people's abilities at visualization vary considerably, so where's the "we"? Having studied non-Euclidean geometry, I can easily imagine a triangle whose angles sum to more than 180 (hint: it's inscribed on the surface of a sphere). Saying that non-spatial or non-temporal universes aren't really universes is a No True Scotsman fallacy. Non-spatial and non-temporal models have been seriously proposed by physicists; perhaps you should talk to them.
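For reference, the spherical case can be made exact with Girard's theorem (standard spherical geometry, not a claim made in this thread): on a sphere of radius R, a triangle of area A has angle sum

```latex
\alpha + \beta + \gamma = \pi + \frac{A}{R^2}
```

so, for instance, a triangle with one vertex at the pole and two on the equator a quarter-turn apart has three right angles, summing to 270 degrees.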
0Jiro10y
It depends on what you mean by "imagine". I can't imagine a Euclidian triangle with less than 180 degrees in the sense of having a visual representation in my mind that I could then reproduce on a piece of paper. On the other hand, I can certainly imagine someone holding up a measuring device to a vague figure on a piece of paper and saying "hey, I don't get 180 degrees when I measure this". Of course, you could say that the second one doesn't count since you're not "really" imagining a triangle unless you imagine a visual representation, but if you're going to say that you need to remember that all nontrivial attempts to imagine things don't include as much detail as the real thing. How are you going to define it so that eliminating some details is okay and eliminating other details isn't? (And if you try that, then explain why you can't imagine a triangle whose angles add up to 180.05 degrees or some other amount that is not 180 but is close enough that you wouldn't be able to tell the difference in a mental image. And then ask yourself "can I imagine someone writing a proof that a Euclidian triangle's angles don't add up to 180 degrees?" without denying that you can imagine people writing proofs at all.)
0[anonymous]10y
These are good questions, and I think my general answer is this: in the context of this and similar arguments, being able to imagine something is sometimes taken as evidence that it's at least a logical possibility. I'm fine with that, but it needs to be imagined in enough detail to capture the logical structure of the relevant possibility. If someone is going to argue, for example, that one can imagine a euclidian triangle with more or less than 180 internal degrees, the imagined state of affairs must have at least as much logical detail as does a euclidian triangle with 180 internal degrees. Will that exclude your 'vague shape' example, and probably your 'proof' example?
0Jiro10y
It would exclude the vague shape example but I think it fails for the proof example. Your reasoning suggests that if X is false, it would be impossible for me to imagine someone proving X. I think that is contrary to what most people mean when they say they can imagine something. It's not clear what your reasoning implies when X is true. Either 1) I cannot imagine someone proving X unless I can imagine all the steps in the proof, or 2) I can imagine someone proving X as long as X is true, since having a proof would be a logical possibility as long as X is true. 1) is also contrary to what most people think of as imagining. 2) would mean that it is possible for me to not know whether or not I am imagining something. (I imagine someone proving X and I don't know if X is true. 2) means that if X is true I'm "really imagining" it and that if X is false, I am not.)
0[anonymous]10y
Well, say I argue that it's impossible to write a story about a bat. It seems like it should be unconvincing for you to say 'But I can imagine someone writing a story about a bat...see, I'm imagining Tom, who's just written a story about a bat.' Instead, you'd need to imagine the story itself. I don't intend to talk about the nature of the imagination here, only to say that as a rule, showing that something is logically possible by way of imagining it requires that it have enough logical granularity to answer the challenge. So I don't doubt that you could imagine someone proving that E-triangles have more than 180 internal degrees, but I am saying that not all imaginings are contenders in an argument about logical possibility. Only those ones which have sufficient logical granularity do.
0Jiro10y
I would understand "I can imagine..." in such a context to mean that it doesn't contain flaws that are basic enough to prevent me from coming up with a mental picture or short description. Not that it doesn't contain any flaws at all. It wouldn't make sense to have "I can imagine X" mean "there are no flaws in X"--that would make "I can imagine X" equivalent to just asserting X.
0[anonymous]10y
The issue isn't flaws or flawlessness. In my bat example, you could perfectly well imagine Tom sitting in an easy chair with a glass of scotch saying to himself, 'I'm glad I wrote that story about the bat'. But that wouldn't help. I never said it's impossible for Tom to sit in a chair and say that, I said that it was impossible to write a story about a bat. The issue isn't logical detail simpliciter, but logical detail relative to the purported impossibility. In the triangle case, you have to imagine, not Tom sitting in his chair thinking 'I'm glad I proved that E-triangles have more than 180 internal degrees' (no one could deny that that is possible) but rather the figure itself. It can be otherwise as vague and flawed as you like, so long as the relevant bits are there. Very likely, imagining the proof in the relevant way would require producing it. And you are asserting something: you're asserting the possibility of something in virtue of the fact that it is in some sense actual. To say that something is logically impossible is to say that it can't exist anywhere, ever, not even in a fantasy. To imagine up that possibility is to make it sufficiently real to refute the claim of impossibility, but only if you imagine, and thus make real, the precise thing being claimed to be impossible.
-2TheAncientGeek10y
Are you sure it is logically impossible to have spaceless and timeless universes? Who has put forward the necessity of space and time?
2[anonymous]10y
Dear me no! I have no idea if such a universe is impossible. I'm not even terribly confident that this universe has space or time. I am pretty sure that space and time (or something like them) are a necessary condition on experience, however. Maybe they're just in our heads, but it's nevertheless necessary that they, or something like them, be in our heads. Maybe some other kind of creature thinks in terms of space, time, and fleegle, or just fleegle, time, and blop, or just blop and nizz. But I'm confident that such things will all have some common features, namely being something like a context for a multiplicity. I mean in the way time is a context for seeing this, followed by that, and space is a context for seeing this in that in some relation, etc. Without something like this, it seems to me experience would always (except there's no time) only be of one (except an idea of number would never come up) thing, in which case it wouldn't be rich enough to be an experience. Or experience would be of nothing, but that's the same problem. So there might be universes of nothing but qualia (or, really, quale) but it wouldn't be a universe in which there are any experiencing or thinking things. And if that's so, the whole business is a bit incoherent, since we need an experiencer to have a quale.
-4TheAncientGeek10y
Are you using experience to mean visual experience by any chance? How much spatial information are you getting from hearing? PS your dogmatic Kantianism is now taken as read.
2[anonymous]10y
Tapping out.
0Lumifer10y
That depends on your definition of "math". For example, consider a simulated world where you control the code. Can you make it so that 2+2 in that simulation is sometimes 4, sometimes 15, and sometimes green? I don't see why not.
2RobinZ10y
I think you're conflating the physical operation that we correlate with addition and the mathematical structure. 'Green' I'm not seeing, but I could write a computer program modeling a universe in which placing a pair of stones in a container that previously held a pair of stones does not always lead to that container holding a quadruplet of stones. In such a universe, the mathematical structure we call 'addition' would not be useful, but that doesn't say that the formalized reasoning structure we call 'math' would not exist, or could not be employed. (In fact, if it's a computer program, it is obvious that its nature is susceptible to mathematical analysis.)
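A minimal sketch of the kind of program RobinZ describes (the combination rule and numbers are illustrative assumptions, not anything from the thread):

```python
import random

# Toy "universe": combining stones in a container does not follow ordinary addition.
def combine_stones(a, b, rng):
    """Stones observed after putting a and b stones into the container."""
    return rng.choice([a + b, a + b - 1, a + b + 3])  # the toy world's arbitrary rule

rng = random.Random(0)
print([combine_stones(2, 2, rng) for _ in range(5)])  # rarely a steady 4

# Inside this world, 'addition' is a poor model of stone-combining; yet the program
# itself is ordinary, mathematically analysable code, which is RobinZ's point.
```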
0[anonymous]10y
I guess I could make it appear that way, sure, though I don't know if I could then recognize anything in my simulation as thinking or doing math. But in any case, that's not a universe in which 2+2=green, it's a universe in which it appears to. Maybe I'm just not being imaginative enough, and so you may need to help me flesh out the hypothetical.
0ChristianKl10y
If I write the simulation in Python I can simply define my own function for addition. Unfortunately I don't know how to format the indentation perfectly for this forum.
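Presumably something like the following was meant (a guess at the omitted snippet, not ChristianKl's actual code):

```python
import random

def add(a, b):
    """The simulated world's 'addition': deliberately not the ordinary rule."""
    return random.choice([a + b, 15, "green"])

print(add(2, 2))  # may print 4, 15, or 'green'
```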
0[anonymous]10y
We don't need to go to the trouble of defining anything in Python. We can get the same result just by saying
0ChristianKl10y
If I use Python to simulate a world then it matters how things are defined in Python. It doesn't only appear that 2+2=green; it's that way at the level of the source code on which the world runs.
0[anonymous]10y
But it sounds to me like you're talking about the manipulation of signs, not about numbers themselves. We could make the set of signs '2+2=' end any way we like, but that doesn't mean we're talking about numbers. I donno, I think you're being too cryptic or technical or something for me, I don't really understand the point you're trying to make.
0ChristianKl10y
What do you mean with "the numbers themselves"? Peano axioms? I could imagine that n -> n+1 just doesn't apply.
0shminux10y
Math is what happens when you take your original working predictive toolkit (like counting sheep) and let it run on human wetware disconnected from its original goal of having to predict observables. Thus some form of math would arise in any somewhat-predictable universe evolving a calculational substrate.
0[anonymous]10y
That's an interesting problem. Do we have math because we make abstractions about the multitude of things around us, or must we already have some idea of math in the abstract just to recognize the multitude as a multitude? But I think I agree with the gist of what you're saying.
2shminux10y
Just like I think of language as meta-grunting, I think of math as meta-counting. Some animals can count, and possibly add and subtract a bit, but abstracting it away from the application for the fun of it is what humans do.
0TheAncientGeek10y
Is "containing mathematical truth" the same as "running on math"?
-2TheAncientGeek10y
Mixing truth and rationality is a failure mode. To know whether someone's statement is true, you have to understand it, and to understand it, you have to assume the speaker's rationality. It's also a failure mode to attach "irrational" directly to beliefs. A belief is rational if it can be supported by an argument, and you don't carry the space of all possible arguments round in your head.
2Lumifer10y
That's an... interesting definition of "rational".
4Jayson_Virissimo10y
Puts on Principle of Charity hat... Maybe TheAncientGeek means: (1) a belief is rational if it can be supported by a sound argument; (2) a belief is rational if it can be supported by a valid argument with probable premises; (3) a belief is rational if it can be supported by an inductively strong argument with plausible premises; (4) a belief is rational if it can be supported by an argument that is better than any counterarguments the agent knows of; etc. Although personally, I think it is more helpful to think of rationality as having to do with how beliefs cohere with other beliefs and about how beliefs change when new information comes in than about any particular belief taken in isolation.
2Lumifer10y
I can't but note that the world "reality" is conspicuously absent here...
2TheAncientGeek10y
That there is empirical evidence for something is a good argument for it.
0Jayson_Virissimo10y
Arguments of type (1) necessarily track reality (it is pretty much defined this way), (2) may or may not depending on the quality of the premises, (3) often does, and sometimes you just can't do any better than (4) with available information and corrupted hardware. Just because I didn't use the word "reality" doesn't really mean much.
-2Eugine_Nier10y
A definition of "rational argument" that explicitly referred to "reality" would be a lot less useful, since checking which arguments are rational is one of the steps in figuring what' real.
0Lumifer10y
I am not sure this is (necessarily) the case, can you unroll? Generally speaking, arguments live in the map and, in particular, in high-level maps which involve abstract concepts and reasoning. If I check the reality of the stone by kicking it and seeing if my toe hurts, no arguments are involved. And from the other side, classical logic is very much part of "rational arguments" and yet needs not correspond to reality.
0Eugine_Nier10y
That tends to work less well for things that one can't directly observe, e.g., how old is the universe, or things where there is confounding noise, e.g., does this drug help.
0Lumifer10y
That was a counterexample, not a general theory of cognition...
0TheAncientGeek10y
There isn't a finite list of rational beliefs, because someone could think of an argument for a belief that you haven't thought of. There isn't a finite list of correct arguments either. People can invent new ones.
-2TheAncientGeek10y
Well, it's not too compatible with self-congratulatory "rationality".
0RobinZ10y
I believe this disagreement is testable by experiment.
0Lumifer10y
Do elaborate.
0RobinZ10y
If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry. If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek's thesis would fail the test.
2TheAncientGeek10y
You will also get less feedback along the lines of "you just don't get it".
0RobinZ10y
True, although being told less often that you are missing the point isn't, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines. (Note that I say "less often"; I was recently told that this criticism of Tom Godwin's "The Cold Equations", which I had invoked in a discussion of "The Ones Who Walk Away From Omelas", missed the point of the story - to which I replied along the lines of, "I get the point, but I don't agree with it.")
2Lumifer10y
That looks like a test of my personal ability to form correct first-impression estimates. Also "will prove to be sound upon further inquiry" is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X "sound"? X-/
2RobinZ10y
Precisely. Ah, I see. "Sound" is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful - that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described. A concrete example would be someone who said, "you can divide by zero here" in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
-2TheAncientGeek10y
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.
-2TheAncientGeek10y
I do not see what you are describing as being the standard PoC at all. May I suggest you call it something else.
0RobinZ10y
How does the thing I am vaguely waving my arms at differ from the "standard PoC"?
2TheAncientGeek10y
Depends. Have you tried charitable interpretations of what they are saying that don't make them stupid, or are you going with your initial reaction?
-2Lumifer10y
I'm thinking that charity should not influence epistemology. Adjusting your map for charitable reasons seems like the wrong thing to do.
-2TheAncientGeek10y
I think you need to read up on the Principle of Charity and realise that it's about accurate communication, not some vague notion of niceness.
2Lumifer10y
That's what my question upthread was about -- is the principle of charity as discussed in this thread a matter of my belief (=map) or is it only about communication?
2TheAncientGeek10y
It's both. You need charity to communicate accurately, and also to form accurate beliefs. The fact that people you haven't been charitable towards seem stupid to you is not reliable data.
-2TheAncientGeek10y
Research and discover.
2Vaniver10y
How else would you interpret this series of clarifying questions?
0TheAncientGeek10y
I can tell someone the answer, but they might not believe me. They might be better off researching it from reliable sources than trying to figure it out from yet another stupid internet argument.
-2TheAncientGeek10y
If you haven't attempted to falsify your belief by being charitable, then you should stop believing it. It's bad data.
[-][anonymous]10y140

Ok, there's no way to say this without sounding like I'm signalling something, but here goes.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

"If you can't say something you are very confident is actually smart, don't say anything at all." This is, in fact, why I don't say very much, or say it in a lot of detail, much of the time. I have all kinds of thoughts about all kinds of things, but I've had to retract sincerely-held beliefs so many times I just no longer bother embarrassing myself by opening my big dumb mouth.

Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say the

... (read more)

there seem to be a lot of people in the LessWrong community who imagine themselves to be (...) paragons of rationality who other people should accept as such.

Uhm. My first reaction is to ask "who specifically?", because I don't have this impression. (At least I think most people here aren't like this, and if a few happen to be, I probably did not notice the relevant comments.) On the other hand, if I imagine myself at your place, even if I had specific people in mind, I probably wouldn't want to name them, to avoid making it a personal accusation instead of an observation of trends. Now I don't know what to do.

Perhaps could someone else give me a few examples of comments (preferably by different people) where LW members imagine themselves paragons of rationality and ask other people to accept them as such? (If I happen to be such an example myself, that information would be even more valuable to me. Feel free to send me a private message if you hesitate to write it publicly, but I don't mind if you do. Crocker's rules, Litany of Tarski, etc.)

I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about

... (read more)

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable reading is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Getting principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways; namely, it has an interpretation which is obviously false, and another interpretation which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the wrong thing is not the right ans... (read more)

2JoshuaZ10y
There are I think two other related aspects that are relevant. First, there's some tendency to interpret what other people say in a highly non-charitable or anti-charitable fashion when one already disagrees with them about something. So a principle of charity helps to counteract that. Second, even when one is using a non-silent charity principle, it can, if one is not careful, come across as condescending, so it is important to phrase it in a way that minimizes those issues.

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary.

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

You are on the programme committee of a forthcoming conference, which is meeting to decide which of the submitted papers to accept. Each paper has been refereed by several people, each of whom has given a summary opinion (definite accept, weak accept, weak reject, or definite reject) and supporting evidence for the opinion.

To transact business most efficiently, some papers are judged solely on the summary opinions. Every paper rated a definite accept by every referee for that paper is accepted without further discussion, because if three independent experts all think ... (read more)

0gwern10y
Verbal abuse is not a productive response to the results of an abstract model. Extended imaginary scenarios are not a productive response either. Neither explains why the proofs are wrong or inapplicable, or if inapplicable, why they do not serve useful intellectual purposes such as proving some other claim by contradiction or serving as an ideal to aspire to. Please try to do better.
7Richard_Kennaway10y
As I said, the scenario is not imaginary. I might have done so, had you not inserted that condescending parting shot.
0ChristianKl10y
Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction. Thinking that Robin Hanson or someone else on Overcoming Bias hasn't thought of that argument is naive. Robin Hanson might sometimes make arguments that are wrong, but he's not stupid. If you are treating him as if he were, then you are likely arguing against a strawman. Apart from that, your example also has strange properties, like only four different kinds of judgements that reviewers are allowed to make. Why would anyone choose four?
6Richard_Kennaway10y
It is a lot more than "sometimes". In my experience (mainly in computing) no journal editor or conference chair will accept a referee's report that provides nothing but an overall rating of the paper. The rubric for the referees often explicitly states that. Where ratings of the same paper differ substantially among referees, the reasons for those differing judgements are examined. The routine varies but that one is typical. A four-point scale (sometimes with a fifth not on the same dimension: "not relevant to this conference", which trumps the scalar rating). Sometimes they ask for different aspects to be rated separately (originality, significance, presentation, etc.). Plus, of course, the rationale for the verdict, without which the verdict will not be considered and someone else will be found to referee the paper properly. Anyone is of course welcome to argue that they're all doing it wrong, or to found a journal where publication is decided by simple voting rounds without discussion. However, Aumann's theorem is not that argument, it's not the optimal version of Delphi (according to the paper that gwern quoted), and I'm not aware of any such journal. Maybe Plos ONE? I'm not familiar with their process, but their criteria for inclusion are non-standard.
-4ChristianKl10y
That just tells us that the journals believe that the rating isn't the only thing that matters. But most journals just do things that make sense to them. They don't draft their policies based on findings of decision science.
4Richard_Kennaway10y
Those findings being? Aumann's theorem doesn't go the distance. Anyway, I have no knowledge of how they draft their policies, merely some of what those policies are. Do you have some information to share here?
-4ChristianKl10y
For example, that Likert scales are nice if you want someone to give you their opinion. Of course it might make sense to actually run experiments. Big publishers do rule over thousands of journals, so it should be easy for them to do the necessary research if they wanted to do so.
-7gwern10y
2Richard_Kennaway10y
That was excessive, and I now regret having said it.
0ChristianKl10y
I think the most straightforward way is to do a second round. Let every referee read the opinions of the other referees and see whether they converge onto a shared judgement. If you want a more formal name, it's the Delphi method.
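A minimal sketch of what such rounds could look like with purely numeric feedback (the scale, pull factor, and update rule are illustrative assumptions, not a standard procedure):

```python
import statistics

def delphi_rounds(ratings, rounds=2, pull=0.5):
    """ratings: initial referee scores, e.g. 1=definite reject .. 4=definite accept.
    Each round, every referee sees the group mean and moves partway toward it."""
    current = list(ratings)
    for _ in range(rounds):
        group_mean = statistics.mean(current)
        current = [r + pull * (group_mean - r) for r in current]
    return current

print(delphi_rounds([4, 4, 4, 1]))  # all four estimates drift toward the initial mean of 3.25
```

Whether convergence produced this way tracks the right answer is exactly what Richard_Kennaway questions below.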
5Richard_Kennaway10y
What actually happens is that the reasons for the summary judgements are examined. Three for, one against. Is the dissenter the only one who has not understood the paper, or the only one who knows that although the work is good, almost the same paper has just been accepted to another conference? The set of summary judgements is the same but the right final judgement is different. Therefore there is no way to get the latter from the former. Aumann agreement requires common knowledge of each others' priors. When does this ever obtain? I believe Robin Hanson's argument about pre-priors just stands the turtle on top of another turtle.
4TheAncientGeek10y
People don't coincide in their priors, don't have access to the same evidence and aren't running off the same epistemology, and can't settle epistemological debates non-circularly... There's a lot wrong with Aumann, or at least the way some people use it.
-4gwern10y
Really? My understanding was that (From Rowe & Wright's "Expert opinions in forecasting: the role of the Delphi technique", in the usual Armstrong anthology.) From the sound of it, the feedback is often purely statistical in nature, and if it wasn't commonly such restricted feedback, it's hard to see why Rowe & Wright would criticize Delphi studies for this:
6Richard_Kennaway10y
I was referring to what actually happens in a programme committee meeting, not the Delphi method.
0gwern10y
Fine. Then consider it an example of 'loony' behavior in the real world: Delphi pools, as a matter of fact, for many decades, have operated by exchanging probabilities and updating repeatedly, and in a number of cases performed well (justifying their continued usage). You don't like Delphi pools? That's cool too, I'll just switch my example to prediction markets.
5Richard_Kennaway10y
It would be interesting to conduct an experiment to compare the two methods for this problem. However, it is not clear how to obtain a ground truth with which to judge the correctness of the results. BTW, my further elaboration, with the example of one referee knowing that the paper under discussion was already published, was also non-fictional. It is not clear to me how any decision method that does not allow for sharing of evidence can yield the right answer for this example. What have Delphi methods been found to perform well relative to, and for what sorts of problems?
-2ChristianKl10y
That assumes we don't have any criteria on which to judge good versus bad scientific papers. You could train your model to predict the amount of citations that a paper will get. You can also look at variables such as reproduced papers or withdrawn papers. Define a utility function that collapses such variables into a single one. Run a real world experiment in a journal and do 50% of the paper submissions with one mechanism and 50% with the other. Let a few years go by and then you evaluate the techniques based on your utility function.
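A minimal sketch of the kind of utility function and comparison ChristianKl gestures at (the variable names and weights are made-up assumptions):

```python
def paper_utility(citations, reproduced, withdrawn,
                  w_cite=1.0, w_repro=5.0, w_withdrawn=-20.0):
    """Collapse several quality signals for one accepted paper into a single score."""
    return w_cite * citations + w_repro * reproduced + w_withdrawn * withdrawn

def mechanism_score(accepted_papers):
    """Average utility of the papers a given review mechanism accepted."""
    return sum(paper_utility(**p) for p in accepted_papers) / len(accepted_papers)

print(mechanism_score([{"citations": 30, "reproduced": 1, "withdrawn": 0},
                       {"citations": 5,  "reproduced": 0, "withdrawn": 1}]))
```

After the 50/50 split has run for a few years, the two mechanisms would be compared on this score, subject to the confounding caveats Richard_Kennaway raises below.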
2Richard_Kennaway10y
Something along those lines might be done, but an interventional experiment (creating journals just to test a hypothesis about refereeing) would be impractical. That leaves observational data-collecting, where one might compare the differing practices of existing journals. But the confounding problems would be substantial. Or, more promisingly, you could do an experiment with papers that are already published and have a citation record, and have experimental groups of referees assess them, and test different methods of resolving disagreements. That might actually be worth doing, although it has the flaw that it would only be assessing accepted papers and not the full range of submissions.
0ChristianKl10y
Then there's no reason why you can't test different procedures in an existing journal.
-7gwern10y

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

I'll go ahead and disagree with this. Sure, there's a lot of smart people who aren't rational, but then I would say that rationality is less common than intelligence. On the other hand, all the rational people I've met are very smart. So it seems really high intellig... (read more)

8Richard_Kennaway10y
The teaching calls to what is within the pupil. To borrow a thought from Georg Christoph Lichtenberg, if an ass looks into LessWrong, it will not see a sage looking back. I have a number of books of mathematics on my shelves. In principle, I could work out what is in them, but in practice, to do so I would have to be of the calibre of a multiple Fields and Nobel medallist, and exercise that ability for multiple lifetimes. Yet I can profitably read them, understand them, and use that knowledge; but that does still require at least a certain level of ability and previous learning. Or to put that another way, learning is in P, figuring out by yourself is in NP.
5Sophronius10y
Agreed. I'm currently under the impression that most people cannot become rationalists even with training, but training those who do have the potential increases the chance that they will succeed. Still I think rationality cannot be taught like you might teach a university degree: A large part of it is inspiration, curiosity, hard work and wanting to become stronger. And it has to click. Just sitting in the classroom and listening to the lecturer is not enough. Actually now that I think about it, just sitting in the classroom and listening to the lecturer for my economics degree wasn't nearly enough to gain a proper understanding either, yet that's all that most people did (aside from a cursory reading of the books of course). So maybe the problem is not limited to rationality but more about becoming really proficient at something in general.
3dthunt10y
Reading something and understanding/implementing it are not quite the same thing. It takes clock time and real effort to change your behavior. I do not think it is unexpected that a large portion of the population on a site dedicated to writing, teaching, and discussing the skills of rationality is going to be, you know, still very early in the learning, and that some people will have failed to grasp a lesson they think they have grasped, and that others will think others have failed to grasp a lesson that they have failed to grasp, and that you will have people who just like to watch stuff burn. I'm sure it's been asked elsewhere, and I liked the estimation questions on the 2013 survey; has there been a more concerted effort to see what being an experienced LWer translates to, in terms of performance on various tasks that, in theory, people using this site are trying to get better at?
1Sophronius10y
Yes, you hit the nail on the head. Rationality takes hard work and lots of practice, and too often people on Less Wrong just spend time making clever arguments instead of doing the actual work of asking what the actual answer is to the actual question. It makes me wonder whether Less Wrongers care more about being seen as clever than they care about being rational. As far as I know there's been no attempt to make a rationality/Bayesian reasoning test, which I think is a great pity because I definitely think that something like that could help with the above problem.
0dthunt10y
There are many calibration tests you can take (there are many articles on this site with links to see if you are over-or-underconfident on various subject tests - search for calibration). What I don't know is if there has been some effort to do this across many questions, and compile the results anonymously for LWers. I caution against jumping quickly to conclusions about "signalling". Frankly, I suspect you are wrong, and that most of the people here are in fact trying. Some might not be, and are merely looking for sparring matches. Those people are still learning things (albeit perhaps with less efficiency). As far as "seeming clever", perhaps as a community it makes sense to advocate people take reasoning tests which do not strongly correlate with IQ, and that people generally do quite poorly on (I'm sure someone has a list, though it may be a relatively short list of tasks), which might have the effect of helping people to see stupid as part of the human condition, and not merely a feature of "non-high-IQ" humans.
0Sophronius10y
Fair enough, that was a bit too cynical/negative. I agree that people here are trying to be rational, but you have to remember that signalling does not need to be on purpose. I definitely detect a strong impulse amongst the less wrong crowd to veer towards controversial and absurd topics rather than the practical and to make use of meta level thinking and complex abstract arguments instead of simple and solid reasoning. It may not feel that way from the inside, but from the outside point of view it does kind of look like Less Wrong is optimizing for being clever and controversial rather than rational. I definitely say yes to (bayesian) reasoning tests. Someone who is not me needs to go do this right now.
2dthunt10y
I don't know that there is anything to do, or that should be done, about that outside-view problem. Understanding why people think you're being elitist or crazy doesn't necessarily help you avoid the label. http://lesswrong.com/lw/kg/expecting_short_inferential_distances/
2Sophronius10y
Huh? If the outside view tells you that there's something wrong, then the problem is not with the outside view but with the thing itself. It has nothing to do with labels or inferential distance. The outside-view is a rationalist technique used for viewing a matter you're personally involved in objectively by taking a step back. I'm saying that when you take a step back and look at things objectively, it looks like Less Wrong spends more time and effort on being clever than on being rational. But now that you've brought it up, I'd also like to add that the habit on Less Wrong to assume that any criticism or disagreement must be because of inferential distance (really just a euphemism for saying the other guy is clueless) is an extremely bad one.
2Nornagest10y
The outside view isn't magic. Finding the right reference class to step back into, in particular, can be tricky, and the experiments the technique is drawn from deal almost exclusively with time forecasting; it's hard to say how well it generalizes outside that domain. Don't take this as quoting scripture, but this has been discussed before, in some detail.

Okay, you're doing precisely the thing I hate and which I am criticizing about Less Wrong. Allow me to illustrate:

LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn't rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That's a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well unless I actually made a mistake in applying the outside view I don't see why that's relevant? And if I did make a mistake in applying it it would be more helpful to say what it was I specifically did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yea but that post has theoretical limitations.
LW1: I don't care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were a... (read more)

1Nornagest10y
Dude, my post was precisely about how you're making a mistake in applying the outside view. Was I being too vague, too referential? Okay, here's the long version, stripped of jargon because I'm cool like that. The point of the planning fallacy experiments is that we're bad at estimating the time we're going to spend on stuff, mainly because we tend to ignore time sinks that aren't explicitly part of our model. My boss asks me how long I'm going to spend on a task: I can either look at all the subtasks involved and add up the time they'll take (the inside view), or I can look at similar tasks I've done in the past and report how long they took me (the outside view). The latter is going to be larger, and it's usually going to be more accurate. That's a pretty powerful practical rationality technique, but its domain is limited. We have no idea how far it generalizes, because no one (as far as I know) has rigorously tried to generalize it to things that don't have to do with time estimation. Using the outside view in its LW-jargon sense, to describe any old thing, therefore is almost completely meaningless; it's equivalent to saying "this looks to me like a $SCENARIO1". As long as there also exists a $SCENARIO2, invoking the outside view gives us no way to distinguish between them. Underfitting is a problem. Overfitting is also a problem. Which one's going to be more of a problem in a particular reference class? There are ways of figuring that out, like Yvain's centrality heuristic, but crying "outside view" is not one of them. As to whether LW is rational, I got bored of that kind of hand-wringing years ago. If all you're really looking for is an up/down vote on that, I suggest a poll, which I will probably ignore because it's a boring question.
0Sophronius10y
Ok, I guess I could have inferred your meaning from your original post, so sorry if my reply was too snarky. But seriously, if that's your point I would have just made it like this: "Dude you're only supposed to use the phrase 'outside view' with regards to the planning fallacy, because we don't know if the technique generalizes well." And then I'd go back and change "take a step back and look at it from the outside view" into "take a step back and look at it from an objective point of view" to prevent confusion, and upvote you for taking the time to correct my usage of the phrase.
0dthunt10y
My guess is that the site is "probably helping people who are trying to improve", because I would expect some of the materials here to help. I have certainly found a number of materials useful. But a personal judgement of "probably helping" isn't the kind of thing you'd want. It'd be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.
-2TheAncientGeek10y
LW8...rationality is more than one thing
0dthunt10y
My apologies, I thought you were referring to how people who do not use this site perceive people using the site, which seemed more likely to be what you were trying to communicate than the alternative. Yes, the site viewed as a machine does not look like a well-designed rational-people-factory to me, either, unless I've missed the part where it's comparing its output to its input to see how it is performing. People do, however, note cognitive biases and what efforts to work against them have produced, from time to time, and there are other signs that seem consistent with a well-intentioned rational-people-factory. And, no, not every criticism does. I can only speak for myself, and acknowledge that I have a number of times in the past failed to understand what someone was saying and assumed they were being dumb or somewhat crazy as a result. I sincerely doubt that's a unique experience.
0dthunt10y
I'm now reading http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/ and other articles, because they are pertinent, and I want to know what sorts of work have been done to figure out how LW is perceived and why.
3David_Gerard10y
Surely you know people of average intelligence who consistently show "common sense" (so rare it's pretty much a superpower). They may not be super-smart, but they're sure as heck not dumb.
2Sophronius10y
Common sense does seem like a superpower sometimes, but that's not a real explanation. I think that what we call common sense is mostly just the result of clear thinking and having a distaste for nonsense. If you favour reality over fancies, you are more likely to pay attention to reality --> better mental habits --> stronger intuition = common sense. But to answer your question, yes I do know people like that and I do respect them for it (though they still have above-average intelligence, mostly). However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing which experts to listen to.
4David_Gerard10y
Yeah, but I'd say that about the smart people too. Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."
6XiXiDu10y
The AI box experiment seems to support this. People who have been persuaded that it would be irrational to let an unfriendly AI out of the box are being persuaded to let it out of the box. The ability of smarter or more knowledgeable people to convince less intelligent or less educated people of falsehoods (e.g. parents and children) shows that we need to put less weight on arguments and more weight on falsifiability.
3Sophronius10y
I wouldn't use the AI box experiment as an example of anything, because it is specifically designed to be a black box: it's exciting precisely because the outcome confuses the heck out of people. I'm having trouble parsing this in Bayesian terms, but I think you're committing a rationalist sin by using an event that your model of reality couldn't predict in advance as evidence that your model of reality is correct. I strongly agree that we need to put less weight on arguments, but I think falsifiability is impractical in everyday situations.
4Sophronius10y
S1) Most smart people aren't rational, but most rational people are smart.
D1) There are people of average intelligence with common sense.
S2) Yes, they have good intuition, but you cannot trust them with counter-intuitive subjects (people with average intelligence are not rational).
D2) You can't trust smart people with counter-intuitive subjects either (smart people aren't rational).

D2) does not contradict S1) because "most smart people aren't rational" isn't the same as "most rational people aren't smart", which is of course the main point of S1).

Interesting article; it confirms my personal experiences in corporations. However, I think the real problem is deeper than smart people being able to rationalize anything. The real problem is that overconfidence and rationalizing your actions make becoming a powerful decision-maker easier. The mistakes they make due to irrationality don't catch up with them until after the damage is done, and then the next overconfident guy gets selected.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy.

Not if you want to be accepted by that group. Being bad at signaling games can be crippling - as much as intellectual signaling poisons discourse, it's also the glue that holds a community together enough to make discourse possible.

Example: how likely you are to get away with making a post or comment on signaling games is primarily dependent on how good you are at signaling games, especially how good you are at the "make the signal appear to plausibly be something other than a signal" part of signaling games.

1ChrisHallquist10y
You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.
4ialdabaoth10y
Borrowing from the "Guess vs. Tell (vs. Ask)" meta-discussion, then, perhaps it would be useful for the community to have an explicit discussion about what kinds of signals we want to converge on? It seems that people with a reasonable understanding of game theory and evolutionary psychology would stand a better chance of deliberately engineering our group's social signals than of simply trusting our subconsciouses to evolve the most accurate and honest possible set.
1ChrisHallquist10y
The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.
[-][anonymous]10y70

But a humble attempt at rationalism is so much less funny...

More seriously, I could hardly agree more with the statement that intelligence has remarkably little to do with susceptibility to irrational ideas. And as much as I occasionally berate others for falling into absurd patterns, I realize that it pretty much has to be true that somewhere in my head is something just as utterly inane that I will likely never be able to see, and it scares me. As such, sometimes I think dissensus is not only good, but necessary.

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid.

As far as I understand, the Principle of Charity is defined differently; it states that you should interpret other people's arguments on the assumption that these people are arguing in good faith. That is to say, you should assume that your interlocutor honestly believes in everything he's saying, and that he has no ulterior motive beyou... (read more)

6fubarobfusco10y
Wikipedia quotes a few philosophers on the principle of charity: Blackburn: "it constrains the interpreter to maximize the truth or rationality in the subject's sayings." Davidson: "We make maximum sense of the words and thoughts of others when we interpret in a way that optimises agreement." Also, Dennett in The Intentional Stance quotes Quine that "assertions startlingly false on the face of them are likely to turn on hidden differences of language", which seems to be a related point.
0AshwinV10y
Interesting point of distinction. Irrespective of how you define the principle of charity (i.e. motivation-based or intelligence-based), I do believe that the principle on the whole should not become a universal guideline, and it is important to distinguish it, a sort of "principle of differential charity". This is obviously similar to basic real-world things (e.g. expertise when it comes to the intelligent-charity issue and/or political/official positioning when it comes to the motivation issue). I also realise that being differentially charitable may come with the risk of becoming even more biased, if your priors themselves are based on extremely biased findings. However, I would think that by and large it works well, and is a great time saver when deciding how much effort to put into evaluating claims and statements alike.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

I agree with your assessment. My suspicion is that this is due to... (read more)

Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

Not necessarily. It might be saying "You have an interesting viewpoint; let me see what basis it has, so that I may properly integrate this evidence into my worldview and correctly update my viewpoint"

Particularly problematic is this self-congratulatory process:

some simple mistake leads to a non-mainstream conclusion -> the world is insane and I'm so much more rational than everyone else -> endorphins released -> circuitry involved in mistake-making gets reinforced.

For example: IQ is the best predictor of job performance, right? So the world is insane that it mostly hires based on experience, test questions, and so on (depending on the field) rather than IQ, right? Cue the endorphins and reinforcement of careless thinking.

If you're not after... (read more)

4TheAncientGeek10y
These things can be hard to budge... they certainly look it... perhaps because the "I'm special" delusion and the "world is crazy" delusion need to fall at the same time.
2private_messaging10y
Plus in many cases all that had been getting strengthened via reinforcement learning for decades. It's also ridiculous how easy it is to be special in that imaginary world. Say I want to hire candidates really well - better than the competition. I need to figure out the right mix of interview questions and prior experience and so on. I probably need to make my own tests. It's hard! It's harder still if I want to know whether my methods work! But in that crazy world, there's a test readily available, widely known, and widely used, and nobody's using it for that, because they're so irrational. And you can know you're special just by going "yeah, it sounds about right". Like coming across 2x+2y=? and going on to speculate about the stupid reasons why someone would be unable to apply 2+2=4 and 2*2=4 and conclude it's 4xy.

The problem with this is that other people are often saying something stupid. Because of that, I think being charitable is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Well perhaps you should adopt a charitable interpretation of the principle of charity :) It occurs to me that the phrase itself might not be ideal since "charity" implies that you are giving something which the recipient does... (read more)

Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:

I wonder how much effect people's interactions with other aspiring rationalists in real life have on this problem. Specifically, I think people who have become/are used to being significantly better at forming true beliefs than everyone around them will tend to discount other people's opinions more.

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality. Even Nietzsche acknowledged that it was the rationality of Christianity that led to its intellectual demise (as he saw it), as people relentlessly applied rationality tools to Christianity.

My own model of how rational we are is more in line with Ed Seykota's (http://www.seykota.com/tribe/TT_Process/index.htm) than the typical geek model that we are basically rational with a few "biases" added on... (read more)

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality.

To the contrary, lots of groups make a big point of being anti-rational. Many groups (religious, new-age, political, etc.) align themselves in anti-scientific or anti-evidential ways. Most Christians, to take one example, assign supreme importance to (blind) faith that triumphs over evidence.

But more generally, humans are a-rational by default. Few individuals or groups are willing to question their most cherished beliefs, to explicitly provide reasons for beliefs, or to update on new evidence. Epistemic rationality is not the human default and needs to be deliberately researched, taught and trained.

And people, in general, don't think of themselves as being rational because they don't have a well-defined, salient concept of rationality. They think of themselves as being right.

3brazil8410y
Here's a hypothetical for you: Suppose you were to ask a Christian "Do you think the evidence goes more for or more against your belief in Christ?" How do you think a typical Christian would respond? I think most Christians would respond that the evidence goes more in favor of their beliefs.
0DanArmak10y
I think the word "evidence" is associated with being pro-science and therefore, in most people's heads, anti-religion. So many Christians would respond by e.g. asking to define "evidence" more narrowly before they committed to an answer. Also, the evidence claimed in favor of Christianity is mostly associated with the more fundamentalist interpretations; e.g. young-earthers who obsess with clearly false evidence vs. Catholics who accept evolution and merely claim a non-falsifiable Godly guidance. And there are fewer fundamentalists than there are 'moderates'. However, suppose a Christian responded that the evidence is in the favor of Christianity. And then I would ask them: if the evidence was different and was in fact strongly against Christianity - if new evidence was found or existing evidence disproved - would you change your opinion and stop being a Christian? Would you want to change your opinion to match whatever the evidence turned out to be? And I think most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence.
5brazil8410y
I doubt it. That may be how their brains work, but I doubt they would admit that they would cling to beliefs against the evidence. More likely they would insist that such a situation could never happen; that the contrary evidence must be fraudulent in some way. I actually did ask the questions on a Christian bulletin board this afternoon. The first few responses have been pretty close to my expectations; we will see how things develop.
0DanArmak10y
That is exactly why I would label them not identifying as "rational". A rational person follows the evidence, he does not deny it. (Of course there are meta-rules, preponderance of evidence, independence of evidence, etc.) Upvoted for empirical testing, please followup! However, I do note that 'answers to a provocative question on a bulletin board, without the usual safety guards of scientific studies' won't be very strong evidence about 'actual beliefs and/or behavior of people in hypothetical future situations'.
2brazil8410y
That's not necessarily true and I can illustrate it with an example from the other side. A devout atheist once told me that even if The Almighty Creator appeared to him personally; performed miracles; etc., he would still remain an atheist on the assumption that he was hallucinating. One can ask if such a person thinks of himself as anti-rational given his pre-announcement that he would reject evidence that disproves his beliefs. Seems to me the answer is pretty clearly "no" since he is still going out of his way to make sure that his beliefs are in line with his assessment of the evidence. Well I agree it's just an informal survey. But I do think it's pretty revealing given the question on the table: Do Christians make a big point of being anti-rational? Here's the thread: http://www.reddit.com/r/TrueChristian/comments/1zd9t1/does_the_evidence_support_your_beliefs/ Of 4 or 5 responses, I would say that there is 1 where the poster sees himself as irrational. Anyway, the original claim which sparked this discussion is that everyone thinks he is rational. Perhaps a better way to put it is that it's pretty unusual for anyone to think his beliefs are irrational.
1DanArmak10y
And I wouldn't call that person rational, either. He may want to be rational, and just be wrong about the how. I think the relevant (psychological and behavioral) difference here is between not being rational, i.e. not always following where rationality might lead you or denying a few specific conclusions, and being anti-rational, which I would describe as seeing rationality as an explicit enemy and therefore being against all things rational by association. ETA: retracted. Some Christians are merely not rational, but some groups are explicitly anti-rational: they attack rationality, science, and evidence-based reasoning by association, even when they don't disagree with the actual evidence or conclusions.

The Reddit thread is interesting. Five isn't a big sample, and we got examples of basically all points of view. My prediction was that most Christians would say they'd rather keep their faith despite contrary evidence. By my count, of those Reddit respondents who explicitly answered the question, these match the prediction, given the most probable interpretation of their words: Luc-Pronounced_Luke, tinknal. EvanYork comes close but doesn't explicitly address the hypothetical. And these don't: Mageddon725, rethcir_, Va1idation. So my prediction of 'most' is falsified, but the study is very underpowered :-)

I agree that it's unusual. My original claim was that many more people don't accept rationality as a valid or necessary criterion and don't even try to evaluate their beliefs' rationality. They don't see themselves as irrational, but they do see themselves as "not rational". And some of them further see themselves as anti-rational, and rationality as an enemy philosophy or dialectic.
2brazil8410y
Well he might be rational and he might not be, but pretty clearly he perceives himself to be rational. Or at a minimum, he does not perceive himself to be not rational. Agreed? Would you mind providing two or three quotes from Christians which manifest this attitude so I can understand and scrutinize your point? That's true. But I would say that of the 5, there was only one individual who doesn't perceive himself to be rational. Two pretty clearly perceive themselves to be rational. And two are in a greyer area but pretty clearly would come up with rationalizations to justify their beliefs. Which is irrational but they don't seem to perceive it as such. Well, I agree that a lot of people might not have a clear opinion about whether their beliefs are rational. But the bottom line is that when push comes to shove, most people seem to believe that their beliefs are a reasonable evidence-based conclusion. But I am interested to see quotes from these anti-rational Christians you refer to.
7DanArmak10y
After some reflection, and looking for evidence, it seems I was wrong. I felt very certain of what I said, but then I looked for justification and didn't find it. I'm sorry I led this conversation down a false trail. And thank you for questioning my claims and doing empirical tests. (To be sure, I found some evidence, but it doesn't add up to large, numerous, or representative groups of Christians holding these views. Or in fact for these views being associated with Christianity more than other religions or non-religious 'mystical' or 'new age' groups. Above all, it doesn't seem these views have religion as their primary motivation. It's not worth while looking into the examples I found if they're not representative of larger groups.)
2CCC10y
Well, as a Christian myself, allow me to provide a data point for your questions:

On whether the evidence goes more for or more against my belief in Christ (from the grandparent post): More for. Young-earthers fall into a trap; there are parts of the Bible that are not intended to be taken literally (Jesus' parables are a good example). Genesis (at least the garden-of-eden section) is an example of this.

On whether contrary evidence would make me stop being a Christian: It would have to be massively convincing evidence. I'm not sure that sufficient evidence can be found (but see next answer). I've seen stage magicians do some amazing things; the evidence would have to be convincing enough to convince me that it wasn't someone, with all the skills of David Copperfield, intentionally pulling the wool over my eyes in some manner.

On whether I would want to change my opinion to match whatever the evidence turned out to be: In the sense that I want my map to match the territory, yes. In the sense that I do not want the territory to be atheistic, no. I wouldn't mind so much if it turned out that (say) modern Judaism was 100% correct instead; it would be a big adjustment, but I think I could handle that much more easily. But the idea that there's nothing in the place of God, the idea that there isn't, in short, someone running the universe, is one that I find extremely disquieting for some reason. I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment. ...extremely disquieting.
5Viliam_Bur10y
It feels the same to me; I just believe it's true. Let's continue the same metaphor and imagine that many people on the bus decide to pretend that there is an invisible chauffeur and therefore everything is okay. This idea allows them to relax, at least partially (because parts of their minds are aware that the chauffeur should not be invisible, because that doesn't make much sense). And whenever someone on the bus suggests that we should do our best to explore the bus and try getting to the front compartment, these people become angry and insist that such distrust of our good chauffeur is immoral, and getting to the front compartment is illegal. Instead we should just sit quietly and sing a happy song together.
3CCC10y
...I'm not sure this metaphor can take this sort of strain. (Of course, it makes a difference if you can see into the front compartment; I'd assumed an opaque front compartment that couldn't be seen into from the rest of the bus). Personally, I don't have any problem with people trying to, in effect, get into the front compartment. As long as it's done in an ethical way, of course (so, for example, if it involves killing people, then no; but even then, what I'd object to is the killing, not the getting-into-the-front). I do think it makes a lot of sense to try to explore the rest of the bus; the more we find out about the universe, the more effect we can have on it; and the more effect we can have on the universe, the more good we can do. (Also, the more evil we can do; but I'm optimistic enough to believe that humanity is more good than evil, on balance. Despite the actions of a few particularly nasty examples). As I like to phrase it: God gave us brains. Presumably He expected us to use them.
0Viliam_Bur10y
I assumed the front compartment was completely opaque in the past, and parts of it are gradually made transparent by science. Some people, less and less credibly, argue that the chauffeur has a weird body shape and may still be hidden behind the remaining opaque parts. But the smarter ones can already predict where this goes, so they already hypothesise an invisible chauffeur (separate magisteria, etc.). Most people probably believe some mix, like the chauffeur is partially transparent and partially visible, and the transparent and visible parts of the chauffeur's body happen to correspond to the parts they can and cannot see from their seats. Okay, I like your attitude. You probably wouldn't ban teaching evolutionary biology at schools.
1CCC10y
I think this is the point at which the metaphor has become more of an impediment to communication than anything else. I recognise what I think you're referring to; it's the idea of the God of the gaps (in short, the idea that God is responsible for everything that science has yet to explain; which starts leading to questions as soon as science explains something new). As an argument for theism, the idea that God is only responsible for things that haven't yet been otherwise explained is pretty thoroughly flawed to start with. (I can go into quite a bit more detail if you like). No, I most certainly would not. Personally, I think that the entire evolution debate has been hyped up to an incredible degree by a few loud voices, for absolutely no good reason; there's nothing in the theory of evolution that runs contrary to the idea that the universe is created. Evolution just gives us a glimpse at the mechanisms of that creation.
3Sophronius10y
This is precisely how I feel about humanity. I mean, we came within a hair's breadth of annihilating all human life on the planet during the cold war, for pete's sake. Now that didn't come to pass, but if you look at all the atrocities that did happen during the history of humanity... even if you're right and there is a driver, he is most surely drunk behind the wheel. Still, I can sympathise. After all, people also generally prefer to have an actual person piloting their plane, even if the auto-pilot is better (or so I've read). There seems to be some primal desire to want someone to be in charge. Or as the Joker put it: "Nobody panics when things go according to plan. Even if that plan is horrifying."
2CCC10y
Atrocities in general are a point worth considering. They make it clear that, even given the existence of God, there's a lot of agency being given to the human race; it's up to us as a race to not mess up totally, and to face the consequences of the actions of others.
2Bugmaster10y
I find your post very interesting, because I tend to respond almost exactly the same way when someone asks me why I'm an atheist. The one difference is the "extremely disquieting" part; I find it hard to relate to that. From my point of view, reality is what it is; i.e., it's emotionally neutral. Anyway, I find it really curious that we can disagree so completely while employing seemingly identical lines of reasoning. I'm itching to ask you some questions about your position, but I don't want to derail the thread, or to give the impression of getting all up in your business, as it were...
1CCC10y
Reality stops being emotionally neutral when it affects me directly. If I were to wake up to find that my bed has been moved to a hovering platform over a volcano, then I will most assuredly not be emotionally neutral about the discovery (I expect I would experience shock, terror, and lots and lots of confusion). Well, I'd be quite willing to answer them. Maybe you could open up a new thread in Discussion, and link to it from here?
0orbenn10y
I think we're getting some word-confusion. Groups that "make a big point of being anti-rational" are against the things with the label "rational". However, they do tend to think of their own beliefs as being well thought out (i.e. rational).
1DanArmak10y
No, I think we're using words the same way. I disagree with your statement that all or most groups "think of their own beliefs as being well thought out (i.e. rational)". They think of their beliefs as being right, but not well thought out. "Well thought out" should mean:

1. Being arrived at through thought (science, philosophy, discovery, invention), rather than writing the bottom line first and justifying it later or not at all (revelation, mysticism, faith deliberately countering evidence, denial of the existence of objective truth).

2. Thought out to its logical consequences, without being selective about which conclusions you adopt or compartmentalizing them, making sure there are no internal contradictions, and dealing with any repugnant conclusions.
-3Eugine_Nier10y
That's not what most Christians mean by faith.
0DanArmak10y
The comment you link to gives a very interesting description of faith. I like that analysis! And I would add: obligation to your social superiors, and to your actual legal superiors (in a traditional society), is a very strong requirement, and to deny faith is not merely to be rude, but to rebel against the social structure which is inseparable from institutionalized religion.

However, I think this is more of an explanation of how faith operates, not what it feels like or how faithful people describe it. It's a good analysis of the social phenomenon of faith from the outside, but it's not a good description of how it feels from the inside to be faithful. This is because the faith actually required of religious people is faith in the existence of God and other non-evident truths claimed by their religion. As a faithful person, you can't feel faith is "duty, trust, obligation" - you feel that it is belief. You can't feel that to be unfaithful would be to wrong someone or to rebel; you feel that it would be to be wrong about how the world really is.

However, I've now read Wikipedia on Faith in Christianity and I see there are a lot of complex opinions about the meaning of this word. So now I'm less sure of my opinion. I'm still not convinced that most Christians mean "duty, trust, deference" when they say "faith", because WP quotes many who disagree and think it means "belief".
2NoSignalNoNoise10y
That sentence motivated me to overcome the trivial inconvenience of logging in on my phone so I could upvote it.
1elharo10y
a) Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain. Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

b) Why do you expect that a rational person would necessarily value the optimum amount of exercise (presumably optimal for health) over everything else they might have done with their time this week? And again, scientists have even less certainty about the optimum amount or type of exercise than they do about the optimum amount of food we should eat.

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

d) Why do you assume a rational person does not waste time on occasion?

Rationality is not a superpower. It does not magically produce health, wealth, or productivity. It may assist in the achievement of those and other goals, but it is neither necessary nor sufficient.
0NoSignalNoNoise10y
The question was directed at people discussing rationality on the internet. If you can afford some means of internet access, you are almost certainly not living on less than a dollar a day.
2CAE_Jones10y
I receive less in SSI than I'm paying on college debt (no degree), am legally blind, unemployed, and have internet access because these leave me with no choice but to live with my parents (no friends within 100mi). Saving for retirement is way off my radar. (I do have more to say on how I've handled this, but it seems more appropriate for the rationality diaries. I will ETA a link if I make such a comment.)
0brazil8410y
A more rational person might have a better understanding of how his mind works and use that understanding to deploy his limited willpower to maximum effect.
-1Vaniver10y
Even if producing no external output, one can still use time rather than waste it. waveman's post is about the emotional difficulties of being effective, and so to the extent that rationality is about winning, a rational person has mastered those difficulties.
-2brazil8410y
Most likely because getting regular exercise is a pretty good investment of time. Of course some people might rationally choose not to make the investment for whatever reason, but if someone doesn't exercise regularly there is an excellent chance that it's akrasia at work. One can ask if rational people are less likely to fall victim to akrasia. My guess is that they are, since a rational person is likely to have a better understanding of how his brain works. So he is in a better position to come up with ways to act consistently with his better judgment.
0[anonymous]10y
I wasted some time today. Is 3-4 times per week of strength training and 1/2 hour cardio enough exercise? Then I think I get 3/4. Woot, but I actually don't see the point of the exercise, since I don't even aspire to be perfectly rational (especially since I don't know what I would be perfectly rational about).

sake-handling -> snake-handling

Anecdotally, I feel like I treat anyone on LW as someone to take much more seriously because of that, but it's just not different enough for any of the things-perfect-rationalists-should-do to start to apply.

Excellent post. I don't have anything useful to add at the moment, but I am wondering if the second-to-last paragraph:

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that

is just missing a period at the end, or has a fragmented sentence.

By the way, I agree with you that there is a problem with rationalists who are a lot less rational than they realize.

What would be nice is if there were a test for rationality just like one can test for intelligence. It seems that it would be hard to make progress without such a test.

Unfortunately there would seem to be a lot of opportunity for a smart but irrational person to cheat on such a test without even realizing it. For example, if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudl... (read more)

2Viliam_Bur10y
Rationality tests shouldn't be about professing things; not even things correlated with rationality. Intelligence tests also aren't about professing intelligent things (whatever those would be); they are about solving problems. Analogously, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password. If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong. Again, IQ tests are not made by finding the highest-IQ people on the planet and telling them: "Please use your superior intelligence in ways incomprehensible to us mere mortals to design a good IQ test." Both intelligence and rationality are necessary in designing an IQ test or a rationality test, but that's in a similar way that intelligence and rationality are necessary to design a new car. The act of designing requires brainpower; but it's not generally true that tests of X must be designed by people with high X.
0brazil8410y
I agree with this. But I can't think of such a rationality test. I think part of the problem is that a smart but irrational person could use his intelligence to figure out the answers that a rational person would come up with and then choose those answers. On an IQ test, if you are smart enough to figure out the answers that a smart person would choose, then you yourself must be pretty smart. But I don't think the same thing holds for rationality. Well yes, but it's hard to think of how to do it right. What's an example of a question you might put on a rationality test?
0Viliam_Bur10y
I agree that rationality tests will be much more difficult than IQ tests. First, we already have the IQ tests, so if we tried to create a new one, we would already know what to do and what to expect. Second, rationality tests can be inherently more difficult.

Still, I think that if we look at the history of IQ tests, we can take some lessons from there. I mean, imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections. Today we know that the first IQ tests got a few things wrong. And we also know that the "online IQ tests" are nonsense from the psychometrics point of view, but to people without a psychological education they seem right, because their intuitive idea of IQ is "being able to answer difficult questions invented by other intelligent people", when in fact the questions in Raven's progressive matrices are rather simple. Twenty years later we may have analogous knowledge about rationality tests, and some things may seem obvious in hindsight. At this moment, while respecting that intelligence is not the same thing as rationality, IQ tests are the outside-view equivalent I will use for making guesses, because I have no better analogy.

The IQ tests were first developed for small children. The original purpose of the early IQ tests was to tell whether a 6-year-old child is ready to go to elementary school, or whether we should give them another year. They probably weren't even called IQ tests yet, but school readiness tests. Only later was the idea of some people being "smarter/dumber for their age" generalized to all ages. Analogously, we could probably start measuring rationality where it is easiest: in children. I'm not saying it will be easy, just easier than with adults. Many of the small children's logical mistakes will be less politically controversial. And it is easier to reason about the mistakes that you are already not prone to making. Some
0brazil8410y
It's hard to say given that we have the benefit of hindsight, but at least we wouldn't have to deal with what I believe to be the killer objection -- that irrational people would subconsciously cheat if they knew they were being tested.

I agree, but that still doesn't get you any closer to overcoming the problem I described.

To my mind that's not very helpful, because the irrational people I meet have been pretty good at thinking rationally when they choose to. Let me illustrate with a hypothetical: Suppose you meet a person with a fervent belief in X, where X is some ridiculous and irrational claim. Instead of trying to convince them that X is wrong, you offer them a bet, the outcome of which is closely tied to whether X is true or not. Generally they will not take the bet. And in general, when you watch them making high or medium stakes decisions, they seem to know perfectly well -- at some level -- that X is not true.

Of course not all beliefs are capable of being tested in this way, but when they can be tested the phenomenon I described seems pretty much universal. The reasonable inference is that irrational people are generally speaking capable of rational thought. I believe this is known as "standby rationality mode."
0TheOtherDave10y
I agree with you that people who assert crazy beliefs frequently don't behave in the crazy ways those beliefs would entail. This doesn't necessarily mean they're engaging in rational thought. For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn't follow that my behavior is sane... only that it isn't crazy in the specific way indicated by X. There are lots of ways to be crazy. More generally, though... for my own part what I find is that most people's betting/decision making behavior is neither particularly "rational" nor "irrational" in the way I think you're using these words, but merely conventional. That is, I find most people behave the way they've seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional). Sometimes that behavior is sane, sometimes it's crazy, but in neither case does it reflect sanity or insanity as a fundamental attribute. You might find yvain's discussion of epistemic learned helplessness enjoyable and interesting.
0brazil8410y
That may very well be true... I'm not sure what it says about rationality testing. If there is a behavior which is conventional but possibly irrational, it might not be so easy to assess its rationality. And if it's conventional and clearly irrational, how can you tell if a testee engages in it? Probably you cannot trust self-reporting.
0TheOtherDave10y
A lot of words are getting tossed around here whose meanings I'm not confident I understand. Can you say what it is you want to test for, here, without using the word "rational" or its synonyms? Or can you describe two hypothetical individuals, one of whom you'd expect to pass such a test and the other you'd expect to fail?
1brazil8410y
Our hypothetical person believes himself to be very good at not letting his emotions and desires color his judgments. However, his judgments are heavily informed by these things, and then he subconsciously looks for rationalizations to justify them. He is not consciously aware that he does this. Ideally, he should fail the rationality test. Conversely, someone who passes the test is someone who correctly believes that his desires and emotions have very little influence over his judgments. Does that make sense? And by the way, one of the desires of Person #1 is to appear "rational" to himself and others. So it's likely he will subconsciously attempt to cheat on any "rationality test."
1TheOtherDave10y
Yeah, that helps. If I were constructing a test to distinguish person #1 from person #2, I would probably ask for them to judge a series of scenarios that were constructed in such a way that formally, the scenarios were identical, but each one had different particulars that related to common emotions and desires, and each scenario was presented in isolation (e.g., via a computer display) so it's hard to go back and forth and compare. I would expect P2 to give equivalent answers in each scenario, and P1 not to (though they might try).
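A minimal sketch of how that might be scored (the ratings, grouping, and threshold below are all invented for illustration; this is not a validated instrument):

```python
# Minimal sketch: score how consistently someone judges formally
# identical scenarios that differ only in emotionally loaded details.

from statistics import pstdev

# Each key is one underlying (formally identical) decision problem; the list
# holds one subject's ratings (say, 1-7) of that problem under different
# emotionally loaded framings, shown in isolation.
judgments_p1 = {"problem_a": [6, 2, 5], "problem_b": [7, 3, 6]}   # swings with framing
judgments_p2 = {"problem_a": [4, 4, 5], "problem_b": [6, 6, 6]}   # roughly stable

def inconsistency(judgments_by_problem):
    """Average spread of answers across framings of the same problem."""
    spreads = [pstdev(ratings) for ratings in judgments_by_problem.values()]
    return sum(spreads) / len(spreads)

for name, data in [("P1", judgments_p1), ("P2", judgments_p2)]:
    score = inconsistency(data)
    verdict = "fails" if score > 1.0 else "passes"   # arbitrary cutoff
    print(f"{name}: inconsistency {score:.2f} -> {verdict}")
```

The design choice doing the work is that the scenarios are formally equivalent, so any systematic difference in answers has to come from the particulars rather than the structure.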
0brazil8410y
I doubt that would work, since P1 most likely has a pretty good standby rationality mode which can be subconsciously invoked if necessary. But can you give an example of two such formally identical scenarios so I can think about it?
0TheOtherDave10y
It's a fair question, but I don't have a good example to give you, and constructing one would take more effort than I feel like putting into it. So, no, sorry. That said, what you seem to be saying is that P2 is capable of making decisions that aren't influenced by their emotions and desires (via "standby rationality mode") but does not in fact do so except when taking rationality tests, whereas P1 is capable of it and also does so in real life. If I've understood that correctly, then I agree that no rationality test can distinguish P1 and P2's ability to make decisions that aren't influenced by their emotions and desires.
0brazil8410y
That's unfortunate, because this strikes me as a very important issue. Even being able to measure one's own rationality would be very helpful, let alone that of others. I'm not sure I would put it in terms of "making decisions" so much as "making judgments," but basically yes. Also, P1 does make rational judgments in real life, but the level of rationality depends on what is at stake.

Well, one idea is to look more directly at what is going on in the brain with some kind of imaging technique. Perhaps self-deception or result-oriented reasoning has a telltale signature. Also, perhaps this kind of irrationality is more cognitively demanding. To illustrate, suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.
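A minimal sketch of the reaction-time idea (all timings are invented, and a real test would need many trials, counterbalancing, and proper statistics):

```python
# Minimal sketch: compare reaction times to propositions that do vs.
# don't bear on a cherished belief X. Timings are made up for illustration.

from statistics import mean

neutral_rt_ms     = [850, 790, 910, 820, 880]        # obvious propositions unrelated to X
threatening_rt_ms = [1450, 1600, 1380, 1520, 1490]   # obvious propositions that undermine X

def slowdown(neutral, threatening):
    """How much longer, on average, the belief-threatening items take."""
    return mean(threatening) - mean(neutral)

extra = slowdown(neutral_rt_ms, threatening_rt_ms)
print(f"Average extra time on belief-threatening items: {extra:.0f} ms")
if extra > 300:  # arbitrary illustrative cutoff
    print("Consistent with rationalization being more cognitively demanding.")
```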
-2Lumifer10y
ha-ha-only-serious (http://www.catb.org/jargon/html/H/ha-ha-only-serious.html): Rationality is commonly defined as winning. Therefore rationality testing is easy -- just check if the subject is a winner or a loser.
0Viliam_Bur10y
Okay, I think there is a decent probability that you are right, but at this moment we need more data, which we will get by trying to create different kinds of rationality tests. A possible outcome is that we won't get true rationality tests, but at least something partially useful, e.g. tests selecting the people capable of rational thought, which includes a lot of irrational people, but still not everyone. Which may still turn out to be just another form of intelligence test (a sufficiently intelligent irrational person is able to make rational bets, and still believe they have an invisible dragon in the garage).

So... perhaps this is a moment where I should make a bet about my beliefs. Assuming that Stanovich does not give up, and other people follow him (that is, assuming that enough psychologists will even try to create rationality tests), I'd guess... probability 20% within 5 years, 40% within 10 years, 80% ever (pre-Singularity) that there will be a test which predicts rationality significantly better than an IQ test. Not completely reliably, but sufficiently that you would want your employees to be tested with that test instead of an IQ test, even if you had to pay more for it. (Which doesn't mean that employers actually will want to use it. Or will be legally allowed to.) And probability 10% within 10 years, 60% ever that a true "rationality test" will be invented, at least for values up to 130 (which many compartmentalizing people will still pass). These numbers are just a wild guess; tomorrow I would probably give different values. I just thought it would be proper to express my beliefs in this format, because it encourages rationality in general.
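A minimal sketch of how such dated probability estimates could be scored once they resolve (the Brier score is my own choice of scoring rule here, not anything from this thread, and the outcomes are hypothetical):

```python
# Minimal sketch: scoring dated probability estimates once they resolve.
# Brier score = mean squared error between forecast and outcome (0 = perfect).

def brier(forecasts, outcomes):
    """Mean squared difference between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.20, 0.40, 0.80]   # better-than-IQ test within 5y, within 10y, ever
outcomes  = [0, 0, 1]            # hypothetical resolutions, for illustration only

print(f"Brier score: {brier(forecasts, outcomes):.3f}")   # lower is better
```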
0brazil8410y
Yes, I have a feeling that "capability of rationality" would be highly correlated with IQ. Your mention of employees raises another issue, which is who the test would be aimed at. When we first started discussing the issue, I had an (admittedly vague) idea in my head that the test could be for aspiring rationalists, i.e. that it could be used to bust irrational lesswrong posters who are far less rational than they realize. It's arguably more of a challenge to come up with a test to smoke out the self-proclaimed paragon of rationality who has the advantage of careful study and who knows exactly what he is being tested for.

By analogy, consider the Crowne-Marlowe Social Desirability Scale, which has been described as a test which measures "the respondent's desire to exaggerate his own moral excellence and to present a socially desirable facade". Here is a sample question from the test: Probably the test works pretty well for your typical Joe or Jane Sixpack. But someone who is intelligent, who has studied up in this area, and who knows what's being tested will surely conceal his desire to exaggerate his moral excellence.

That said, having thought about it, I do think there is a decent chance that solid rationality tests will be developed. At least for subjects who are unprepared. One possibility is to measure reaction times, as with "Project Implicit." Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it. But you still might run into the problem of subconscious cheating.
2Nornagest10y
If anything, I might expect the opposite to be true in this context. Neurotypical people have fast and frugal conformity heuristics to fall back on, while self-honesty on a lot of questions would probably take some reflection; at least, that's true for questions that require aggregating information or assessing personality characteristics rather than coming up with a single example of something. It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.
0brazil8410y
Well consider the hypothetical I proposed: See what I mean? I do agree that in other contexts, self-deception might require less thought. e.g. spouting off the socially preferable answer to a question without really thinking about what the correct answer is. Yes.
0Viliam_Bur10y
That sample question reminds me of a "lie score", which is a hidden part of some personality tests. Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie. Most people will lie on one or two of ten such questions, but the rule of thumb is that if they lie on five or more, you just throw the questionnaire away and declare them a cheater. -- However, if they didn't lie on any of these questions, you do a background check on whether they have studied psychology. And you keep in mind that the test score may be manipulated.

Okay, I admit that this problem would be much worse for rationality tests, because if you want a person with a given personality, they most likely didn't study psychology. But if CFAR or similar organizations become very popular, then many candidates for highly rational people will be "tainted" by the explicit study of rationality, simply because studying rationality explicitly is probably a rational thing to do (this is just an assumption), but it's also what an irrational person self-identifying as a rationalist would do. Also, practicing for IQ tests is obvious cheating, but practicing to get better at rational tasks is the rational thing to do, and a wannabe rationalist would do it, too.

Well, it seems like the rationality tests would be more similar to IQ tests than to personality tests. Puzzles, time limits... maybe even reaction times or lie detectors.
0PeterDonis10y
On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which according to the test, indicates that I am "more willing than most people to respond to tests truthfully"; but what it indicates to me is that, for all but 6 out of 33 questions, the "nice" answer was a lie, at least for me. The 6 questions were the ones where the answer I gave was, according to the test, the "nice" one, but just happened to be the truth in my case: for example, one of the 6 was "T F I like to gossip at times"; I answered "F", which is the "nice" answer according to the test--presumably on the assumption that most people do like to gossip but don't want to admit it--but I genuinely don't like to gossip at all, and can't stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-) Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.
0brazil8410y
Yes, reaction times seem like an interesting possibility. There is an online test for racism which uses this principle. But it would be pretty easy to beat the test if the results counted for anything. Actually lie detectors can be beaten too. Perhaps brain imaging will eventually advance to the point where you can cheaply and accurately determine if someone is engaged in deception or self-deception :)

"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme

Perhaps a better branding would be "effective decision making", or "effective thought"?

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rational

... (read more)

Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

I couldn't agree more with that - to a first approximation.

Now of course, the first problem is with people who think a person is either rational in general or not, either right in general or not. Being right or rational is conflated ... (read more)

0elharo10y
Rationality, intelligence, and even evidence are not sufficient to resolve all differences. Sometimes differences are a deep matter of values and preferences. Trivially, I may prefer chocolate and you prefer vanilla. There's no rational basis for disagreement, nor for resolving such a dispute. We simply each like what we like. Less trivially, some people take private property as a fundamental moral right. Some people treat private property as theft. And a lot of folks in the middle treat it as a means to an end. Folks in the middle can usefully dispute the facts and logic of whether particular incarnations of private property do or do not serve other ends and values, such as general happiness and well-being. However perfectly rational and intelligent people who have different fundamental values with respect to private property are not going to agree, even when they agree on all arguments and points of evidence. There are many other examples where core values come into play. How and why people develop and have different core values than other people is an interesting question. However even if we can eliminate all partisan-shaded argumentation, we will not eliminate all disagreements.
0brilee10y
"I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course)."

Well said.

For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.

Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.

Jiro (1 point, 10y)
It doesn't have a use outside. I mean, yeah, literally, the words do mean the same thing and you could find someone outside LessWrong who says it, but it's an unnecessarily complicated way to say things that generally is not used. It takes more mental effort to understand, it's outside most people's expectations for everyday speech, and it may as well be jargon, even if technically it isn't. Go ahead: the next time you ask someone on the street for directions and they tell you something you can't understand, reply "I have a poor mental model of how to get to my destination". They will probably look at you like you're insane.
VAuroch (5 points, 10y)
"Outside" doesn't have to include a random guy on the street. Cognitive science as a field is "outside", and uses "mental model". Also, "I have a poor mental model of how to get to my destination" is, descriptively speaking, wrong usage of 'poor mental model'; it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong. I don't "have a poor mental model" of the study of anthropology; I just don't know anything about it or have any motivation to learn. I do "have a poor mental model" of religious believers; my best attempts to place myself in the frame of reference of a believer do not explain their true behavior, so I know that my model is poor.
Jiro (1 point, 10y)
I suggested saying it in response to being given directions you don't understand. If so, then you did indeed attempt to understand and couldn't figure it out.

But there's a gradation. Some phrases are used only by LWers, some by a slightly wider range of people, some by a slightly wider range than that. Whether a phrase is jargon-like isn't a yes/no thing. Using a phrase which is used by cognitive scientists but which would not be understood by the man on the street, when there is another way of saying the same thing that would be understood by the man on the street, is most of the way towards being jargon, even if it technically isn't (since cognitive scientists count as an outside group).

Furthermore, just because cognitive scientists know the phrase doesn't mean they use it in conversation about subjects that are not cognitive science. I suspect that even cognitive scientists, when asking each other for directions, would not reply to incomprehensible directions by saying they have a poor mental model, unless they were making a joke or were a character from The Big Bang Theory (and The Big Bang Theory is funny because most people don't talk like that, and the few who do are considered socially inept).

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

When? The closest case I can recall came from someone defending religion or theology - which brought roughly the response you'd expect - and even that was a weaker claim.

If you mean people saying you should try to slightly adjust your probabilities upon meeting intelligent and somewhat rational disagreement, this seems clearly true. Worst case scenario, you waste some time putting a refutation together (coughWLC).

[anonymous] (0 points, 9y)

I hadn't come across the Principle of Charity elsewhere. Thanks for your insights.

I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

So, respond with something like "I don't think sanity is a single personal variable which extends to all held beliefs." It conveys the same information ("I don't trust conclusions solely because you reached them") but doesn't convey the implication that this is a personal failing...

I am surprised that this post has so little karma. Since one of the... let's call them "tenets" of the rationalist community is the drive to improve oneself, I would have imagined that this kind of criticism would be welcomed.

Can anyone explain this to me, please? :-/

I'm not sure what number you were seeing when you wrote this. For my own part, I didn't upvote the post because I found it lacked enough focus to retain my interest. But now I'm curious: how much karma would you expect a welcomed post to have received between the "08:52AM" and "01:29:45PM" timestamps?

devas (6 points, 10y)
I actually hadn't considered the time; in retrospect, though, it does make a lot of sense. Thank you! :-)

Just curious, how does Plantinga's argument prove that pigs fly? I only know how it proves that the perfect cheeseburger exists...

ChrisHallquist (7 points, 10y)
Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths are, if true, necessarily true; therefore the Goldbach Conjecture is true." Or even, "Possibly it's a necessary truth that pigs fly; therefore pigs fly." (This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)
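Spelled out, the schema looks roughly like this (my own sketch of the standard S5 argument form, with \Diamond read as "possibly" and \Box as "necessarily"; the glosses are mine, not Plantinga's wording):

\begin{align*}
&\text{1. } \Diamond\Box P               &&\text{premise: possibly, $P$ is necessarily true}\\
&\text{2. } \Diamond\Box P \to \Box P    &&\text{S5: what is possibly necessary is necessary}\\
&\text{3. } \Box P \to P                 &&\text{T: what is necessary is true}\\
&\text{4. } \Box P                       &&\text{from 1 and 2}\\
&\text{5. } P                            &&\text{from 3 and 4}
\end{align*}

Substituting "God exists," "the Goldbach Conjecture is true," or "pigs fly" for P yields the corresponding conclusion; all the work is done by granting premise 1.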
cousin_it (1 point, 10y)
That's nice, thanks!
Alejandro1 (5 points, 10y)
Copying the description of the argument from the Stanford Encyclopedia of Philosophy, with just one bolded replacement of a definition irrelevant to the formal validity of the argument:
TheOtherDave (6 points, 10y)
This argument proves that at least one pig can fly. I understand "pigs fly" to mean something more like "for all X, if X is a typical pig, X can fly."
Alejandro1 (7 points, 10y)
You are right. Perhaps the argument could be modified by replacing "is a flying pig" by "is a typical pig in all respects, and flies"?
TheOtherDave (4 points, 10y)
Perhaps. It's not clear to me that this is irrelevant to the formal validity of the argument, since "is a typical pig in all respects, and flies" seems to be a contradiction, and replacing a term in an argument with a contradiction isn't necessarily truth-preserving. But perhaps it is, I don't know... common sense would reject it, but we're clearly not operating in the realms of common sense here.

If you persistently misinterpret people as saying stupid things, then your evidence that people say a lot of stupid things is false evidence; you're in a sort of echo chamber. The Principle of Charity (PoC) is correct because an actually stupid comment is one that can't be interpreted as smart no matter how hard you try.

The fact that some people misapply the PoC is not the PoC's fault.

The PoC is not in any way a guideline about what is worth spending time on. It is only about efficient communication, in the sense of interpreting people correctly. If you haven't got time to...