Self-Congratulatory Rationalism

Post author: ChrisHallquist 01 March 2014 08:52AM

Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.

What Disagreement Signifies

Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.

This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement to mean that "the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue that disagreements aren't honest in this sense.

I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown in the fact that they don't like being told they're wrong—which is the reason why Dale Carnegie says you can't win an argument.

On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post or comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.

And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.

The situation is similar to Moore's paradox in philosophy—the impossibility of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

We can find some room for humility in an analog of the preface paradox, the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
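To make the preface-paradox arithmetic concrete (a toy calculation, assuming for simplicity that the claims are independent): an author who is 99% confident in each of 100 claims should still expect, with probability around 63%, that the book contains at least one error.

```python
def prob_some_error(confidence, n_claims):
    # Chance that at least one of n independent claims is wrong,
    # given the stated confidence in each claim individually
    return 1 - confidence ** n_claims

# 99% confidence in each of 100 claims still leaves ~63% odds of an error
assert round(prob_some_error(0.99, 100), 2) == 0.63
```

The same calculation is why "each belief of mine is rational" and "some of my beliefs are probably irrational" can be held together without contradiction.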

I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.

The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.
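As a toy illustration of opinions summarizing evidence (my own construction, not an example from the paper, and far stronger in its common-knowledge assumptions than Aumann's actual theorem): if two people share a known prior, and one's sample size is common knowledge, the other can work backwards from her stated opinion to the evidence behind it, so trading opinions can substitute for trading data.

```python
from fractions import Fraction

def posterior_mean(heads, flips):
    # Posterior mean for a coin's bias under a uniform Beta(1,1) prior
    return Fraction(1 + heads, 2 + flips)

def recover_heads(mean, flips):
    # Invert the posterior mean: the stated opinion encodes the evidence
    return mean * (2 + flips) - 1

# Alice privately sees 7 heads in 10 flips and reports only her opinion
alice_opinion = posterior_mean(7, 10)

# Bob, knowing her prior and sample size, recovers her raw data exactly
assert recover_heads(alice_opinion, 10) == 7

# So Bob can pool it with his own 4 heads in 6 flips as if he had
# watched all the flips himself
combined = posterior_mean(7 + 4, 10 + 6)
```

In real life, of course, we rarely know another person's prior, sample size, or rationality, which is exactly why "what's the evidence for that?" remains a reasonable question.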

Intelligence and Rationality

Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.

When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn't a good thing; still, on re-reading it I noticed what seems like an important oversight:

A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.

A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.

The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.

Robin Hanson has argued that this leads to biases in academia:

I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.

Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the problem is that in philosophy, philosophers never agree on whether a philosopher has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.

Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general-purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.
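For the curious, the bare modal skeleton of the argument (my gloss, which omits the "maximal greatness" machinery that makes the full version subtle) looks something like this:

```latex
% G abbreviates "a maximally great being exists"
\begin{align*}
&\text{P1 (premise):}    && \Diamond\Box G \\
&\text{P2 (S5 theorem):} && \Diamond\Box p \rightarrow \Box p \\
&\text{C:}               && \Box G\text{, and hence } G
\end{align*}
```

The parodies work because nothing in the S5 step depends on what G actually says: substitute "pigs necessarily fly" for G, grant the corresponding possibility premise, and the same inference delivers the conclusion.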

The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

The Principle of Charity

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charity is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments, the whole reason we demand other people tell us their reasons for their opinions in the first place is we fear their reasons might be bad ones.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

Beware Weirdness for Weirdness' Sake

There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't find by whom—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.

Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.

This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.

The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.

A More Humble Rationalism?

I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.

Comments (395)

Comment author: Clarity 19 August 2015 11:26:39AM 0 points [-]

I hadn't come across the Principle of Charity elsewhere. Thanks for your insights.

Comment author: CCC 18 August 2015 08:14:59AM 3 points [-]

Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."

Not necessarily. It might be saying "You have an interesting viewpoint; let me see what basis it has, so that I may properly integrate this evidence into my worldview and correctly update my viewpoint"

Comment author: TheAncientGeek 24 April 2014 09:06:35AM *  0 points [-]

If you persistently misinterpret people as saying stupid things, then your evidence that people say a lot of stupid things is false evidence; you're in a sort of echo chamber. The PoC is correct because an actually stupid comment is a comment that can't be interpreted as smart no matter how hard you try.

The fact that some people misapply the PoC is not the PoC's fault.

The PoC is not in any way a guideline about what is worth spending time on. It is only about efficient communication, in the sense of interpreting people correctly. If you haven't got time to charitably interpret someone, you should default to some average or noncommittal appraisal of their intelligence, rather than accumulate false data that they are stupid.

Comment author: private_messaging 24 April 2014 07:37:32AM *  3 points [-]

Particularly problematic is this self congratulatory process:

some simple mistake leads to non mainstream conclusion -> the world is insane and I'm so much more rational than everyone else -> endorphins released -> circuitry involved in mistake-making gets reinforced.

For example: IQ is the best predictor of job performance, right? So the world is insane to mostly hire based on experience, test questions, and so on (depending on the field) rather than IQ, right? Cue the endorphins and reinforcement of careless thinking.

If you're not after endorphins, though: IQ is a good predictor of performance within the population of people who got hired traditionally, which is a very different population than the job applicants.
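A quick simulation makes the range-restriction point vivid (an illustrative model with made-up numbers, not real hiring data): select people on a performance-correlated signal, and the IQ-performance correlation among those selected shrinks relative to the full applicant pool.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Applicant pool: job performance depends on IQ plus much else
applicants = []
for _ in range(10_000):
    iq = random.gauss(100, 15)
    performance = 0.5 * iq + random.gauss(0, 10)
    signal = performance + random.gauss(0, 5)  # interviews, experience, etc.
    applicants.append((iq, performance, signal))

full = corr([a[0] for a in applicants], [a[1] for a in applicants])

# Hire the top fifth on the performance-correlated signal
cutoff = sorted(a[2] for a in applicants)[8_000]
hired = [a for a in applicants if a[2] >= cutoff]
restricted = corr([a[0] for a in hired], [a[1] for a in hired])

assert restricted < full  # range restriction attenuates the correlation
```

The numbers (0.5 weighting, noise scales, top-fifth cutoff) are arbitrary; the attenuation itself is robust to reasonable changes in them.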

Comment author: TheAncientGeek 24 April 2014 08:50:53AM 3 points [-]

These things can be hard to budge ....they certainly look it ... perhaps because the "I'm special" delusion and the "world is crazy" delusion need to fall at the same time.

Comment author: private_messaging 24 April 2014 11:16:52AM *  1 point [-]

Plus in many cases all that had been getting strengthened via reinforcement learning for decades.

It's also ridiculous how easy it is to be special in that imaginary world. Say I want to hire candidates really well - better than competition. I need to figure out the right mix of interview questions and prior experience and so on. I probably need to make my own tests. It's hard! It's harder still if I want to know if my methods work!

But in that crazy world, there's a test readily available, widely known, and widely used, and nobody's using it for that, because they're so irrational. And you can know you're special just by going "yeah, it sounds about right". Like coming across 2x+2y=? and speculating about the stupid reasons why someone would be unable to apply 2+2=4 and 2*2=4 and conclude it's 4xy.

Comment author: RichardKennaway 14 April 2014 08:36:49AM *  7 points [-]

As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary.

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

You are on the programme committee of a forthcoming conference, which is meeting to decide which of the submitted papers to accept. Each paper has been refereed by several people, each of whom has given a summary opinion (definite accept, weak accept, weak reject, or definite reject) and supporting evidence for the opinion.

To transact business most efficiently, some papers are judged solely on the summary opinions. Every paper rated a definite accept by every referee for that paper is accepted without further discussion, because if three independent experts all think it's excellent, it probably is, and further discussion is unlikely to change that decision. Similarly, every paper firmly rejected by every referee is rejected. For papers that get a uniformly mediocre rating, the committee have to make some judgement about where to draw the line between filling out the programme and maintaining a high standard.

That leaves a fourth class: papers where the referees disagree sharply. Here is a paper where three referees say definitely accept, one says definitely reject. On another paper, it's the reverse. Another, two each way.

How should the committee decide on these papers? By combining the opinions only, or by reading the supporting evidence?
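The triage procedure described above can be sketched as follows (a hypothetical encoding for illustration, not any real committee's software):

```python
def triage(ratings):
    """Route a paper given referee ratings on the four-point scale:
    'definite accept', 'weak accept', 'weak reject', 'definite reject'."""
    if all(r == "definite accept" for r in ratings):
        return "accept"      # unanimous strong approval: no discussion
    if all(r == "definite reject" for r in ratings):
        return "reject"      # unanimous strong rejection: no discussion
    if all(r in ("weak accept", "weak reject") for r in ratings):
        return "borderline"  # uniformly mediocre: judged against the bar
    return "discuss"         # disagreement: read the supporting evidence

assert triage(["definite accept"] * 3) == "accept"
assert triage(["definite accept"] * 3 + ["definite reject"]) == "discuss"
```

Note that the fourth branch is exactly where opinions alone give no verdict: the committee has to open up the reasons behind the ratings.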

ETA: [1] By which I mean not "so crazy it must be wrong" but "so wrong it's crazy".

Comment author: gwern 25 April 2014 06:29:49PM *  -1 points [-]

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.

Verbal abuse is not a productive response to the results of an abstract model. Extended imaginary scenarios are not a productive response either. Neither explains why the proofs are wrong or inapplicable, or if inapplicable, why they do not serve useful intellectual purposes such as proving some other claim by contradiction or serving as an ideal to aspire to. Please try to do better.

Comment author: RichardKennaway 25 April 2014 08:01:00PM 1 point [-]

This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something).

That was excessive, and I now regret having said it.

Comment author: RichardKennaway 25 April 2014 06:49:22PM 3 points [-]

Extended imaginary scenarios are not a productive response either.

As I said, the scenario is not imaginary.

Please try to do better.

I might have done so, had you not inserted that condescending parting shot.

Comment author: ChristianKl 25 April 2014 11:59:23PM -1 points [-]

As I said, the scenario is not imaginary.

Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction.

Thinking that Robin Hanson or someone else on Overcoming Bias hasn't thought of that argument is naive. Robin Hanson might sometimes make arguments that are wrong, but he's not stupid. If you treat him as if he were, then you are likely arguing against a strawman.

Apart from that, your example also has strange properties, like only four different kinds of judgements that reviewers are allowed to make. Why would anyone choose four?

Comment author: RichardKennaway 30 April 2014 08:11:08AM *  2 points [-]

Your real world scenario tells you that sometimes sharing evidence will move judgements in the right direction.

It is a lot more than "sometimes". In my experience (mainly in computing) no journal editor or conference chair will accept a referee's report that provides nothing but an overall rating of the paper. The rubric for the referees often explicitly states that. Where ratings of the same paper differ substantially among referees, the reasons for those differing judgements are examined.

Apart from that your example also has strange properties like only four different kind of judgements that reviewers are allowed to make. Why would anyone choose four?

The routine varies but that one is typical. A four-point scale (sometimes with a fifth not on the same dimension: "not relevant to this conference", which trumps the scalar rating). Sometimes they ask for different aspects to be rated separately (originality, significance, presentation, etc.). Plus, of course, the rationale for the verdict, without which the verdict will not be considered and someone else will be found to referee the paper properly.

Anyone is of course welcome to argue that they're all doing it wrong, or to found a journal where publication is decided by simple voting rounds without discussion. However, Aumann's theorem is not that argument, it's not the optimal version of Delphi (according to the paper that gwern quoted), and I'm not aware of any such journal. Maybe Plos ONE? I'm not familiar with their process, but their criteria for inclusion are non-standard.

Comment author: ChristianKl 30 April 2014 01:45:30PM -2 points [-]

It is a lot more than "sometimes". In my experience (mainly in computing) no journal editor or conference chair will accept a referee's report that provides nothing but an overall rating of the paper.

That just tells us that the journals believe that the rating isn't the only thing that matters. But most journals just do things that make sense to them. They don't draft their policies based on findings of decision science.

Comment author: RichardKennaway 30 April 2014 01:57:06PM 2 points [-]

But most journals just do things that make sense to them. The don't draft their policies based on findings of decision science.

Those findings being? Aumann's theorem doesn't go the distance. Anyway, I have no knowledge of how they draft their policies, merely some of what those policies are. Do you have some information to share here?

Comment author: ChristianKl 30 April 2014 04:22:53PM -2 points [-]

For example, that Likert scales are nice if you want someone to give you their opinion.

Of course, it might make sense to actually run experiments. Big publishers rule over thousands of journals, so it should be easy for them to do the necessary research if they wanted to.

Comment author: ChristianKl 25 April 2014 04:16:44PM 0 points [-]

I think the most straightforward way is to do a second round. Let every referee read the opinions of the other referees and see whether they converge onto a shared judgement.

If you want a more formal name, it's the Delphi method.

Comment author: RichardKennaway 25 April 2014 04:39:00PM 3 points [-]

What actually happens is that the reasons for the summary judgements are examined.

Three for, one against. Is the dissenter the only one who has not understood the paper, or the only one who knows that although the work is good, almost the same paper has just been accepted to another conference? The set of summary judgements is the same but the right final judgement is different. Therefore there is no way to get the latter from the former.

Aumann agreement requires common knowledge of each others' priors. When does this ever obtain? I believe Robin Hanson's argument about pre-priors just stands the turtle on top of another turtle.

Comment author: gwern 25 April 2014 06:28:02PM *  -2 points [-]

What actually happens is that the reasons for the summary judgements are examined.

Really? My understanding was that

Between each iteration of the questionnaire, the facilitator or monitor team (i.e., the person or persons administering the procedure) informs group members of the opinions of their anonymous colleagues. Often this “feedback” is presented as a simple statistical summary of the group response, usually a mean or median value, such as the average group estimate of the date before which an event will occur. As such, the feedback comprises the opinions and judgments of all group members and not just the most vocal. At the end of the polling of participants (after several rounds of questionnaire iteration), the facilitator takes the group judgment as the statistical average (mean or median) of the panelists’ estimates on the final round.

(From Rowe & Wright's "Expert opinions in forecasting: the role of the Delphi technique", in the usual Armstrong anthology.) From the sound of it, the feedback is often purely statistical in nature, and if it wasn't commonly such restricted feedback, it's hard to see why Rowe & Wright would criticize Delphi studies for this:

The use of feedback in the Delphi procedure is an important feature of the technique. However, research that has compared Delphi groups to control groups in which no feedback is given to panelists (i.e., non-interacting individuals are simply asked to re-estimate their judgments or forecasts on successive rounds prior to the aggregation of their estimates) suggests that feedback is either superfluous or, worse, that it may harm judgmental performance relative to the control groups (Boje and Murnighan 1982; Parenté, et al. 1984). The feedback used in empirical studies, however, has tended to be simplistic, generally comprising means or medians alone with no arguments from panelists whose estimates fall outside the quartile ranges (the latter being recommended by the classical definition of Delphi, e.g., Rowe et al. 1991). Although Boje and Murnighan (1982) supplied some written arguments as feedback, the nature of the panelists and the experimental task probably interacted to create a difficult experimental situation in which no feedback format would have been effective.

Comment author: RichardKennaway 25 April 2014 06:50:58PM 3 points [-]

What actually happens is that the reasons for the summary judgements are examined.

Really? My understanding was that

I was referring to what actually happens in a programme committee meeting, not the Delphi method.

Comment author: gwern 25 April 2014 06:57:06PM *  0 points [-]

I was referring to what actually happens in a programme committee meeting, not the Delphi method.

Fine. Then consider it an example of 'loony' behavior in the real world: Delphi pools, as a matter of fact, for many decades, have operated by exchanging probabilities and updating repeatedly, and in a number of cases performed well (justifying their continued usage). You don't like Delphi pools? That's cool too, I'll just switch my example to prediction markets.

Comment author: RichardKennaway 25 April 2014 07:02:16PM *  3 points [-]

It would be interesting to conduct an experiment to compare the two methods for this problem. However, it is not clear how to obtain a ground truth with which to judge the correctness of the results. BTW, my further elaboration, with the example of one referee knowing that the paper under discussion was already published, was also non-fictional. It is not clear to me how any decision method that does not allow for sharing of evidence can yield the right answer for this example.

What have Delphi methods been found to perform well relative to, and for what sorts of problems?

Comment author: ChristianKl 25 April 2014 07:55:06PM *  -1 points [-]

However, it is not clear how to obtain a ground truth with which to judge the correctness of the results.

That assumes we don't have any criteria on which to judge good versus bad scientific papers.

You could train your model to predict the number of citations that a paper will get. You can also look at variables such as reproduced papers or withdrawn papers.

Define a utility function that collapses such variables into a single one. Run a real world experiment in a journal and do 50% of the paper submissions with one mechanism and 50% with the other. Let a few years go by and then you evaluate the techniques based on your utility function.
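A minimal sketch of this proposal, assuming a made-up utility function and hypothetical paper records (every weight and field name here is an illustrative assumption, not an established metric):

```python
import random

def utility(paper):
    """Toy utility collapsing several quality signals into one number.
    The weights are arbitrary assumptions for illustration."""
    return (paper["citations"]
            + 20 * paper["reproduced"]
            - 100 * paper["withdrawn"])

def run_experiment(submissions, mechanism_a, mechanism_b):
    """Route ~50% of submissions to each refereeing mechanism, then
    (years later) score each mechanism's accepted papers."""
    scores = {"A": [], "B": []}
    for paper in submissions:
        if random.random() < 0.5:
            arm, accept = "A", mechanism_a
        else:
            arm, accept = "B", mechanism_b
        if accept(paper):
            scores[arm].append(utility(paper))
    # Average utility of accepted papers per mechanism (0.0 if none accepted).
    return {arm: sum(s) / len(s) if s else 0.0 for arm, s in scores.items()}
```

The hard part, as the reply below this comment notes, is not the scoring code but waiting years for the outcome variables and controlling for confounds.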

Comment author: RichardKennaway 30 April 2014 08:14:54AM *  1 point [-]

You could train your model to predict the amount of citations that a paper will get. You can also look at variables such as reproduced papers or withdrawn papers.

Define a utility function that collapses such variables into a single one. Run a real world experiment in a journal and do 50% of the paper submissions with one mechanism and 50% with the other. Let a few years go by and then you evaluate the techniques based on your utility function.

Something along those lines might be done, but an interventional experiment (creating journals just to test a hypothesis about refereeing) would be impractical. That leaves observational data-collecting, where one might compare the differing practices of existing journals. But the confounding problems would be substantial.

Or, more promisingly, you could do an experiment with papers that are already published and have a citation record, and have experimental groups of referees assess them, and test different methods of resolving disagreements. That might actually be worth doing, although it has the flaw that it would only be assessing accepted papers and not the full range of submissions.

Comment author: ChristianKl 30 April 2014 09:06:37AM 0 points [-]

Then there's no reason why you can't test different procedures in an existing journal.

Comment author: TheAncientGeek 25 April 2014 04:49:11PM *  2 points [-]

People don't coincide in their priors, don't have access to the same evidence, aren't running off the same epistemology, and can't settle epistemological debates non-circularly......

There's a lot wrong with Aumann, or at least with the way some people use it.

Comment author: moridinamael 08 March 2014 06:42:42PM 9 points [-]

I've recently had to go on (for a few months) some medication which had the side effect of significant cognitive impairment. Let's hand-wavingly equate this side effect to shaving thirty points off my IQ. That's what it felt like from the inside.

While on the medication, I constantly felt the need to idiot-proof my own life, to protect myself from the mistakes that my future self would certainly make. My ability to just trust myself to make good decisions in the future was removed.

This had far more ramifications than I can go into in a brief comment, but I can generalize by saying that I was forced to plan more carefully, to slow down, to double-check my work. Unable to think as deeply into problems in a freewheeling cognitive fashion, I was forced to break them down carefully on paper and understand that anything I didn't write down would be forgotten.

Basically what I'm trying to say is that being stupider probably forced me to be more rational.

When I went off the medication, I felt my old self waking up again, the size of concepts I could manipulate growing until I could once again comprehend and work on programs I had written before starting the drugs in the first place. I could follow long chains of verbal argument and concoct my own. And ... I pretty much immediately went back to my old problem solving habits of relying on big leaps in insight. Which I don't really blame myself for, because that's sort of what brains are for.

I don't know what the balance is. I don't know how and when to rein in the self-defeating aspects of intelligence. I probably made fewer mistakes when I was dumber, but I also did fewer things, period.

Comment author: John_Maxwell_IV 15 March 2014 06:50:28AM 4 points [-]

What medication?

Comment author: Sophronius 06 March 2014 08:50:03PM *  4 points [-]

Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"

I'll go ahead and disagree with this. Sure, there's a lot of smart people who aren't rational, but then I would say that rationality is less common than intelligence. On the other hand, all the rational people I've met are very smart. So it seems really high intelligence is a necessary but not a sufficient condition. Or as Draco Malfoy would put it: "Not all Slytherins are Dark Wizards, but all Dark Wizards are from Slytherin."

I largely agree with the rest of your post Chris (upvoted), though I'm not convinced that the self-congratulatory part is Less Wrong's biggest problem. Really, it seems to me that a lot of people on Less Wrong just don't get rationality. They go through all the motions and use all of the jargon, but don't actually pay attention to the evidence. I frequently find myself wanting to yell "stop coming up with clever arguments and pay attention to reality!" at the screen. A large part of me worries that rationality really can't be taught; that if you can't figure out the stuff on Less Wrong by yourself, there's no point in reading about it. Or, maybe there's a selection effect and people who post more comments tend to be less rational than those who lurk?

Comment author: RichardKennaway 04 July 2014 09:56:51PM 5 points [-]

A large part of me worries that rationality really can't be taught; that if you can't figure out the stuff on Less Wrong by yourself, there's no point in reading about it.

The teaching calls to what is within the pupil. To borrow a thought from Georg Christoph Lichtenberg, if an ass looks into LessWrong, it will not see a sage looking back.

I have a number of books of mathematics on my shelves. In principle, I could work out what is in them, but in practice, to do so I would have to be of the calibre of a multiple Fields and Nobel medallist, and exercise that ability for multiple lifetimes. Yet I can profitably read them, understand them, and use that knowledge; but that does still require at least a certain level of ability and previous learning.

Or to put that another way, learning is in P, figuring out by yourself is in NP.

Comment author: Sophronius 04 July 2014 10:35:19PM *  3 points [-]

Agreed. I'm currently under the impression that most people cannot become rationalists even with training, but training those who do have the potential increases the chance that they will succeed. Still I think rationality cannot be taught like you might teach a university degree: A large part of it is inspiration, curiosity, hard work and wanting to become stronger. And it has to click. Just sitting in the classroom and listening to the lecturer is not enough.

Actually now that I think about it, just sitting in the classroom and listening to the lecturer for my economics degree wasn't nearly enough to gain a proper understanding either, yet that's all that most people did (aside from a cursory reading of the books of course). So maybe the problem is not limited to rationality but more about becoming really proficient at something in general.

Comment author: dthunt 04 July 2014 03:45:40PM 2 points [-]

Reading something and understanding/implementing it are not quite the same thing. It takes clock time and real effort to change your behavior.

I do not think it is unexpected that a large portion of the population on a site dedicated to writing, teaching, and discussing the skills of rationality is going to be, you know, still very early in the learning, and that some people will have failed to grasp a lesson they think they have grasped, and that others will think others have failed to grasp a lesson that they have failed to grasp, and that you will have people who just like to watch stuff burn.

I'm sure it's been asked elsewhere, and I liked the estimation questions on the 2013 survey; has there been a more concerted effort to see what being an experienced LWer translates to, in terms of performance on various tasks that, in theory, people using this site are trying to get better at?

Comment author: Sophronius 04 July 2014 06:05:26PM *  1 point [-]

Yes, you hit the nail on the head. Rationality takes hard work and lots of practice, and too often people on Less Wrong just spend time making clever arguments instead of doing the actual work of asking what the actual answer is to the actual question. It makes me wonder whether Less Wrongers care more about being seen as clever than they care about being rational.

As far as I know there's been no attempt to make a rationality/Bayesian reasoning test, which I think is a great pity because I definitely think that something like that could help with the above problem.

Comment author: dthunt 04 July 2014 06:41:11PM *  0 points [-]

There are many calibration tests you can take (there are many articles on this site with links to see if you are over- or under-confident on various subjects; search for calibration).

What I don't know is if there has been some effort to do this across many questions, and compile the results anonymously for LWers.

I caution against jumping quickly to conclusions about "signalling". Frankly, I suspect you are wrong, and that most of the people here are in fact trying. Some might not be, and are merely looking for sparring matches. Those people are still learning things (albeit perhaps with less efficiency).

As far as "seeming clever", perhaps as a community it makes sense to advocate people take reasoning tests which do not strongly correlate with IQ, and that people generally do quite poorly on (I'm sure someone has a list, though it may be a relatively short list of tasks), which might have the effect of helping people to see stupid as part of the human condition, and not merely a feature of "non-high-IQ" humans.
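If someone did compile calibration results across LWers, the standard way to score each answer against its stated confidence is the Brier score (lower is better). A minimal sketch:

```python
def brier_score(predictions):
    """Mean squared error between stated confidence and outcome.
    predictions: list of (probability_assigned, actually_true) pairs.
    0.0 is perfect; always answering 50% earns exactly 0.25."""
    total = 0.0
    for p, was_true in predictions:
        outcome = 1.0 if was_true else 0.0
        total += (p - outcome) ** 2
    return total / len(predictions)

# A user who says "90%" and is right, then "70%" and is wrong:
score = brier_score([(0.9, True), (0.7, False)])  # (0.01 + 0.49) / 2 = 0.25
```

Averaging such scores over many questions, and comparing them against time-on-site, would be one way to measure the thing dthunt is asking about.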

Comment author: Sophronius 04 July 2014 06:52:14PM *  0 points [-]

Fair enough, that was a bit too cynical/negative. I agree that people here are trying to be rational, but you have to remember that signalling does not need to be on purpose. I definitely detect a strong impulse amongst the less wrong crowd to veer towards controversial and absurd topics rather than the practical and to make use of meta level thinking and complex abstract arguments instead of simple and solid reasoning. It may not feel that way from the inside, but from the outside point of view it does kind of look like Less Wrong is optimizing for being clever and controversial rather than rational.

I definitely say yes to (bayesian) reasoning tests. Someone who is not me needs to go do this right now.

Comment author: dthunt 04 July 2014 07:06:47PM *  1 point [-]

I don't know that there is anything to do, or that should be done, about that outside-view problem. Understanding why people think you're being elitist or crazy doesn't necessarily help you avoid the label.

http://lesswrong.com/lw/kg/expecting_short_inferential_distances/

Comment author: Sophronius 04 July 2014 07:29:28PM *  1 point [-]

Huh? If the outside view tells you that there's something wrong, then the problem is not with the outside view but with the thing itself. It has nothing to do with labels or inferential distance. The outside-view is a rationalist technique used for viewing a matter you're personally involved in objectively by taking a step back. I'm saying that when you take a step back and look at things objectively, it looks like Less Wrong spends more time and effort on being clever than on being rational.

But now that you've brought it up, I'd also like to add that the habit on Less Wrong to assume that any criticism or disagreement must be because of inferential distance (really just a euphemism for saying the other guy is clueless) is an extremely bad one.

Comment author: dthunt 04 July 2014 07:46:21PM 0 points [-]

My apologies, I thought you were referring to how people who do not use this site perceive people using the site, which seemed more likely to be what you were trying to communicate than the alternative.

Yes, the site viewed as a machine does not look like a well-designed rational-people-factory to me, either, unless I've missed the part where it's comparing its output to its input to see how it is performing. People do, however, note cognitive biases and what efforts to work against them have produced, from time to time, and there are other signs that seem consistent with a well-intentioned rational-people-factory.

And, no, not every criticism does. I can only speak for myself, and acknowledge that I have a number of times in the past failed to understand what someone was saying and assumed they were being dumb or somewhat crazy as a result. I sincerely doubt that's a unique experience.

Comment author: dthunt 04 July 2014 08:04:48PM 0 points [-]

http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/, and other articles, I now read, because they are pertinent, and I want to know what sorts of work have been done to figure out how LW is perceived and why.

Comment author: Nornagest 04 July 2014 07:45:50PM *  1 point [-]

The outside view isn't magic. Finding the right reference class to step back into, in particular, can be tricky, and the experiments the technique is drawn from deal almost exclusively with time forecasting; it's hard to say how well it generalizes outside that domain.

Don't take this as quoting scripture, but this has been discussed before, in some detail.

Comment author: Sophronius 04 July 2014 08:34:20PM *  7 points [-]

Okay, you're doing precisely the thing I hate and which I am criticizing about Less Wrong. Allow me to illustrate:

LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn't rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That's a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well unless I actually made a mistake in applying the outside view I don't see why that's relevant? And if I did make a mistake in applying it it would be more helpful to say what it was I specifically did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yea but that post has theoretical limitations.
LW1: I don't care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were actually talking about.
LW6: I agree, people here use LW jargon as a form of applause light!
LW1: Uh...
LW7: You know, accusing others of using applause lights is a fully generalized counter argument!
LW6: Oh yea? Well fully generalized counter arguments are fully generalized counter arguments themselves, so there!

We're only at LW3 right now so maybe this conversation can still be saved from becoming typical Less Wrong-style meta screwery. Or to make my point more politely: Please tell me whether or not you think Less Wrong is rational and whether or not something should be done, because that's the thing we're actually talking about.

Comment author: dthunt 04 July 2014 09:47:09PM 0 points [-]

My guess is that the site is "probably helping people who are trying to improve", because I would expect some of the materials here to help. I have certainly found a number of materials useful.

But a personal judgement of "probably helping" isn't the kind of thing you'd want. It'd be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.

Comment author: TheAncientGeek 04 July 2014 08:45:38PM -1 points [-]

LW8...rationality is more than one thing

Comment author: Nornagest 04 July 2014 08:39:19PM *  1 point [-]

Dude, my post was precisely about how you're making a mistake in applying the outside view. Was I being too vague, too referential? Okay, here's the long version, stripped of jargon because I'm cool like that.

The point of the planning fallacy experiments is that we're bad at estimating the time we're going to spend on stuff, mainly because we tend to ignore time sinks that aren't explicitly part of our model. My boss asks me how long I'm going to spend on a task: I can either look at all the subtasks involved and add up the time they'll take (the inside view), or I can look at similar tasks I've done in the past and report how long they took me (the outside view). The latter is going to be larger, and it's usually going to be more accurate.

That's a pretty powerful practical rationality technique, but its domain is limited. We have no idea how far it generalizes, because no one (as far as I know) has rigorously tried to generalize it to things that don't have to do with time estimation. Using the outside view in its LW-jargon sense, to describe any old thing, therefore is almost completely meaningless; it's equivalent to saying "this looks to me like a $SCENARIO1". As long as there also exists a $SCENARIO2, invoking the outside view gives us no way to distinguish between them. Underfitting is a problem. Overfitting is also a problem. Which one's going to be more of a problem in a particular reference class? There are ways of figuring that out, like Yvain's centrality heuristic, but crying "outside view" is not one of them.
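The contrast between the two procedures can be made concrete with toy numbers (the figures are invented; the point is only that the two views consult different data):

```python
def inside_view(subtask_estimates):
    """Add up how long each piece of the plan 'should' take."""
    return sum(subtask_estimates)

def outside_view(past_actual_durations):
    """Ignore the plan entirely; report the median duration of
    similar tasks actually completed in the past."""
    s = sorted(past_actual_durations)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

plan = inside_view([2, 3, 1])        # "it's just 6 hours of work"
history = outside_view([9, 14, 11])  # similar tasks actually took ~11 hours
```

The planning-fallacy literature says to trust `history` over `plan` for time estimates; the open question is which other domains have reference classes clean enough for the same move to work.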

As to whether LW is rational, I got bored of that kind of hand-wringing years ago. If all you're really looking for is an up/down vote on that, I suggest a poll, which I will probably ignore because it's a boring question.

Comment author: David_Gerard 04 July 2014 10:03:00AM *  1 point [-]

On the other hand, all the rational people I've met are very smart.

Surely you know people of average intelligence who consistently show "common sense" (so rare it's pretty much a superpower). They may not be super-smart, but they're sure as heck not dumb.

Comment author: Sophronius 04 July 2014 01:57:31PM 1 point [-]

Common sense does seem like a superpower sometimes, but that's not a real explanation. I think that what we call common sense is mostly just the result of clear thinking and having a distaste for nonsense. If you favour reality over fancies, you are more likely to pay more attention to reality --> better mental habits --> stronger intuition = common sense.

But to answer your question, yes I do know people like that and I do respect them for it (though they still have above average intelligence, mostly). However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing what experts to listen to.

Comment author: David_Gerard 04 July 2014 04:24:03PM *  1 point [-]

However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they're also really good at knowing what experts to listen to.

Yeah, but I'd say that about the smart people too.

Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."

Comment author: Sophronius 04 July 2014 06:31:35PM *  2 points [-]

S1) Most smart people aren't rational but most rational people are smart
D1) There are people of average intelligence with common sense
S2) Yes they have good intuition but you cannot trust them with counter-intuitive subjects (people with average intelligence are not rational)
D2) You can't trust smart people with counter-intuitive subjects either (smart people aren't rational)

D2) does not contradict S1 because "most smart people aren't rational" isn't the same as "most rational people aren't smart", which is of course the main point of S1).

Interesting article, it confirms my personal experiences in corporations. However, I think the real problem is deeper than smart people being able to rationalize anything. The real problem is that overconfidence and rationalizing your actions makes becoming a powerful decision-maker easier. The mistakes they make due to irrationality don't catch up with them until after the damage is done, and then the next overconfident guy gets selected.

Comment author: XiXiDu 04 July 2014 05:11:18PM *  3 points [-]

Related, just seen today: The curse of smart people. SPOILER: "an ability to convincingly rationalize nearly anything."

The AI box experiment seems to support this. People who have been persuaded that it would be irrational to let an unfriendly AI out of the box are being persuaded to let it out of the box.

The ability of smarter or more knowledgeable people to convince less intelligent or less educated people of falsehoods (e.g. parents and children) shows that we need to put less weight on arguments and more weight on falsifiability.

Comment author: Sophronius 04 July 2014 06:17:21PM *  2 points [-]

I wouldn't use the AI box experiment as an example for anything because it is specifically designed to be a black box: It's exciting precisely because the outcome confuses the heck out of people. I'm having trouble parsing this in Bayesian terms, but I think you're committing a rationalist sin by using an event that your model of reality couldn't predict in advance as evidence that your model of reality is correct.

I strongly agree that we need to put less weight on arguments but I think falsifiability is impractical in everyday situations.

Comment author: CellBioGuy 05 March 2014 05:44:30AM *  4 points [-]

But a humble attempt at rationalism is so much less funny...

More seriously, I could hardly agree more with the statement that intelligence has remarkably little to do with susceptibility to irrational ideas. And as much as I occasionally berate others for falling into absurd patterns, I realize that it pretty much has to be true that somewhere in my head is something just as utterly inane that I will likely never be able to see, and it scares me. As such sometimes I think dissensus is not only good, but necessary.

Comment author: Bugmaster 04 March 2014 11:50:59PM 5 points [-]

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid.

As far as I understand, the Principle of Charity is defined differently; it states that you should interpret other people's arguments on the assumption that these people are arguing in good faith. That is to say, you should assume that your interlocutor honestly believes in everything he's saying, and that he has no ulterior motive beyond getting his point across. He may be entirely ignorant, stupid, or both; but he's not a liar or a troll.

This principle allows all parties to focus on the argument, and to stick to the topic at hand -- as opposed to spiraling into the endless rabbit-holes of psychoanalyzing each other.

Comment author: AshwinV 25 April 2014 07:14:28AM 0 points [-]

Interesting point of distinction.

Irrespective of how you define the principle of charity (i.e. motivation-based or intelligence-based), I believe the principle should not become a universal guideline; it is better applied selectively, as a sort of "principle of differential charity". This obviously tracks basic real-world considerations (e.g. expertise, in the intelligence case, and political/official positioning, in the motivation case).

I also realise that being differentially charitable may come with the risk of becoming even more biased, if your priors themselves are based on extremely biased findings. However, I would think that by and large it works well, and is a great time saver when deciding how much effort to put into evaluating claims and statements alike.

Comment author: fubarobfusco 05 March 2014 07:31:19PM 3 points [-]

Wikipedia quotes a few philosophers on the principle of charity:

Blackburn: "it constrains the interpreter to maximize the truth or rationality in the subject's sayings."

Davidson: "We make maximum sense of the words and thoughts of others when we interpret in a way that optimises agreement."

Also, Dennett in The Intentional Stance quotes Quine that "assertions startlingly false on the face of them are likely to turn on hidden differences of language", which seems to be a related point.

Comment author: ChrisHallquist 04 March 2014 05:48:16AM 1 point [-]

Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:

Comment author: Wes_W 03 March 2014 07:44:05PM 0 points [-]

Excellent post. I don't have anything useful to add at the moment, but I am wondering if the second-to-last paragraph:

First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that

is just missing a period at the end, or has a fragmented sentence.

Comment author: John_Maxwell_IV 03 March 2014 07:09:23AM *  4 points [-]

Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.

I agree with your assessment. My suspicion is that this is due to nth-degree imitations of certain high status people in the LW community who have been rather shameless about speaking in extremely confident tones about things that they are only 70% sure about. The strategy I have resorted to for people like this is asking them/checking if they have a PredictionBook account and if not, assuming that they are overconfident just like is common with regular human beings. At some point I'd like to write an extended rebuttal to this post.

To provide counterpoint, however, there are certainly a lot of people who go around confidently saying things who are not as smart or rational as a 5th percentile LWer. So if the 5th percentile LWer is having an argument with one of these people, it's arguably an epistemological win if they are displaying a higher level of confidence than the other person in order to convince bystanders. An LWer friend of mine who is in the habit of speaking very confidently about things made me realize that maybe it was a better idea for me to develop antibodies to smart people speaking really confidently and start speaking really confidently myself than it was for me to get him to stop speaking as confidently.

Comment author: brazil84 02 March 2014 09:17:55PM 1 point [-]

By the way, I agree with you that there is a problem with rationalists who are a lot less rational than they realize.

What would be nice is if there were a test for rationality just like one can test for intelligence. It seems that it would be hard to make progress without such a test.

Unfortunately there would seem to be a lot of opportunity for a smart but irrational person to cheat on such a test without even realizing it. For example, if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudly announce his atheism and would tell himself and others that he is an atheist because he is a smart, rational person and that's how he has processed the evidence.

Another problem is that there is no practical way to assess the rationality of the person who is designing the rationality test.

Someone mentioned weight control as a rationality test. This is an intriguing idea -- I do think that self-deception plays an important role in obesity. I would like to think that in theory, a rational fat person could think about the way his brain and body work; create a reasonably accurate model; and then develop and implement a strategy for weight loss based on his model.

Perhaps some day you will be able to wear a mood-ring type device which beeps whenever you are starting to engage in self-deception.

Comment author: Viliam_Bur 03 March 2014 03:02:07PM *  2 points [-]

if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudly announce his atheism

Rationality tests shouldn't be about professing things; not even things correlated with rationality. Intelligence tests also aren't about professing intelligent things (whatever those would be), they are about solving problems. Analogously, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password.

there is no practical way to assess the rationality of the person who is designing the rationality test

If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong. Again, IQ tests are not made by finding the highest-IQ people on the planet and telling them: "Please use your superior rationality in ways incomprehensible to us mere mortals to design a good IQ test."

Both intelligence and rationality are necessary in designing an IQ test or a rationality test, but that's in a similar way that intelligence and rationality are necessary to design a new car. The act of designing requires brainpower; but it's not generally true that tests of X must be designed by people with high X.

Comment author: brazil84 03 March 2014 03:37:30PM 1 point [-]

Analogically, rationality tests should require people to use rationality to solve novel situations, not just guess the teacher's password.

I agree with this. But I can't think of such a rationality test. I think part of the problem is that a smart but irrational person could use his intelligence to figure out the answers that a rational person would come up with and then choose those answers.

On an IQ test, if you are smart enough to figure out the answers that a smart person would choose, then you yourself must be pretty smart. But I don't think the same thing holds for rationality.

If the test depends too much on trusting the rationality of the person designing the test, they are doing it wrong.

Well yes, but it's hard to think of how to do it right. What's an example of a question you might put on a rationality test?

Comment author: Viliam_Bur 04 March 2014 08:34:27AM 1 point [-]

I agree that rationality tests will be much more difficult than IQ tests. First, with IQ tests we have the benefit of an existing model: if we tried to create a new one, we would already know what to do and what to expect. Second, rationality tests may be inherently more difficult.

Still, I think that if we look at the history of IQ tests, we can take some lessons from it. I mean: imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections. Today we know that the first IQ tests got a few things wrong. And we also know that the "online IQ tests" are nonsense from the psychometrics point of view, but to people without a psychological education they seem right, because their intuitive idea of IQ is "being able to answer difficult questions invented by other intelligent people," when in fact the questions in Raven's progressive matrices are rather simple.

Twenty years from now we may have analogous knowledge about rationality tests, and some things may seem obvious in hindsight. At this moment, while respecting that intelligence is not the same thing as rationality, IQ tests are the outside-view equivalent I will use for making guesses, because I have no better analogy.

The IQ tests were first developed for small children. The original purpose of the early IQ tests was to tell whether a six-year-old child was ready to go to elementary school, or whether we should give them another year. They probably weren't even called IQ tests yet, but school-readiness tests. Only later was the idea of some people being "smarter/dumber for their age" generalized to all ages.

Analogously, we could probably start measuring rationality where it is easiest: in children. I'm not saying it will be easy, just easier than with adults. Many of small children's logical mistakes will be less politically controversial. And it is easier to reason about mistakes that you are not yourself prone to making. Some of the things we learn about children may later also be useful for studying adults.

Within intelligence, there was a controversy (and some people still try to keep it alive) over whether "intelligence" is just one thing or many different things (multiple intelligences). There will be analogous questions about "rationality". And the proper way to answer these questions is to create tests for the individual hypothetical components, then gather the data and see how these abilities correlate. Measurement and math, not speculation. Despite making an analogy here, I am not saying the answer will be the same. Maybe "resisting peer pressure" and "updating on new evidence" and "thinking about multiple possibilities before choosing and defending one of them" and "not having a strong identity that dictates all answers" will strongly correlate with each other; maybe they will be independent or even contradictory; maybe some of them will correlate together and the others will not, so we get two or three clusters of traits. This is an empirical question and must be answered by measurement.

Some of the intelligence tests in the past were strongly culturally biased (e.g. contained questions about history or literature, knowledge of proverbs or cultural norms); some of them required specific skills (e.g. mathematical). But some of them were not. Now that we have many different solutions, we can pick the less biased ones. But even the old ones were better than nothing: useful approximations within a given cultural group. If the first rationality tests are similarly flawed, that will not mean the entire field is doomed; later the tests can be improved, the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.

I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: "What would HJPEV, the disgusting little heathen, do in this situation?" and running a rationality routine in a sandbox. In that case, the best we could achieve would be tests measuring someone's capacity to think rationally if they choose to. Such a person could still later become an ugly surprise. Well... I suppose we just have to accept this, and add it to the list of warnings about what the rationality tests don't show.

As an example of the questions in such tests: I would probably not try to test "rationality" as a whole in a single answer, but write separate questions focused on each component. For example, a test of resisting peer pressure would describe a story where one person provides good evidence for X, but many people provide obviously bad reasoning for Y; you would have to choose which is more likely. For a test of updating, I would provide multiple pieces of evidence, where the first three point towards an answer X, but the following seven point towards an answer Y, and might even contain an explanation of why the first three pieces were misleading. The reader would be asked to write an answer after reading the first three pieces, and again after reading all of them. For seeing multiple solutions, I would present some puzzle with multiple solutions, and the task would be to find as many as possible within a time limit.

Each of these questions has some obvious flaws. But, analogically with the IQ tests, I believe the correct approach is to try dozens of flawed questions, gather data, and see how much they correlate with each other, make a factor analysis, gradually replace them with more pure versions, etc.
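The aggregation step described above can be sketched in code. This is a toy illustration with invented data, not a real study: we simulate subtest scores for a pool of test-takers, where two hypothetical components ("resisting peer pressure" and "updating") share a latent factor and a third does not, then check how the scores correlate.

```python
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical subtest scores for 200 simulated test-takers.
random.seed(0)
n = 200
ability = [random.gauss(0, 1) for _ in range(n)]  # shared latent factor
subtests = {
    "peer_pressure": [a + random.gauss(0, 1) for a in ability],
    "updating": [a + random.gauss(0, 1) for a in ability],
    "multiple_solutions": [random.gauss(0, 1) for _ in range(n)],  # unrelated
}

names = list(subtests)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = correlation(subtests[names[i]], subtests[names[j]])
        print(f"{names[i]} vs {names[j]}: r = {r:+.2f}")
```

If the first two subtests correlate strongly while the third stands apart, that is evidence for clusters of traits rather than one unified "rationality", which is exactly the empirical question the comment raises.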

Comment author: brazil84 04 March 2014 10:05:27AM 1 point [-]

Still I think that if we look at the history of the IQ tests, we can take some lessons from there. I mean; imagine that there are no IQ tests yet, and you are supposed to invent the first one. The task would probably seem impossible, and there would be similar objections.

It's hard to say given that we have the benefit of hindsight, but at least we wouldn't have to deal with what I believe to be the killer objection -- that irrational people would subconsciously cheat if they know they are being tested.

If the first rationality tests are similarly flawed, that will not mean the entire field is doomed; later the tests can be improved, the heavily culture-specific questions removed, getting closer to the abstract essence of rationality.

I agree, but that still doesn't get you any closer to overcoming the problem I described.

I agree there is a risk that an irrational person might have a good model of what a rational person would do (while it is impossible for a stupid person to predict how a smart person would solve a difficult problem). I can imagine a smart religious fanatic thinking: "What would HJPEV, the disgusting little heathen, do in this situation?" and running a rationality routine in a sandbox. In that case, the best we could achieve would be tests measuring someone's capacity to think rationally if they choose to.

To my mind that's not very helpful because the irrational people I meet have been pretty good at thinking rationally if they choose to. Let me illustrate with a hypothetical: Suppose you meet a person with a fervent belief in X, where X is some ridiculous and irrational claim. Instead of trying to convince them that X is wrong, you offer them a bet, the outcome of which is closely tied to whether X is true or not. Generally they will not take the bet. And in general, when you watch them making high or medium stakes decisions, they seem to know perfectly well -- at some level -- that X is not true.

Of course not all beliefs are capable of being tested in this way, but when they can be tested the phenomenon I described seems pretty much universal. The reasonable inference is that irrational people are generally speaking capable of rational thought. I believe this is known as "standby rationality mode."

Comment author: TheOtherDave 04 March 2014 03:29:44PM -1 points [-]

I agree with you that people who assert crazy beliefs frequently don't behave in the crazy ways those beliefs would entail.

This doesn't necessarily mean they're engaging in rational thought.

For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn't follow that my behavior is sane... only that it isn't crazy in the specific way indicated by X. There are lots of ways to be crazy.

More generally, though... for my own part what I find is that most people's betting/decision making behavior is neither particularly "rational" nor "irrational" in the way I think you're using these words, but merely conventional.

That is, I find most people behave the way they've seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional). Sometimes that behavior is sane, sometimes it's crazy, but in neither case does it reflect sanity or insanity as a fundamental attribute.

You might find yvain's discussion of epistemic learned helplessness enjoyable and interesting.

Comment author: brazil84 04 March 2014 07:47:20PM 1 point [-]

This doesn't necessarily mean they're engaging in rational thought.

For one thing, the real world is not that binary. If I assert a crazy belief X, but I behave as though X is not true, it doesn't follow that my behavior is sane... only that it isn't crazy in the specific way indicated by X. There are lots of ways to be crazy.

More generally, though... for my own part what I find is that most people's betting/decision making behavior is neither particularly "rational" nor "irrational" in the way I think you're using these words, but merely conventional.

That is, I find most people behave the way they've seen their peers behaving, regardless of what beliefs they have, let alone what beliefs they assert (asserting beliefs is itself a behavior which is frequently conventional)

That may very well be true . . . I'm not sure what it says about rationality testing. If there is a behavior which is conventional but possibly irrational, it might not be so easy to assess its rationality. And if it's conventional and clearly irrational, how can you tell whether a testee engages in it? Probably you cannot trust self-reporting.

Comment author: Lumifer 04 March 2014 08:59:58PM *  0 points [-]

<ha-ha-only-serious>

Rationality is commonly defined as winning. Therefore rationality testing is easy -- just check if the subject is a winner or a loser.

</ha-ha-only-serious>

Comment author: TheOtherDave 04 March 2014 07:52:54PM 0 points [-]

A lot of words are getting tossed around here whose meanings I'm not confident I understand. Can you say what it is you want to test for, here, without using the word "rational" or its synonyms? Or can you describe two hypothetical individuals, one of whom you'd expect to pass such a test and the other you'd expect to fail?

Comment author: brazil84 04 March 2014 08:45:39PM *  1 point [-]

Our hypothetical person believes himself to be very good at not letting his emotions and desires color his judgments. However his judgments are heavily informed by these things and then he subconsciously looks for rationalizations to justify them. He is not consciously aware that he does this.

Ideally, he should fail the rationality test.

Conversely, someone who passes the test is someone who correctly believes that his desires and emotions have very little influence over his judgments.

Does that make sense?

And by the way, one of the desires of Person #1 is to appear "rational" to himself and others. So it's likely he will subconsiously attempt to cheat on any "rationality test. "

Comment author: TheOtherDave 04 March 2014 09:19:23PM 0 points [-]

Yeah, that helps.

If I were constructing a test to distinguish person #1 from person #2, I would probably ask for them to judge a series of scenarios that were constructed in such a way that formally, the scenarios were identical, but each one had different particulars that related to common emotions and desires, and each scenario was presented in isolation (e.g., via a computer display) so it's hard to go back and forth and compare.

I would expect P2 to give equivalent answers in each scenario, and P1 not to (though they might try).
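TheOtherDave's design can be sketched as a scoring rule (the scenario format, the 1-7 answer scale, and the tolerance are all invented for illustration): present pairs of formally identical scenarios wrapped in different emotionally loaded particulars, and measure how often a subject's two answers within a pair agree.

```python
# Each pair holds the subject's answers (on a 1-7 agreement scale) to two
# scenarios that are formally identical but differ in emotional framing.
def consistency_score(paired_answers, tolerance=1):
    """Fraction of formally-identical scenario pairs answered consistently."""
    consistent = sum(1 for a, b in paired_answers if abs(a - b) <= tolerance)
    return consistent / len(paired_answers)

# Hypothetical subjects: P2 judges the abstract structure, so answers match
# within each pair; P1's answers drift with the emotional framing.
p2_answers = [(6, 6), (3, 2), (5, 5), (2, 2)]
p1_answers = [(6, 2), (3, 6), (5, 1), (2, 5)]

print(consistency_score(p2_answers))  # 1.0
print(consistency_score(p1_answers))  # 0.0
```

Presenting each scenario in isolation, as the comment suggests, matters here: the test only discriminates if the subject cannot compare the framings side by side and manually equalize the answers.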

Comment author: Viliam_Bur 04 March 2014 10:53:47AM *  0 points [-]

Okay, I think there is a decent probability that you are right, but at this moment we need more data, which we will get by trying to create different kinds of rationality tests.

A possible outcome is that we won't get true rationality tests, but at least something partially useful, e.g. tests selecting the people capable of rational thought, which includes a lot of irrational people, but still not everyone. Which may still appear to be just another form of intelligence tests (a sufficiently intelligent irrational person is able to make rational bets, and still believe they have an invisible dragon in the garage).

So... perhaps this is a moment where I should make a bet about my beliefs. Assuming that Stanovich does not give up, and other people follow him (that is, assuming that enough psychologists even try to create rationality tests), I'd guess... probability 20% within 5 years, 40% within 10 years, 80% ever (pre-Singularity) that there will be a test which predicts rationality significantly better than an IQ test. Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test, even if you had to pay more for it. (Which doesn't mean that employers actually will want to use it. Or will be legally allowed to.) And probability 10% within 10 years, 60% ever that a true "rationality test" will be invented, at least for values up to 130 (which many compartmentalizing people will still pass). These numbers are just a wild guess; tomorrow I would probably give different values. I just thought it would be proper to express my beliefs in this format, because it encourages rationality in general.

Comment author: brazil84 04 March 2014 12:41:40PM 1 point [-]

Which may still appear to be just another form of intelligence tests...

Yes, I have a feeling that "capability of rationality" would be highly correlated with IQ.

Not completely reliably, but sufficiently that you would want your employees to be tested by that test instead of an IQ test

Your mention of employees raises another issue, which is who the test would be aimed at. When we first started discussing the issue, I had an (admittedly vague) idea in my head that the test could be for aspiring rationalists. i.e. that it could it be used to bust irrational lesswrong posters who are far less rational than they realize. It's arguably more of a challenge to come up with a test to smoke out the self-proclaimed paragon of rationality who has the advantage of careful study and who knows exactly what he is being tested for.

By analogy, consider the Crowne-Marlowe Social Desirability Scale, which has been described as a test which measures "the respondent's desire to exaggerate his own moral excellence and to present a socially desirable facade." Here is a sample question from the test:

  1. T F I have never intensely disliked anyone

Probably the test works pretty well for your typical Joe or Jane Sixpack. But someone who is intelligent, who has studied up in this area, and who knows what's being tested will surely conceal his desire to exaggerate his moral excellence.

That said, having thought about it, I do think there is a decent chance that solid rationality tests will be developed, at least for subjects who are unprepared. One possibility is to measure reaction times, as with "Project Implicit." Perhaps self-deception is more cognitively demanding than self-honesty, and therefore a clever test might measure it. But you still might run into the problem of subconscious cheating.

Comment author: Nornagest 06 March 2014 11:57:05PM *  2 points [-]

Perhaps self-deception is more cognitively demanding than self-honesty and therefore a clever test might measure it.

If anything, I might expect the opposite to be true in this context. Neurotypical people have fast and frugal conformity heuristics to fall back on, while self-honesty on a lot of questions would probably take some reflection; at least, that's true for questions that require aggregating information or assessing personality characteristics rather than coming up with a single example of something.

It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.

Comment author: brazil84 07 March 2014 06:30:22AM 0 points [-]

If anything, I might expect the opposite to be true in this context.

Well consider the hypothetical I proposed:

suppose you are having a Socratic dialogue with someone who holds irrational belief X. Instead of simply laying out your argument, you ask the person whether he agrees with Proposition Y, where Proposition Y seems pretty obvious and indisputable. Our rational person might quickly and easily agree or disagree with Y. Whereas our irrational person needs to think more carefully about Y; decide whether it might undermine his position; and if it does, construct a rationalization for rejecting Y. This difference in thinking might be measured in terms of reaction times.

See what I mean?

I do agree that in other contexts, self-deception might require less thought. e.g. spouting off the socially preferable answer to a question without really thinking about what the correct answer is.

It'd definitely be interesting to hook someone up to a polygraph or EEG and have them take the Crowne-Marlowe Scale, though.

Yes.
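The reaction-time idea in this exchange can be sketched as a toy measurement. Everything here is invented for illustration: the latencies, the item labels, and the assumption that rationalizing a belief-threatening proposition adds measurable delay compared to a neutral one.

```python
import statistics

# Hypothetical reaction times (ms) for agreeing/disagreeing with "obvious"
# propositions. Assumption: a subject who must check whether a proposition
# threatens a held belief, and construct a rationalization if it does, will
# be slower on those items than on neutral ones.
neutral = [620, 580, 640, 600, 590, 610, 630, 605]
threatening = [910, 870, 990, 940, 860, 1020, 900, 950]

def latency_gap(baseline, probe):
    """Difference of median reaction times: a crude rationalization signal."""
    return statistics.median(probe) - statistics.median(baseline)

gap = latency_gap(neutral, threatening)
print(f"median slowdown on belief-threatening items: {gap:.0f} ms")
```

A real instrument would need many items per subject and a statistical test against within-subject variability; the median difference is just the simplest version of the signal being proposed.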

Comment author: Viliam_Bur 04 March 2014 04:47:22PM 1 point [-]

That sample question reminds me of a "lie score", which is a hidden part of some personality tests. Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie. Most people will lie on one or two of ten such questions, but the rule of thumb is that if they lie on five or more, you just throw the questionnaire away and declare them a cheater. -- However, if they didn't lie on any of these questions, you do a background check on whether they have studied psychology. And you keep in mind that the test score may be manipulated.

Okay, I admit that this problem would be much worse for rationality tests, because when you are screening for a given personality, the candidates most likely haven't studied psychology. But if CFAR or similar organizations become very popular, then many candidates for highly rational people will be "tainted" by the explicit study of rationality, simply because studying rationality explicitly is probably a rational thing to do (this is just an assumption), but it's also what an irrational person self-identifying as a rationalist would do. Also, practicing for IQ tests is obvious cheating, but practicing to get better at rational tasks is the rational thing to do, and a wannabe rationalist would do it, too.

Well, seems like the rationality tests would be more similar to IQ tests than to personality test. Puzzles, time limits... maybe even the reaction times or lie detectors.
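The lie-score rule of thumb described above is simple enough to write down. The thresholds are the ones stated in the comment (flag at five or more "nice" answers out of ten; background-check a perfect zero); the function and label names are invented.

```python
LIE_ITEMS = 10  # number of "almost certainly a lie if answered nicely" items

def screen(nice_answer_count, threshold=5):
    """Return a disposition for a questionnaire based on its lie score."""
    if nice_answer_count >= threshold:
        return "discard"           # probable impression management
    if nice_answer_count == 0:
        return "check background"  # suspiciously honest; maybe a psychologist
    return "accept"

print(screen(2))  # typical respondent: "accept"
print(screen(7))  # likely faking good: "discard"
print(screen(0))  # too good to be true: "check background"
```

The interesting structural point is the asymmetry: both extremes of the score are suspicious, just for different reasons, which is exactly why a sophisticated test-taker who knows the scale is hard to catch.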

Comment author: PeterDonis 06 March 2014 11:43:06PM *  0 points [-]

Among the serious questions, there are also some questions like this, where you are almost certain that the "nice" answer is a lie.

On the Crowne-Marlowe scale, it looks to me (having found a copy online and taken it) like most of the questions are of this form. When I answered all of the questions honestly, I scored 6, which according to the test, indicates that I am "more willing than most people to respond to tests truthfully"; but what it indicates to me is that, for all but 6 out of 33 questions, the "nice" answer was a lie, at least for me.

The 6 questions were the ones where the answer I gave was, according to the test, the "nice" one, but just happened to be the truth in my case: for example, one of the 6 was "T F I like to gossip at times"; I answered "F", which is the "nice" answer according to the test--presumably on the assumption that most people do like to gossip but don't want to admit it--but I genuinely don't like to gossip at all, and can't stand talking to people who do. Of course, now you have the problem of deciding whether that statement is true or not. :-)

Could a rationality test be gamed by lying? I think that possibility is inevitable for a test where all you can do is ask the subject questions; you always have the issue of how to know they are answering honestly.

Comment author: brazil84 04 March 2014 08:28:37PM 0 points [-]

Well, seems like the rationality tests would be more similar to IQ tests than to personality test. Puzzles, time limits... maybe even the reaction times or lie detectors.

Yes, reaction times seem like an interesting possibility. There is an online test for racism which uses this principle. But it would be pretty easy to beat the test if the results counted for anything. Actually lie detectors can be beaten too.

Perhaps brain imaging will eventually advance to the point where you can cheaply and accurately determine if someone is engaged in deception or self-deception :)

Comment author: [deleted] 02 March 2014 04:00:57PM *  9 points [-]

Ok, there's no way to say this without sounding like I'm signalling something, but here goes.

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid; otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

"If you can't say something you are very confident is actually smart, don't say anything at all." This is, in fact, why I don't say very much, or say it in a lot of detail, much of the time. I have all kinds of thoughts about all kinds of things, but I've had to retract sincerely-held beliefs so many times I just no longer bother embarrassing myself by opening my big dumb mouth.

Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme.

In my opinion, it's actually terrible branding for a movement. "Rationality is systematized winning"; ok, great, what are we winning at? Rationality and goals are orthogonal to each other, after all, and at first glance, LW's goals can look like nothing more than an attempt to signal "I'm smarter than you" or even "I'm more of an emotionless Straw-Vulcan cyborg than you" to the rest of the world.

This is not a joke, I actually have a friend who virulently hates LW and resents his friends who get involved in it because he thinks we're a bunch of sociopathic Borg wannabes following a cult of personality. You might have an impulse right now to just call him an ignorant jerk and be done with it, but look, would you prefer the world in which you get to feel satisfied about having identified an ignorant jerk, or would you prefer the world in which he's actually persuaded about some rationalist ideas, makes some improvements to his life, maybe donates money to MIRI/CFAR, and so on? The latter, unfortunately, requires social engagement with a semi-hostile skeptic, which we all know is much harder than just calling him an asshole, taking our ball, and going home.

So anyway, what are we trying to do around here? It should be mentioned a bit more often on the website.

(At the very least, my strongest evidence that we're not a cult of personality is that we disagree amongst ourselves about everything. On the level of sociological health, this is an extremely good sign.)

That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.

While I do agree about the jargon issue, I think the contrarianism and the meta-contrarianism often make people feel they've arrived to A Rational Answer, at which point they stop thinking.

For instance, if Americans have always thought their political system is too partisan, has anyone in political science actually bothered to construct an objective measurement and collect time-series data? What does the time-series data actually say? Besides, once we strip off the tribal signalling, don't all those boringly mainstream ideologies actually have a few real points we could do with engaging with?

(Generally, LW is actually very good at engaging with those points, but we also simultaneously signal that we're adamantly refusing to engage in partisan politics. It's like playing an ideological Tsundere: "Baka! I'm only doing this because it's rational. It's not like I agree with you or anything! blush")

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid.

Ok, but then let me propose a counter-principle: Principle of Informative Calling-Out. I actively prefer to be told when I'm wrong and corrected. Unfortunately, once you ditch the principle of charity, the most common response to an incorrect statement often becomes, essentially, "Just how stupid are you!?", or other forms of low-information signalling about my interlocutor's intelligence and rationality compared to mine.

I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.

You should be looking at this instrumentally. The question is not whether you think "mainstream philosophy" (the very phrase is suspect, since mainstream academic philosophy divides into a number of distinct schools, Analytic and Continental being the top two off the top of my head) is correct. The question is whether you think you will, at some point, have any use for interacting with mainstream philosophy and its practitioners. If they will be useful to you, it is worth learning their vocabulary and their modes of operation in order to, when necessary, enlist their aid, or win at their game.

Comment author: brazil84 02 March 2014 01:10:59PM 4 points [-]

The problem with this is that other people are often saying something stupid. Because of that, I think charity is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Well perhaps you should adopt a charitable interpretation of the principle of charity :) It occurs to me that the phrase itself might not be ideal since "charity" implies that you are giving something which the recipient does not necessarily deserve. Anyway, here's an example which I saw just yesterday:

The context is a discussion board where people argue, among other things, about discrimination against fat people.


Person 1: Answer a question for me: if you were stuck on the 3rd floor of a burning house and passed out, and you had a choice between two firefighter teams, one composed of men who weighed 150-170 lbs and one composed of men well above 300, which team would you choose to rescue you?

Person 2: My brother is 6' 9", and with a good deal of muscle and just a little pudge he'd be well over 350 (he's currently on the thin side, probably about 290 or so). He'd also be able to jump up stairs and lift any-fucking-thing. Would I want him to save me? Hell yes. Gosh, learn to math.


It seems to me the problem here is that Person 2 seized upon an ambiguity in Person 1's question in order to dodge the central point of the question. The Principle of Charity would have required Person 2 to assume that the 300 pound men in the hypothetical were of average height and not 6'9"

I think it's a somewhat important principle because it's very difficult to construct statements and questions without ambiguities which can be seized upon by those who are hostile to one's argument. If I say "the sky is blue," every reasonable person knows what I mean. And it's a waste of everyone's time and energy to make me say something like "The sky when viewed from the surface of the Earth generally appears blue to humans with normal color vision during the daytime when the weather is clear."

So call it whatever you want, the point is that one should be reasonable in interpreting others' statements and questions.

Comment author: Vaniver 02 March 2014 10:06:31AM 0 points [-]

I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.

So, respond with something like "I don't think sanity is a single personal variable which extends to all held beliefs." It conveys the same information- "I don't trust conclusions solely because you reached them"- but it doesn't convey the implication that this is a personal failing on their part.

I've said this before when you've brought up the principle of charity, but I think it bears repeating. The primary benefit of the principle of charity is to help you, the person using it, and you seem to be talking mostly about how it affects discourse, and that you don't like it when other people expect that you'll use the principle of charity when reading them. I agree with you that they shouldn't expect that- but I find it more likely that this is a few isolated incidents (and I can visualize a few examples) than that this is a general tendency.

Comment author: Yvain 02 March 2014 07:50:21AM *  34 points [-]

I interpret you as making the following criticisms:

1. People disagree with each other, rather than use Aumann agreement, which proves we don't really believe we're rational

Aside from Wei's comment, I think we also need to keep track of what we're doing.

If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.
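The poll-and-average procedure described here is simple enough to sketch concretely. This is a hypothetical illustration only; the estimates and the `pool_predictions` helper are invented, not a real PredictionBook API:

```python
# Minimal sketch of pooling independent probability estimates
# (a simple linear opinion pool). All numbers are invented.

def pool_predictions(probs):
    """Average a list of probability estimates for a single event."""
    if not probs:
        raise ValueError("need at least one prediction")
    for p in probs:
        if not 0.0 <= p <= 1.0:
            raise ValueError("probabilities must lie in [0, 1]")
    return sum(probs) / len(probs)

# Hypothetical individual estimates for "Russia will invade Ukraine tomorrow"
estimates = [0.05, 0.10, 0.02, 0.20, 0.08]
consensus = pool_predictions(estimates)
print(round(consensus, 3))  # 0.09
```

The point of deferring to the pooled number is that the average washes out individual noise, even though no single forecaster's estimate should be trusted over one's own.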

But this doesn't preclude discussion. Aumann agreement is a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:

  • We can both increase our understanding of the issue.

  • We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are hot versus cold at which parts of the year.

  • We may trace part of our disagreement back to differing moral values. If I say "capital punishment is good" and you say "capital punishment is bad", then it may be right for me to adjust a little in your favor since you may have evidence that many death row inmates are innocent, but I may also find that most of the force of your argument is just that you think killing people is never okay. Depending on how you feel about moral facts and moral uncertainty, we might not want to Aumann adjust this one. Nearly everything in politics depends on moral differences at least a little.

  • We may trace our disagreement back to complicated issues of worldview and categorization. I am starting to interpret most liberal-conservative issues as a tendency to draw Schelling fences in different places and then correctly reason with the categories you've got. I'm not sure if you can Aumann-adjust that away, but you definitely can't do it without first realizing it's there, which takes some discussion.

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

2. It is possible that high IQ people can be very wrong and even in a sense "stupidly" wrong, and we don't acknowledge this enough.

I totally agree this is possible.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

We end up in a kind of bravery debate situation here, where we have to decide whether it's worth warning people more against the first failure mode (at the risk it will increase the second), or against the second failure mode more (at the risk that it will increase the first).

And, well, studies pretty universally find everyone is overconfident of their own opinions. Even the Less Wrong survey finds people here to be really overconfident.

So I think it's more important to warn people to be less confident they are right about things. The inevitable response is "What about creationism?!" to which the counterresponse is "Okay, but creationists are stupid, be less confident when you disagree with people as smart or smarter than you."

This gets misinterpreted as IQ fetishism, but I think it's more of a desperate search for something, anything to fetishize other than our own subjective feelings of certainty.

3. People are too willing to be charitable to other people's arguments.

This is another case where I think we're making the right tradeoff.

Once again there are two possible failure modes. First, you could be too charitable, and waste a lot of time engaging with people who are really stupid, trying to figure out a smart meaning to what they're saying. Second, you could be not charitable enough by prematurely dismissing an opponent without attempting to understand her, and so perhaps missing out on a subtler argument that proves she was right and you were wrong all along.

Once again, everyone is overconfident. No one is underconfident. People tell me I am too charitable all the time, and yet I constantly find I am being not-charitable-enough, unfairly misinterpreting other people's points, and so missing or ignoring very strong arguments. Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

In fact, it's hard for me to square your observation that we still have strong disagreements with your claim that we're too charitable. At least one side is getting things wrong. Shouldn't they be trying to pay a lot more attention to the other side's arguments?

I feel like utter terror is underrated as an epistemic strategy. Unless you are some kind of freakish mutant, you are overconfident about nearly everything and have managed to build up very very strong memetic immunity to arguments that are trying to correct this. Charity is the proper response to this, and I don't think anybody does it enough.

4. People use too much jargon.

Yeah, probably.

There are probably many cases in which the jargony terms have subtly different meaning or serve as reminders of a more formal theory and so are useful ("metacontrarian" versus "showoff", for example), but probably a lot of cases where people could drop the jargon without cost.

I think this is a more general problem of people being bad at writing - "utilize" vs. "use" and all that.

5. People are too self-congratulatory and should be humbler

What's weird is that when I read this post, you keep saying people are too self-congratulatory, but to me it sounds more like you're arguing people are being too modest, and not self-congratulatory enough.

When people try to replace their own subjective analysis of who can easily be dismissed ("They don't agree with me; screw them") with something based more on IQ or credentials, they're being commendably modest ("As far as I can tell, this person is saying something dumb, but since I am often wrong, I should try to take the Outside View by looking at somewhat objective indicators of idea quality.")

And when people try to use the Principle of Charity, once again they are being commendably modest ("This person's arguments seem stupid to me, but maybe I am biased or a bad interpreter. Let me try again to make sure.")

I agree that it is an extraordinary claim to believe anyone is a perfect rationalist. That's why people need to keep these kinds of safeguards in place as saving throws against their inevitable failures.

Comment author: ChrisHallquist 03 March 2014 12:42:28AM *  5 points [-]

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking "because I'm an <engineer / physicist / MD / mathematician>, I can see those evolutionary biologists are talking nonsense." Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician's fallacy.)

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

I've already said why I don't think high IQ is super-relevant to deciding who you should read charitably. Overall education also doesn't strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama's a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they're "supposed" to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I'd agree that you should be more inclined to assume they're not saying anything stupid about that field (though even that presumption is weakened if they're saying something that would be controversial among their peers).

As for "basic commitment to rationality," I'm not sure what you mean by that. I don't know how I'd turn it into a useful criterion, aside from defining it to mean people I'd trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It's quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense.

And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Comment author: blacktrance 03 March 2014 04:30:06PM 2 points [-]

the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group.

The danger of this approach is obvious, but it can have benefits as well. You may not know that a particular LessWronger is sane, but you do know that on average LessWrong has higher sanity than the general population. That's a reason to be more charitable.

Comment author: Yvain 03 March 2014 04:22:06PM *  10 points [-]

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.

I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't agree to do so.

Even if they did, it might be consistent with their current actions: if there's a 20% chance of ems and 20% chance of foom (plus 60% chance of unpredictable future, cishuman future, or extinction) we would still need intellectuals and organizations planning specifically for each option, the same way I'm sure the Cold War Era US had different branches planning for a nuclear attack by USSR and a nonnuclear attack by USSR.

I will agree that there are some genuinely Aumann-incompatible disagreements on here, but I bet it's fewer than we think.

I guess I need to clarify that I think IQ is a terrible proxy for rationality, that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them.

So I want to agree with you, but there's this big and undeniable problem we have and I'm curious how you think we should solve it if not through something resembling IQ.

You agree people need to be more charitable, at least toward out-group members. And this would presumably involve taking people whom we are tempted to dismiss, and instead not dismissing them and studying them further. But we can't do this for everyone - most people who look like crackpots are crackpots. There are very likely people who look like crackpots but are actually very smart out there (the cryonicists seem to be one group we can both agree on), and we need a way to find them so we can pay more attention to them.

We can't use our subjective feeling of is-this-guy-a-crackpot-or-not, because that's what got us into this problem in the first place. Presumably we should use the Outside View. But it's not obvious what we should be Outside Viewing on. The two most obvious candidates are "IQ" and "rationality", which when applied tend to produce IQ fetishism and in group favoritism (since until Stanovich actually produces his rationality quotient test and gives it to everybody, being in a self-identified rationalist community and probably having read the whole long set of sequences on rationality training is one of the few proxies for rationality we've got available).

I admit both of these proxies are terrible. But they seem to be the main thing keeping us from, on the one side, auto-rejecting all arguments that don't sound subjectively plausible to us at first glance, and on the other, having to deal with every stupid creationist and homeopath who wants to bloviate at us.

There seems to be something that we do do that's useful in this sphere. Like if someone with a site written in ALL CAPS and size 20 font claims that Alzheimers is caused by a bacterium, I dismiss it without a second thought because we all know it's a neurodegenerative disease. But when a friend who has no medical training, but who I know is smart and reasonable, recently made this claim, I looked it up, and sure enough there's a small but respectable community of microbiologists and neuroscientists investigating that maybe Alzheimers is triggered by an autoimmune response to some bacterium. It's still a long shot, but it's definitely not crackpottish. So somehow I seem to have some sort of ability for using the source of an implausible claim to determine whether I investigate it further, and I'm not sure how to describe the basis on which I make this decision beyond "IQ, rationality, and education".

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Well, empirically I did try to investigate natural law theology based on there being a sizeable community of smart people who thought it was valuable. I couldn't find anything of use in it, but I think it was a good decision to at least double-check.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

If you think people are too uncharitable in general, but also that we're selectively charitable to the in-group, is that equivalent to saying the real problem is that we're not charitable enough to the out-group? If so, what subsection of the out-group would you recommend we be more charitable towards? And if we're not supposed to select that subsection based on their intelligence, rationality, education, etc, how do we select them?

And if we're not supposed to be selective, how do we avoid spending all our time responding to total, obvious crackpots like creationists and Time Cube Guy?

On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense. And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Yeah, this seems like the point we're disagreeing on. Granted that all proxies will be at least mostly terrible, do you agree that we do need some characteristics that point us to people worth treating charitably? And since you don't like mine, which ones are you recommending?

Comment author: TheAncientGeek 24 April 2014 09:45:09AM *  0 points [-]

Not being charitable to people isn't a problem, providing you don't mistake your lack of charity for evidence that they are stupid or irrational.

Comment author: ChrisHallquist 03 March 2014 06:09:51PM *  1 point [-]

I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, actual heuristics I use to determine who's worth paying attention to are

  • What I know of an individual's track record of saying reasonable things.
  • Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
  • Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.

Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet).

It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).

Comment author: Yvain 03 March 2014 07:12:26PM 9 points [-]

Your heuristics are, in my opinion, too conservative or not strong enough.

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.

Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them. So when we say we need heuristics to find ideas to pay attention to, I'm assuming we've already started by assuming mainstream academia is always right, and we're looking for which challenges to them we should pay attention to. I agree that "challenges the academics themselves take seriously" is a good first step, but I'm not sure that would suffice to discover the critique of mainstream philosophy. And it's very little help at all in fields like politics.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out, and it also seems like people have a bad habit of being very sensitive to crackpot warning signs the opposing side displays and very obtuse to those their own side displays). But once again, these signs are woefully inadequate. Plantinga doesn't look a bit like a crackpot.

You point out that "Even though appearances can be misleading, they're usually not." I would agree, but suggest you extend this to IQ and rationality. We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

You are right that I rarely have the results of an IQ test (or Stanovich's rationality test) in front of me. So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

So I think it is likely that we both use a basket of heuristics that include education, academic status, estimation of intelligence, estimation of rationality, past track record, crackpot warning signs, and probably some others.

I'm not sure whether we place different emphases on those, or whether we're using about the same basket but still managing to come to different conclusions due to one or both of us being biased.

Comment author: TheAncientGeek 24 April 2014 09:55:41AM 3 points [-]

Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?

Comment author: ChrisHallquist 05 July 2014 11:41:11PM 1 point [-]

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

Comment author: TheAncientGeek 06 July 2014 03:02:33PM 0 points [-]

If what philosophers specialise in is clarifying questions, they can be trusted to get the question right.

A typical failure mode of amateur philosophy is to substitute easier questions for harder ones.

Comment author: Protagoras 06 July 2014 12:49:58AM 4 points [-]

Also, from the same background, it is striking to me that a lot of the criticisms Less Wrong people make of philosophers are the same as the criticisms philosophers make of one another. I can't really think of a case where Less Wrong stakes out positions that are almost universally rejected by mainstream philosophers. And not just because philosophers disagree so much, though that's also true, of course; it seems rather that Less Wrong people greatly exaggerate how different they are and how much they disagree with the philosophical mainstream, to the extent that any such thing exists (again, a respect in which their behavior resembles how philosophers treat one another).

Comment author: TheAncientGeek 17 August 2015 07:12:42PM *  1 point [-]

Since there is no consensus among philosophers, respecting philosophy is about respecting the process. The negative claims LW makes about philosophy are indeed similar to the negative claims philosophy makes about itself. LW also makes the positive claim that it has a better, faster method than philosophy, but in fact it just has a truncated version of the same method.

As Hallquist notes elsewhere

But Alexander misunderstands me when he says I accuse Yudkowsky “of being against publicizing his work for review or criticism.” He’s willing to publish it–but only to enlighten us lesser rationalists. He doesn’t view it as a necessary part of checking whether his views are actually right. That means rejecting the social process of science. That’s a problem.

Or, as I like to put it: if you half-bake your bread, you get your bread quicker... but it's half-baked.

Comment author: Vaniver 24 April 2014 02:31:31PM *  0 points [-]

You might be interested in this article and this sequence (in particular, the first post of that sequence). "Academic philosophy sucks" is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.

Comment author: TheAncientGeek 24 April 2014 06:05:09PM *  0 points [-]

Read them; not generally impressed.

Comment author: torekp 06 March 2014 01:42:38AM 0 points [-]

Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with.

Counterexample: your own investigation of natural law theology. Another: your investigation of the Alzheimer's bacterium hypothesis. I'd say your own intellectual history nicely demonstrates just how to pull off the seemingly impossible feat of detecting reasonable people you disagree with.

Comment author: ChrisHallquist 04 March 2014 02:08:18AM 0 points [-]

But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.

With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...

Examples?

We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."
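To spell out why this premise trick guarantees validity (a sketch of the general point with schematic premises, not any particular argument): given premises $P_1, \dots, P_n$ and desired conclusion $C$, just add the conditional premise

$$P_1,\ \dots,\ P_n,\ \bigl(P_1 \land \dots \land P_n\bigr) \to C \ \vdash\ C$$

The conclusion now follows by modus ponens, so the form is impeccable; but the added conditional is exactly as doubtful as the conclusion itself, so the argument is no better than bare assertion.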

Comment author: Yvain 07 March 2014 08:23:45PM 3 points [-]

I agree that disagreement among philosophers is a red flag that we should be looking for alternative positions.

But again, I don't feel like that's strong enough. Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

Examples?

Well, take Barry Marshall. Became convinced that ulcers were caused by a stomach bacterium (he was right; later won the Nobel Prize). No one listened to him. He said that "my results were disputed and disbelieved, not on the basis of science but because they simply could not be true...if I was right, then treatment for ulcer disease would be revolutionized. It would be simple, cheap and it would be a cure. It seemed to me that for the sake of patients this research had to be fast tracked. The sense of urgency and frustration with the medical community was partly due to my disposition and age."

So Marshall decided that, since he couldn't get anyone to fund a study, he would study it on himself: he drank a culture of the bacteria and got really sick.

Then due to a weird chain of events, his results ended up being published in the Star, a tabloid newspaper that by his own admission "talked about alien babies being adopted by Nancy Reagan", before they made it into legitimate medical journals.

I feel like it would be pretty easy to check off a bunch of boxes on any given crackpot index..."believes the establishment is ignoring him because of their biases", "believes his discovery will instantly solve a centuries-old problem with no side effects", "does his studies on himself", "studies get published in tabloid rather than journal", but these were just things he naturally felt or had to do because the establishment wouldn't take him seriously and he couldn't do things "right".

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is maybe less than the general public but still a sizeable minority

I think it is much, much less than the general public, but I don't think that has as much to do with IQ per se as with academic culture. But although I agree it's an interesting finding that IQ isn't a stronger predictor of correct beliefs, I am still very surprised that you don't seem to think it matters at all (or at least significantly). What if we switched gears? Agreeing that the fact that a contrarian theory is invented or held by high-IQ people is no guarantee of its success, can we agree that the fact that a contrarian theory is invented and mostly held by low-IQ people is a very strong strike against it?

Proper logical form comes cheap, just add a premise which says, "if everything I've said so far is true, then my conclusion is true."

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

Comment author: ChrisHallquist 08 March 2014 07:05:05PM 0 points [-]

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.

That leaves us with "proper logical form," about which you said:

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.

Comment author: Jiro 08 March 2014 03:12:50AM 3 points [-]

The extent to which science rejected the ulcer bacterium theory has been exaggerated. (And that article also addresses some quotes from Marshall himself which don't exactly match up with the facts.)

Comment author: Kawoomba 03 March 2014 04:32:57PM 1 point [-]

Are ethics supposed to be Aumann-agreeable?

If they were, uFAI would be a non-issue. (They are not.)

Comment author: Solvent 03 March 2014 01:32:07AM 2 points [-]

I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.

That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Comment author: fubarobfusco 03 March 2014 01:53:30AM *  7 points [-]

Three somewhat disconnected responses —

For a moral realist, moral disagreements are factual disagreements.

I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.

It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Moral responses" are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)

Comment author: [deleted] 02 March 2014 04:08:56PM 5 points [-]

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

Besides which, we're human beings, not fully-rational Bayesian agents by mathematical construction. Trying to pretend to reason like a computer is a pointless exercise when compared to actually talking things out the human way, and thus ensuring (the human way) that all parties leave better-informed than they arrived.

Comment author: elharo 02 March 2014 11:50:41AM *  3 points [-]

[The role] IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational.

FYI, IQ, whatever it measures, has little to no correlation with either epistemic or instrumental rationality. For extensive discussion of this topic, see Keith Stanovich's What Intelligence Tests Miss.

In brief, intelligence (as measured by an IQ test), epistemic rationality (the ability to form correct models of the world), and instrumental rationality (the ability to define and carry out effective plans for achieving one's goals) are three different things. A high score on an IQ test does not correlate with enhanced epistemic or instrumental rationality.

For examples of the lack of correlation between IQ and epistemic rationality, consider the very smart folks you have likely met who have gotten themselves wrapped up in incredibly complex and intellectually challenging belief systems that do not match the world we live in: Objectivism, Larouchism, Scientology, apologetics, etc.

For examples of the lack of correlation between IQ and instrumental rationality, consider the very smart folks you have likely met who cannot get out of their parents' basement, and whose impact on the world is limited to posting long threads on Internet forums and playing WoW.

Comment author: Kaj_Sotala 11 March 2014 10:01:08AM 1 point [-]

Keith Stanovich's What Intelligence Tests Miss

LW discussion.

Comment author: Sniffnoy 01 March 2014 10:32:12PM 6 points [-]

I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charitable is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.

Getting principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways; namely, it has an interpretation which is obviously false, and another interpretation which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the wrong thing is not the right answer, as if they do, you're sidestepping actual debate. Assuming they don't mean the trivial thing is not the right answer, because sometimes these statements are worth making.

Whether a statement is considered trivial or not depends on who you're talking to, and so what statements your interlocutor considers trivial will depend on who they've been talking to and reading. E.g., if they've been hanging around with non-reductionists, they might find it worthwhile to restate the basic principles of reductionism, which here we would consider trivial; and so it's easy to make a mistake and be "charitable" to them by assuming they're arguing for a stronger but incorrect position (like some sort of greedy reductionism). Meanwhile people are using the same words to mean different things because they haven't calibrated abstract words against actual specifics, and the debate becomes terribly unproductive.

Really, being explicit about how you're interpreting something if it's not the obvious way is probably best in general. ("I'm going to assume you mean [...], because as written what you said has an obvious error, namely, [...]".) A silent principle of charity doesn't seem very helpful.

But for a helpful principle of charity, I don't think I'd go for anything about what assumptions you should be making. ("Assume the other person is arguing in good faith" is a common one, and this is a good idea, but if you don't already know what it means, it's not concrete enough to be helpful; what does that actually cash out to?) Rather, I'd go for one about what assumptions you shouldn't make. That is to say: If the other person is saying something obviously stupid (or vacuous, or whatever), consider the possibility that you are misinterpreting them. And it would probably be a good idea to ask for clarification. ("Apologies, but it seems to me you're making a statement that's just clearly false, because [...]. Am I misunderstanding you? Perhaps your definition of [...] differs from mine?") Then perhaps you can get down to figuring out where your assumptions differ and where you're using the same words in different ways.

But honestly a lot of the help of the principle of charity may just be to get people to not use the "principle of anti-charity", where you assume your interlocutor means the worst possible (in whatever sense) thing they could possibly mean. Even a bad principle of charity is a huge improvement on that.

Comment author: JoshuaZ 28 June 2014 11:47:18PM 1 point [-]

There are, I think, two other related aspects that are relevant. First, there's some tendency to interpret what other people say in a highly non-charitable or anti-charitable fashion when one already disagrees with them about something. So a principle of charity helps to counteract that. Second, even when one is using a non-silent charity principle, it can, if one is not careful, come across as condescending, so it is important to phrase it in a way that minimizes those issues.

Comment author: Sniffnoy 01 March 2014 10:06:03PM *  1 point [-]

For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.

That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.

Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.

Comment author: Jiro 02 March 2014 07:49:12AM *  -1 points [-]

it doesn't have some specialized meaning here that differs from its use outside

It doesn't have a use outside.

I mean, yeah, literally, the words do mean the same thing and you could find someone outside LessWrong who says it, but it's an unnecessarily complicated way to say things that generally is not used. It takes more mental effort to understand, it's outside most people's expectations for everyday speech, and it may as well be jargon, even if technically it isn't. Go ahead, go down the street, and the next time you ask someone for directions and they tell you something you can't understand, reply "I have a poor mental model of how to get to my destination". They will probably look at you like you're insane.

Comment author: VAuroch 02 March 2014 09:47:18AM 2 points [-]

"Outside" doesn't have to include a random guy on the street. Cognitive science as a field is "outside", and uses "mental model".

Also, "I have a poor mental model of how to get to my destination" is, descriptively speaking, wrong usage of 'poor mental model'; it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong. I don't "have a poor mental model" of the study of anthropology; I just don't know anything about it or have any motivation to learn. I do "have a poor mental model" of religious believers; my best attempts to place myself in the frame of reference of a believer do not explain their true behavior, so I know that my model is poor.

Comment author: Jiro 02 March 2014 04:09:11PM 0 points [-]

it's inconsistent with the connotations of the phrase, which connotes an attempted understanding which is wrong

I suggested saying it in response to being given directions you don't understand. If so, then you did indeed attempt to understand and couldn't figure it out.

"Outside" doesn't have to include a random guy on the street.

But there's a gradation. Some phrases are used only by LWers. Some phrases are used by a slightly wider range of people, some by a slightly wider than that. Whether a phrase is jargon-like isn't a yes/no thing; using a phrase which is used by cognitive scientists but which would not be understood by the man on the street, when there is another way of saying the same thing that would be understood by the man on the street, is most of the way towards being jargon, even if technically it's not because cognitive scientists count as an outside group.

Furthermore, just because cognitive scientists know the phrase doesn't mean they use it in conversation about subjects that are not cognitive science. I suspect that even cognitive scientists would, when asking each other for directions, not reply to incomprehensible directions by saying they have a poor mental model, unless they are making a joke or unless they are a character from the Big Bang Theory (and the Big Bang Theory is funny because most people don't talk like that, and the few who do are considered socially inept.)

Comment author: shware 01 March 2014 08:16:41PM *  23 points [-]

A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.

SlateStarCodex

Comment author: Viliam_Bur 01 March 2014 07:53:51PM *  10 points [-]

there seem to be a lot of people in the LessWrong community who imagine themselves to be (...) paragons of rationality who other people should accept as such.

Uhm. My first reaction is to ask "who specifically?", because I don't have this impression. (At least I think most people here aren't like this, and if a few happen to be, I probably did not notice the relevant comments.) On the other hand, if I imagine myself in your place, even if I had specific people in mind, I probably wouldn't want to name them, to avoid making it a personal accusation instead of an observation of trends. Now I don't know what to do.

Perhaps could someone else give me a few examples of comments (preferably by different people) where LW members imagine themselves paragons of rationality and ask other people to accept them as such? (If I happen to be such an example myself, that information would be even more valuable to me. Feel free to send me a private message if you hesitate to write it publicly, but I don't mind if you do. Crocker's rules, Litany of Tarski, etc.)

I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects.

I do relate to this one, even if I don't know if I have expressed this sentiment on LW. I believe I am able to listen to opinions that are unpleasant or that I disagree with, without freaking out, much more than an average person, although not literally always. It's stronger in real life than online, because in real life I take time to think, while on the internet I am more in "respond and move on (there are so many other pages to read)" mode. Some other people have told me they noticed this about me, so it's not just my own imagination.

Okay, you probably didn't mean me with this one... I just wanted to say I don't see this as a bad thing per se, assuming the person is telling the truth. And I also believe that LW has a higher ratio of people for whom this is true, compared with average population, although not everyone here is like that.

Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality.

I don't consider everyone here rational, and it's likely some people don't consider me rational. But there are also other reasons for frequent disagreement.

Aspiring rationalists are sometimes encouraged to make bets, because a bet is a tax on bullshit, and paying a lot of tax may show you your irrationality and encourage you to get rid of it. Even if it's not about money, we need to calibrate ourselves. Some of us use PredictionBook; CFAR has developed the calibration game.
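The "bet as a tax on bullshit" idea can be made concrete with a proper scoring rule. As a minimal sketch (the forecasts below are made-up numbers, not real PredictionBook data), the Brier score charges you the squared gap between your stated probability and what actually happened:

```python
# Hypothetical forecasts: (stated probability, outcome: 1 = happened, 0 = didn't).
predictions = [(0.9, 1), (0.7, 1), (0.2, 0)]

def brier_score(preds):
    """Mean squared error between stated probability and outcome.
    0.0 is perfect; always answering 0.5 scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in preds) / len(preds)

print(round(brier_score(predictions), 4))  # lower is better
```

Over many predictions, a confident-but-wrong forecaster pays a visibly larger "tax" than a well-calibrated one; since always answering 0.5 scores 0.25, scoring worse than that means your confidence is actively miscalibrated.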

Analogically, if I have an opinion, I say it in a comment, because that's similar to making a bet. If I am wrong, I will likely get a feedback, which is an opportunity to learn. I trust other people here intellectually to disagree with me only if they have a good reason to disagree, and I also trust them emotionally that if I happen to write something stupid, they will just correct me and move on (instead of e.g. reminding me of my mistake for the rest of my life). Because of this, I post here my opinions more often, and voice them more strongly if I feel it's deserved. Thus, more opportunity for disagreement.

On a different website I might keep quiet instead or speak very diplomatically, which would give less opportunity for disagreement; but it wouldn't mean I have a higher estimate of that community's rationality; quite the opposite. If disagreement is disrespect, then tiptoeing around the mere possibility of disagreement means considering the other person insane. Which is how I learned to behave outside of LW; and I am still not near the level of disdain that Carnegie-like behavior would require.

I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect).

We probably should have some "easy mode" for the beginners. But we shouldn't turn the whole website into the "easy mode". Well, this probably deserves a separate discussion.

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

On a few occasions I made fun of Mensa on LW, and I don't remember anyone contradicting me, so I thought we had a consensus that high IQ does not imply high rationality (although some level may be necessary). Stanovich wrote a book about it, and Kaj Sotala reviewed it here.

You make a few very good points in the article. Confusing intelligence with rationality is bad; selective charity is unfair; asking someone to treat me as a perfect rationalist is silly; it's good to apply healthy cynicism also to your own group; and we should put more emphasis on being aspiring rationalists. It just seems to me that you perceive the LW community as less rational than I do. Maybe we just have different people in mind when we think about the community. (By the way, I am curious if there is a correlation between people who complain that you don't believe in their sanity, and people who are reluctant to comment on LW because of the criticism.)

Comment author: hairyfigment 01 March 2014 07:19:44PM 1 point [-]

Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.

When? The closest case I can recall came from someone defending religion or theology - which brought roughly the response you'd expect - and even that was a weaker claim.

If you mean people saying you should try to slightly adjust your probabilities upon meeting intelligent and somewhat rational disagreement, this seems clearly true. Worst case scenario, you waste some time putting a refutation together (coughWLC).

Comment author: fubarobfusco 01 March 2014 06:51:17PM 8 points [-]

One thing I hear you saying here is, "We shouldn't build social institutions and norms on the assumption that members of our in-group are unusually rational." This seems right, and obviously so. We should expect people here to be humans and to have the usual human needs for community, assurance, social pleasantries, and so on; as well as the usual human flaws of defensiveness, in-group biases, self-serving biases, motivated skepticism, and so on.

Putting on the "defensive LW phyggist" hat: Eliezer pointed out a long time ago that knowing about biases can hurt people, and the "clever arguer" is a negative trope throughout that swath of the sequences. The concerns you're raising aren't really news here ...

Taking the hat off again: ... but it's a good idea to remind people of them, anyway!


Regarding jargon: I don't think the "jargon as membership signaling" approach can be taken very far. Sure, signaling is one factor, but there are others, such as —

  • Jargon as context marker. By using jargon that we share, I indicate that I will understand references to concepts that we also share. This is distinct from signaling that we are social allies; it tells you what concepts you can expect me to understand.
  • Jargon as precision. Communities that talk about a particular topic a lot will develop more fine-grained distinctions about it. In casual conversation, a group of widgets is more-or-less the same as a set of widgets; but to a mathematician, "group" and "set" refer to distinct concepts.
  • Jargon as vividness. When a community has vivid stories about a topic, referring to the story can communicate more vividly than merely mentioning the topic. Dropping a Hamlet reference can more vividly convey indecisiveness than merely saying "I am indecisive."
Comment author: ialdabaoth 01 March 2014 05:31:10PM *  5 points [-]

Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy.

Not if you want to be accepted by that group. Being bad at signaling games can be crippling - as much as intellectual signaling poisons discourse, it's also the glue that holds a community together enough to make discourse possible.

Example: how likely you are to get away with making a post or comment on signaling games is primarily dependent on how good you are at signaling games, especially how good you are at the "make the signal appear to plausibly be something other than a signal" part of signaling games.

Comment author: ChrisHallquist 03 March 2014 04:21:33AM 0 points [-]

You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Comment author: ialdabaoth 03 March 2014 04:25:18AM 2 points [-]

[T]rying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Borrowing from the "Guess vs. Tell (vs. Ask)" meta-discussion, then, perhaps it would be useful for the community to have an explicit discussion about what kinds of signals we want to converge on? It seems that people with a reasonable understanding of game theory and evolutionary psychology would stand a better chance of deliberately engineering our group's social signals than simply trusting our subconsciouses to evolve the most accurate and honest possible set.

Comment author: ChrisHallquist 03 March 2014 04:34:36AM 0 points [-]

The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.

Comment author: orbenn 01 March 2014 04:59:07PM 1 point [-]

"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme

Perhaps a better branding would be "effective decision making", or "effective thought"?

As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.

I think this is the core of what you are disliking. Almost all of my reading on LW is in the Sequences rather than the discussion areas, so I haven't been placed to notice anyone's arrogance. But I'm a little sadly surprised by your experience because for me, the result of reading the sequences has been to have less trust that my own level of sanity is high. I'm significantly less certain of my correctness in any argument.

We know that knowing about biases doesn't remove them, so instead of increasing our estimate of our own rationality, it should correct our estimate downwards. This shouldn't even require pride as an expense since we're also adjusting our estimates of everyone else's sanity down a similar amount. As a check to see if we're doing things right, the result should be less time spent arguing and more time spent thinking about how we might be wrong and how to check our answers. Basically it should remind us to use type 2 thinking more whenever possible, and to seek effectiveness training for our type 1 thinking whenever available.

Comment author: trist 01 March 2014 04:16:05PM *  0 points [-]

I wonder how much people's interactions with other aspiring rationalists in real life has any effect on this problem. Specifically, I think people who have become/are used to being significantly better at forming true beliefs than everyone around them will tend to discount other people's opinions more.

Comment author: devas 01 March 2014 01:29:45PM 0 points [-]

I am surprised by the fact that this post has so little karma. Since one of the...let's call them "tenets" of the rationalism community is the drive to improve one's own self, I would have imagined that this kind of criticism would have been welcomed.

Can anyone explain this to me, please? :-/

Comment author: TheOtherDave 01 March 2014 04:53:10PM 7 points [-]

I'm not sure what the number you were seeing when you wrote this was, and for my own part I didn't upvote it because I found it lacked enough focus to retain my interest, but now I'm curious: how much karma would you expect a welcomed post to have received between the "08:52AM" and "01:29:45PM" timestamps?

Comment author: devas 02 March 2014 12:08:33PM 3 points [-]

I actually hadn't considered the time; in retrospect, though, it does make a lot of sense. Thank you! :-)

Comment author: waveman 01 March 2014 11:51:30AM *  4 points [-]

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality. Even Nietzsche acknowledged that it was the rationality of Christianity that led to its intellectual demise (as he saw it), as people relentlessly applied rationality tools to Christianity.

My own model of how rational we are is more in line with Ed Seykota's (http://www.seykota.com/tribe/TT_Process/index.htm) than the typical geek model that we are basically rational with a few "biases" added on top. Ed Seykota was a very successful trader, featured in the book "Market Wizards", who concluded that trading success is not that difficult intellectually; the issues are all on the feelings side. He talks about trading, but the concepts apply across the board.

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

Personally I think it would be progress if we took as a starting point the assumption that most of the things we believe are not rational. That everything needs to be stringently tested. That taking someone's word for it, unless they have truly earned it, does not make sense.

Also: I totally agree with OP that it is routine to see intelligent people who think of themselves as rational doing things and believing things that are complete nonsense. Intelligence and rationality are, to a first approximation, orthogonal.

Comment author: AspiringRationalist 06 March 2014 01:58:44AM 1 point [-]

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

That sentence motivated me to overcome the trivial inconvenience of logging in on my phone so I could upvote it.

Comment author: [deleted] 02 March 2014 04:17:23PM 0 points [-]

For everyone who thinks that they are rational, consider a) Are you in the healthy weight range? b) Did you get the optimum amount of exercise this week? c) Are your retirement savings on track? d) Did you waste zero time today? (I score 2/4).

I wasted some time today. Is 3-4 times per week of strength training and 1/2 hour cardio enough exercise? Then I think I get 3/4. Woot, but I actually don't see the point of the exercise, since I don't even aspire to be perfectly rational (especially since I don't know what I would be perfectly rational about).

Comment author: elharo 01 March 2014 07:54:17PM *  2 points [-]

a) Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain? Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

b) Why do you expect that a rational person would necessarily value the optimum amount of exercise (presumably optimal for health) over everything else they might have done with their time this week? Scientists have even less certainty about the optimum amount or type of exercise than they do about the optimum amount of food we should eat.

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

d) Why do you assume a rational person does not waste time on occasion?

Rationality is not a superpower. It does not magically produce health, wealth, or productivity. It may assist in the achievement of those and other goals, but it is neither necessary nor sufficient.

Comment author: AspiringRationalist 06 March 2014 02:11:05AM 0 points [-]

c) Why do you assume that a rational person is financially able to save for retirement? There are many people on this planet who live on less than a dollar a day. Does being born poor imply a lack of rationality?

The question was directed at people discussing rationality on the internet. If you can afford some means of internet access, you are almost certainly not living on less than a dollar a day.

Comment author: CAE_Jones 06 March 2014 04:32:31AM 1 point [-]

I receive less in SSI than I'm paying on college debt (no degree), am legally blind, unemployed, and have internet access because these leave me with no choice but to live with my parents (no friends within 100mi). Saving for retirement is way off my radar.

(I do have more to say on how I've handled this, but it seems more appropriate for the rationality diaries. I will ETA a link if I make such a comment.)

Comment author: brazil84 02 March 2014 11:36:33PM -1 points [-]

Why do you expect that a rational person would necessarily value the optimum amount of exercise (presumably optimal for health) over everything else they might have done with their time this week?

Most likely because getting regular exercise is a pretty good investment of time. Of course some people might rationally choose not to make the investment for whatever reason, but if someone doesn't exercise regularly there is an excellent chance that it's akrasia at work.

One can ask if rational people are less likely to fall victim to akrasia. My guess is that they are, since a rational person is likely to have a better understanding of how his brain works. So he is in a better position to come up with ways to act consistently with his better judgment.

Comment author: brazil84 02 March 2014 05:51:02PM *  0 points [-]

Why do you expect a rational person would necessarily avoid the environmental problems that cause overweight and obesity? Especially given that scientists are very unclear amongst themselves as to what causes obesity and weight gain? Even if you adhere to the notion that weight gain and loss is simply a matter of calorie consumption and willpower, why would you assume a rational person has more willpower?

A more rational person might have a better understanding of how his mind works and use that understanding to deploy his limited willpower to maximum effect.

Comment author: Vaniver 02 March 2014 09:32:04AM *  0 points [-]

d) Why do you assume a rational person does not waste time on occasion?

Even if producing no external output, one can still use time rather than waste it. waveman's post is about the emotional difficulties of being effective, and so to the extent that rationality is about winning, a rational person has mastered those difficulties.

Comment author: DanArmak 01 March 2014 12:27:52PM *  5 points [-]

Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality.

To the contrary, lots of groups make a big point of being anti-rational. Many groups (religious, new-age, political, etc.) align themselves in anti-scientific or anti-evidential ways. Most Christians, to take one example, assign supreme importance to (blind) faith that triumphs over evidence.

But more generally, humans are a-rational by default. Few individuals or groups are willing to question their most cherished beliefs, to explicitly provide reasons for beliefs, or to update on new evidence. Epistemic rationality is not the human default and needs to be deliberately researched, taught and trained.

And people, in general, don't think of themselves as being rational because they don't have a well-defined, salient concept of rationality. They think of themselves as being right.

Comment author: brazil84 02 March 2014 05:17:13PM 3 points [-]

To the contrary, lots of groups make a big point of being anti-rational

Here's a hypothetical for you: Suppose you were to ask a Christian "Do you think the evidence goes more for or more against your belief in Christ?" How do you think a typical Christian would respond? I think most Christians would respond that the evidence goes more in favor of their beliefs.

Comment author: DanArmak 02 March 2014 06:56:12PM -1 points [-]

I think the word "evidence" is associated with being pro-science and therefore, in most people's heads, anti-religion. So many Christians would respond by e.g. asking to define "evidence" more narrowly before they committed to an answer.

Also, the evidence claimed in favor of Christianity is mostly associated with the more fundamentalist interpretations; e.g. young-earthers who obsess over clearly false evidence vs. Catholics who accept evolution and merely claim a non-falsifiable Godly guidance. And there are fewer fundamentalists than there are 'moderates'.

However, suppose a Christian responded that the evidence is in the favor of Christianity. And then I would ask them: if the evidence was different and was in fact strongly against Christianity - if new evidence was found or existing evidence disproved - would you change your opinion and stop being a Christian? Would you want to change your opinion to match whatever the evidence turned out to be?

And I think most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence.

Comment author: CCC 03 March 2014 02:31:43PM 2 points [-]

Well, as a Christian myself, allow me to provide a data point for your questions:

"Do you think the evidence goes more for or more against your belief in Christ?"

(from the grandparent post) More for.

young-earthers who obsess over clearly false evidence

Young-earthers fall into a trap; there are parts of the Bible that are not intended to be taken literally (Jesus' parables are a good example). Genesis (at least the garden-of-eden section) is an example of this.

And then I would ask them: if the evidence was different and was in fact strongly against Christianity - if new evidence was found or existing evidence disproved - would you change your opinion and stop being a Christian?

It would have to be massively convincing evidence. I'm not sure that sufficient evidence can be found (but see next answer). I've seen stage magicians do some amazing things; the evidence would have to be convincing enough to convince me that it wasn't someone, with all the skills of David Copperfield, intentionally pulling the wool over my eyes in some manner.

Would you want to change your opinion to match whatever the evidence turned out to be?

In the sense that I want my map to match the territory, yes. In the sense that I do not want the territory to be atheistic, no.

I wouldn't mind so much if it turned out that (say) modern Judaism was 100% correct instead; it would be a big adjustment, but I think I could handle that much more easily. But the idea that there's nothing in the place of God, the idea that there isn't, in short, someone running the universe, is one that I find extremely disquieting for some reason.

I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment.

...extremely disquieting.

Comment author: Sophronius 06 March 2014 09:55:15PM *  1 point [-]

I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment.

This is precisely how I feel about humanity. I mean, we came within a hair's breadth of annihilating all human life on the planet during the Cold War, for Pete's sake. Now that didn't come to pass, but if you look at all the atrocities that did happen during the history of humanity... even if you're right and there is a driver, he is most surely drunk behind the wheel.

Still, I can sympathise. After all, people also generally prefer to have an actual person piloting their plane, even if the auto-pilot is better (or so I've read). There seems to be some primal desire to want someone to be in charge. Or as the Joker put it: "Nobody panics when things go according to plan. Even if that plan is horrifying."

Comment author: CCC 07 March 2014 08:23:49AM 3 points [-]

I mean, we came within a hair's breadth of annihilating all human life on the planet during the Cold War, for Pete's sake. Now that didn't come to pass, but if you look at all the atrocities that did happen during the history of humanity...

Atrocities in general are a point worth considering. They make it clear that, even given the existence of God, there's a lot of agency being given to the human race; it's up to us as a race to not mess up totally, and to face the consequences of the actions of others.

Comment author: Bugmaster 05 March 2014 01:41:00AM 1 point [-]

I find your post very interesting, because I tend to respond almost exactly the same way when someone asks me why I'm an atheist. The one difference is the "extremely disquieting" part; I find it hard to relate to that. From my point of view, reality is what it is; i.e., it's emotionally neutral.

Anyway, I find it really curious that we can disagree so completely while employing seemingly identical lines of reasoning. I'm itching to ask you some questions about your position, but I don't want to derail the thread, or to give the impression of getting all up in your business, as it were...

Comment author: CCC 05 March 2014 04:21:02AM 2 points [-]

Reality stops being emotionally neutral when it affects me directly. If I were to wake up to find that my bed has been moved to a hovering platform over a volcano, then I will most assuredly not be emotionally neutral about the discovery (I expect I would experience shock, terror, and lots and lots of confusion).

I'm itching to ask you some questions about your position

Well, I'd be quite willing to answer them. Maybe you could open up a new thread in Discussion, and link to it from here?

Comment author: Viliam_Bur 04 March 2014 07:50:44AM 3 points [-]

the idea that there isn't, in short, someone running the universe is one that I find extremely disquieting for some reason

It feels the same to me; I just believe it's true.

I imagine it's kind of like the feeling one might get, imagining the situation of being in a chauffeur-driven bus, travelling at full speed, along with the rest of humanity, and suddenly discovering that there's no-one behind the steering wheel and no-one on the bus can get into the front compartment.

Let's continue the same metaphor and imagine that many people in the bus decide to pretend that there is an invisible chauffeur and therefore everything is okay. This idea allows them to relax; at least partially (because parts of their minds are aware that the chauffeur should not be invisible, because that doesn't make much sense). And whenever someone in the bus suggests that we should do our best to explore the bus and try getting to the front compartment, these people become angry and insist that such distrust of our good chauffeur is immoral, and getting to the front compartment is illegal. Instead we should just sit quietly and sing a happy song together.

Comment author: CCC 04 March 2014 08:49:19AM 3 points [-]

Let's continue the same metaphor and imagine that many people in the bus decide to pretend that there is an invisible chauffeur and therefore everything is okay. This idea allows them to relax; at least partially (because parts of their minds are aware that the chauffeur should not be invisible, because that doesn't make much sense). And whenever someone in the bus suggests that we should do our best to explore the bus and try getting to the front compartment, these people become angry and insist that such distrust of our good chauffeur is immoral, and getting to the front compartment is illegal. Instead we should just sit quietly and sing a happy song together.

...I'm not sure this metaphor can take this sort of strain. (Of course, it makes a difference if you can see into the front compartment; I'd assumed an opaque front compartment that couldn't be seen into from the rest of the bus).

Personally, I don't have any problem with people trying to, in effect, get into the front compartment. As long as it's done in an ethical way, of course (so, for example, if it involves killing people, then no; but even then, what I'd object to is the killing, not the getting-into-the-front). I do think it makes a lot of sense to try to explore the rest of the bus; the more we find out about the universe, the more effect we can have on it; and the more effect we can have on the universe, the more good we can do. (Also, the more evil we can do; but I'm optimistic enough to believe that humanity is more good than evil, on balance. Despite the actions of a few particularly nasty examples).

As I like to phrase it: God gave us brains. Presumably He expected us to use them.

Comment author: Viliam_Bur 04 March 2014 10:32:05AM 0 points [-]

I assumed the front compartment was completely opaque in the past, and parts of it are gradually made transparent by science. Some people, less and less credibly, argue that the chauffeur has a weird body shape and still may be hidden behind the remaining opaque parts. But the smarter ones can already predict where this goes, so they already hypothesise an invisible chauffeur (separate magisteria, etc.). Most people probably believe some mix, like the chauffeur is partially transparent and partially visible, and the transparent and visible parts of the chauffeur's body happen to correspond to the parts they can and cannot see from their seats.

Okay, I like your attitude. You probably wouldn't ban teaching evolutionary biology at schools.

Comment author: CCC 04 March 2014 01:08:36PM 2 points [-]

I think this is the point at which the metaphor has become more of an impediment to communication than anything else. I recognise what I think you're referring to; it's the idea of the God of the gaps (in short, the idea that God is responsible for everything that science has yet to explain; which starts leading to questions as soon as science explains something new).

As an argument for theism, the idea that God is only responsible for things that haven't yet been otherwise explained is pretty thoroughly flawed to start with. (I can go into quite a bit more detail if you like).

Okay, I like your attitude. You probably wouldn't ban teaching evolutionary biology at schools.

No, I most certainly would not. Personally, I think that the entire evolution debate has been hyped up to an incredible degree by a few loud voices, for absolutely no good reason; there's nothing in the theory of evolution that runs contrary to the idea that the universe is created. Evolution just gives us a glimpse at the mechanisms of that creation.

Comment author: brazil84 02 March 2014 08:45:43PM 5 points [-]

And I think most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence

I doubt it. That may be how their brains work, but I doubt they would admit that they would cling to beliefs against the evidence. More likely they would insist that such a situation could never happen; that the contrary evidence must be fraudulent in some way.

I actually did ask the questions on a Christian bulletin board this afternoon. The first few responses have been pretty close to my expectations; we will see how things develop.

Comment author: DanArmak 03 March 2014 08:28:40AM -1 points [-]

More likely they would insist that such a situation could never happen; that the contrary evidence must be fraudulent in some way.

That is exactly why I would say they don't identify as "rational". A rational person follows the evidence; he does not deny it. (Of course there are meta-rules, preponderance of evidence, independence of evidence, etc.)

I actually did ask the questions on a Christian bulletin board this afternoon. The first few responses have been pretty close to my expectations; we will see how things develop.

Upvoted for empirical testing, please followup!

However, I do note that 'answers to a provocative question on a bulletin board, without the usual safety guards of scientific studies' won't be very strong evidence about 'actual beliefs and/or behavior of people in hypothetical future situations'.

Comment author: brazil84 03 March 2014 10:15:42AM 2 points [-]

That is exactly why I would label them not identifying as "rational". A rational person follows the evidence, he does not deny it.

That's not necessarily true and I can illustrate it with an example from the other side. A devout atheist once told me that even if The Almighty Creator appeared to him personally; performed miracles; etc., he would still remain an atheist on the assumption that he was hallucinating. One can ask if such a person thinks of himself as anti-rational given his pre-announcement that he would reject evidence that disproves his beliefs. Seems to me the answer is pretty clearly "no" since he is still going out of his way to make sure that his beliefs are in line with his assessment of the evidence.

Upvoted for empirical testing, please followup!

However, I do note that 'answers to a provocative question on a bulletin board, without the usual safety guards of scientific studies' won't be very strong evidence about 'actual beliefs and/or behavior of people in hypothetical future situations'.

Well I agree it's just an informal survey. But I do think it's pretty revealing given the question on the table:

Do Christians make a big point of being anti-rational?

Here's the thread:

http://www.reddit.com/r/TrueChristian/comments/1zd9t1/does_the_evidence_support_your_beliefs/

Of 4 or 5 responses, I would say that there is 1 where the poster sees himself as irrational.

Anyway, the original claim which sparked this discussion is that everyone thinks he is rational. Perhaps a better way to put it is that it's pretty unusual for anyone to think his beliefs are irrational.

Comment author: DanArmak 03 March 2014 01:01:29PM *  0 points [-]

A devout atheist once told me that even if The Almighty Creator appeared to him personally; performed miracles; etc., he would still remain an atheist on the assumption that he was hallucinating.

And I wouldn't call that person rational, either. He may want to be rational, and just be wrong about the how.

One can ask if such a person thinks of himself as anti-rational given his pre-announcement that he would reject evidence that disproves his beliefs.

I think the relevant (psychological and behavioral) difference here is between not being rational, i.e. not always following where rationality might lead you or denying a few specific conclusions, and being anti-rational, which I would describe as seeing rationality as an explicit enemy and therefore being against all things rational by association.

ETA: retracted. Some Christians are merely not rational, but some groups are explicitly anti-rational: they attack rationality, science, and evidence-based reasoning by association, even when they don't disagree with the actual evidence or conclusions.

The Reddit thread is interesting. 5 isn't a big sample, and we got examples basically of all points of view. My prediction was that:

most Christians, by far, would answer that they would rather have faith despite evidence, or that they would rather cling to evidence in their favor and disregard any contrary evidence.

By my count, of those Reddit respondents who explicitly answered the question, these match the prediction, given the most probable interpretation of their words: Luc-Pronounced_Luke, tinknal. EvanYork comes close but doesn't explicitly address the hypothetical.

And these don't: Mageddon725, rethcir_, Va1idation.

So my prediction of 'most' is falsified, but the study is very underpowered :-)

Anyway, the original claim which sparked this discussion is that everyone thinks he is rational. Perhaps a better way to put it is that it's pretty unusual for anyone to think his beliefs are irrational.

I agree that it's unusual. My original claim was that many more people don't accept rationality as a valid or necessary criterion and don't even try to evaluate their beliefs' rationality. They don't see themselves as irrational, but they do see themselves as "not rational". And some of them further see themselves as anti-rational, and rationality as an enemy philosophy or dialectic.

Comment author: brazil84 03 March 2014 02:06:05PM 2 points [-]

And I wouldn't call that person rational, either. He may want to be rational, and just be wrong about the how.

Well he might be rational and he might not be, but pretty clearly he perceives himself to be rational. Or at a minimum, he does not perceive himself to be not rational. Agreed?

Some Christians are merely not rational, but some groups are explicitly anti-rational: they attack rationality, science, and evidence-based reasoning by association, even when they don't disagree with the actual evidence or conclusions.

Would you mind providing two or three quotes from Christians which manifest this attitude so I can understand and scrutinize your point?

The Reddit thread is interesting. 5 isn't a big sample, and we got examples basically of all points of view.

That's true. But I would say that of the 5, there was only one individual who doesn't perceive himself to be rational. Two pretty clearly perceive themselves to be rational. And two are in a greyer area but pretty clearly would come up with rationalizations to justify their beliefs. Which is irrational but they don't seem to perceive it as such.

I agree that it's unusual. My original claim was that many more people don't accept rationality as a valid or necessary criterion and don't even try to evaluate their beliefs' rationality.

Well, I agree that a lot of people might not have a clear opinion about whether their beliefs are rational. But the bottom line is that when push comes to shove, most people seem to believe that their beliefs are a reasonable evidence-based conclusion.

But I am interested to see quotes from these anti-rational Christians you refer to.

Comment author: DanArmak 04 March 2014 08:41:40AM 5 points [-]

After some reflection, and looking for evidence, it seems I was wrong. I felt very certain of what I said, but then I looked for justification and didn't find it. I'm sorry I led this conversation down a false trail. And thank you for questioning my claims and doing empirical tests.

(To be sure, I found some evidence, but it doesn't add up to large, numerous, or representative groups of Christians holding these views. Or in fact for these views being associated with Christianity more than other religions or non-religious 'mystical' or 'new age' groups. Above all, it doesn't seem these views have religion as their primary motivation. It's not worth while looking into the examples I found if they're not representative of larger groups.)

Comment author: Eugine_Nier 01 March 2014 07:16:45PM -1 points [-]

Most Christians, to take one example, assign supreme importance to (blind) faith that triumphs over evidence.

That's not what most Christians mean by faith.

Comment author: DanArmak 01 March 2014 09:56:13PM *  1 point [-]

The comment you link to gives a very interesting description of faith:

The sense of "obligation" in faith is that of duty, trust, and deference to those who deserve it. If someone deserves our trust, then it feels wrong, or insolent, or at least rude, to demand independent evidence for their claims.

I like that analysis! And I would add: obligation to your social superiors, and to your actual legal superiors (in a traditional society), is a very strong requirement and to deny faith is not merely to be rude, but to rebel against the social structure which is inseparable from institutionalized religion.

However, I think this is more of an explanation of how faith operates, not what it feels like or how faithful people describe it. It's a good analysis of the social phenomenon of faith from the outside, but it's not a good description of how it feels from the inside to be faithful.

This is because the faith actually required of religious people is faith in the existence of God and other non-evident truths claimed by their religion. As a faithful person, you can't feel faith is "duty, trust, obligation" - you feel that it is belief. You can't feel that to be unfaithful would be to wrong someone or to rebel; you feel that it would be to be wrong about how the world really is.

However, I've now read Wikipedia on Faith in Christianity and I see there are a lot of complex opinions about the meaning of this word. So now I'm less sure of my opinion. I'm still not convinced that most Christians mean "duty, trust, deference" when they say "faith", because WP quotes many who disagree and think it means "belief".

Comment author: orbenn 01 March 2014 05:16:22PM *  1 point [-]

I think we're getting some word-confusion. Groups that "make a big point of being anti-rational" are against the things with the label "rational". However, they do tend to think of their own beliefs as being well thought out (i.e. rational).

Comment author: DanArmak 01 March 2014 06:39:04PM 0 points [-]

No, I think we're using words the same way. I disagree with your statement that all or most groups "think of their own beliefs as being well thought out (i.e. rational)". They think of their beliefs as being right, but not as well thought out.

"Well thought out" should mean:

  1. Being arrived at through thought (science, philosophy, discovery, invention), rather than writing the bottom line first and justifying it later or not at all (revelation, mysticism, faith deliberately countering evidence, denial of the existence of objective truth).
  2. Thought out to its logical consequences, without being selective about which conclusions you adopt or compartmentalizing them, making sure there are no internal contradictions, and dealing with any repugnant conclusions.
Comment author: Oscar_Cunningham 01 March 2014 11:20:17AM 19 points [-]

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the phrase "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

Comment author: ChrisHallquist 03 March 2014 07:58:45AM 4 points [-]

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the phrase "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."

This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalist" or "perfect rationalist" instead. I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).

Comment author: Nornagest 05 March 2014 01:48:01AM *  4 points [-]

The -ist suffix can mean several things in English. There's the sense of "practitioner of [an art or science, or the use of a tool]" (dentist, cellist). There's "[habitual?] perpetrator of" or "participant in [an act]" (duelist, arsonist). And then there's "adherent of [an ideology, doctrine, or teacher]" (theist, Marxist). Seems to me that the problem has to do with equivocation between these senses as much as with the lack of an "aspiring". And personally, I'm a lot more comfortable with the first sense than the others; you can after all be a bad dentist.

Perhaps we should distinguish between rationaledores and rationalistas? Spanglish, but you get the picture.

Comment author: Vaniver 05 March 2014 03:46:25PM 0 points [-]

"Reasoner" captures this sense of "someone who does an act," but not quite the "practitioner" sense, and it does a poor job of pointing at the cluster we want to point at.

Comment author: polymathwannabe 05 March 2014 02:04:38AM 0 points [-]

The -dor suffix is only added to verbs. The Spanish word would be razonadores ("ratiocinators").

Comment author: JWP 01 March 2014 05:23:38PM 10 points [-]

Identifying as a "rationalist" is encouraged by the welcome post.

We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist

Comment author: Eliezer_Yudkowsky 01 March 2014 08:15:49PM 13 points [-]

Edited the most recent welcome post and the post of mine that it linked to.

Does anyone have a 1-syllable synonym for 'aspiring'? It seems like we need to impose better discipline on this for official posts.

Comment author: Bugmaster 05 March 2014 01:13:08AM 1 point [-]

FWIW, "aspiring rationalist" always sounded quite similar to "Aspiring Champion" to my ears.

That said, why do we need to use any syllables at all to say "aspiring rationalist"? Do we have some sort of a secret rite or trial that an aspiring rationalist must pass in order to become a true rationalist? If I have to ask, does that mean I'm not a rationalist? :-/

Comment author: somervta 04 March 2014 12:49:44AM 4 points [-]

Consider "how you came to aspire to rationality/be a rationalist" instead of "identify as an aspiring rationalist".

Or, can the identity language and switch to "how you came to be interested in rationality".

Comment author: brazil84 02 March 2014 11:22:48PM 0 points [-]

The only thing I can think of is "na" e.g. in Dune, Feyd-Rautha was the "na-Baron," meaning that he had been nominated to succeed the Baron. (And in the story he certainly was aspiring to be Baron.)

Not quite what you are asking for but not too far either.

Comment author: CCC 02 March 2014 04:56:10AM 2 points [-]

Looking at a thesaurus, "would-be" may be a suitable synonym.

Other alternatives include 'budding', or maybe 'keen'.

Comment author: wwa 02 March 2014 02:48:30AM *  2 points [-]

demirationalist - on one hand, something already above average, like in demigod. On the other, leaves the "not quite there" feeling. My second best was epirationalist

Didn't find anything better in my opinion, but in case you want to give it a (somewhat cheap) shot yourself... I just looped over this

Comment author: Oscar_Cunningham 01 March 2014 07:17:58PM 3 points [-]

And the phrase "how you came to identify as a rationalist" links to the very page where in the comments Robin Hanson suggests not using the term "rationalist", and the alternative "aspiring rationalist" is suggested!

Comment author: 7EE1D988 01 March 2014 11:03:17AM 3 points [-]

Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."

I couldn't agree more with that - to a first approximation.

Now of course, the first problem is with people who think a person is either rational in general or not, right in general or not. Being right or rational is conflated with intelligence, because people can't seem to imagine that a cognitive engine which has output so many right ideas in the past could be anything but a cognitive engine which outputs right ideas in general.

For instance and in practice, I'm pretty sure I strongly disagree with some of your opinions. Yet I agree with this bit over there, and other bits as well. Isn't it baffling how some people can be so clever, so right about a huge bundle of things (read: how they have opinions so very much like mine), and then suddenly you find they believe X, where X seems incredibly stupid and wrong for obvious reasons to you.

I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).

Problem number two is simply that thinking yourself right about a certain problem, and having thought about it for a long time before coming to your own conclusion, doesn't preclude new, original information or intelligent arguments from swaying your opinion. I'm often pretty darn certain about my beliefs (those I care about, anyway - usually the instrumental beliefs and methods I need to attain my goals), but I know better than to refuse to change my opinion on a topic I care about when I'm conclusively shown to be wrong (though that should go without saying in a rationalist community).

Comment author: elharo 01 March 2014 07:46:36PM 0 points [-]

Rationality, intelligence, and even evidence are not sufficient to resolve all differences. Sometimes differences are a deep matter of values and preferences. Trivially, I may prefer chocolate and you prefer vanilla. There's no rational basis for disagreement, nor for resolving such a dispute. We simply each like what we like.

Less trivially, some people take private property as a fundamental moral right. Some people treat private property as theft. And a lot of folks in the middle treat it as a means to an end. Folks in the middle can usefully dispute the facts and logic of whether particular incarnations of private property do or do not serve other ends and values, such as general happiness and well-being. However perfectly rational and intelligent people who have different fundamental values with respect to private property are not going to agree, even when they agree on all arguments and points of evidence.

There are many other examples where core values come into play. How and why people develop and have different core values than other people is an interesting question. However even if we can eliminate all partisan-shaded argumentation, we will not eliminate all disagreements.

Comment author: brilee 01 March 2014 02:23:33PM 0 points [-]

I posit that people want to find others like them (in a continuum with finding a community of people like them, some place where they can belong), and it stings to realize that even people who hold many similar opinions still aren't carbon copies of you, that their cognitive engine doesn't work exactly the same way as yours, and that you'll have to either change yourself, or change others (both of which can be hard, unpleasant work), if you want there to be less friction between you (unless you agree to disagree, of course).

Well said.

Comment author: 7EE1D988 01 March 2014 10:58:35AM 12 points [-]

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses.

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding other's ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Comment author: RobinZ 23 April 2014 04:13:27PM 1 point [-]

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding other's ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

This understates the case, even. At different times, an individual can be more or less prone to haste, laziness, or any of several possible sources of error, and at times, you yourself can commit any of these errors. I think the greatest value of a well-formulated principle of charity is that it leads to a general trend of "failure of communication -> correction of failure of communication -> valuable communication" instead of "failure of communication -> termination of communication".

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Actually, there's another point you could make along the lines of Jay Smooth's advice about racist remarks, particularly the part starting at 1:23, when you are discussing something in 'public' (e.g. anywhere on the Internet). If I think my opposite number is making bad arguments (e.g. when she is proposing an a priori proof of the existence of a god), I can think of few more convincing avenues to demonstrate to all the spectators that she's full of it than by giving her every possible opportunity to reveal that her argument is not wrong.

Regardless of what benefit you are balancing against a cost, though, a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution. Saying "I don't care what you think" will burn bridges with many non-LessWrongian folk; saying, "This argument seems like a huge time sink" is much less likely to.

Comment author: Lumifer 23 April 2014 04:38:24PM 2 points [-]

a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution.

So if I believe that someone is stupid, mindkilled, etc. and is not capable (at least at the moment) of contributing anything valuable, does this principle emphasize that I should not believe that, or that I should not tell that to this someone?

Comment author: TheAncientGeek 24 April 2014 10:20:57AM -1 points [-]

If you haven't attempted to falsify your belief by being charitable, then you should stop believing it. It's bad data.

Comment author: RobinZ 23 April 2014 07:01:47PM *  1 point [-]

I see that my conception of the "principle of charity" is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind:

The principle of charity isn't a propositional thesis, it's a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.

My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker's reasoning processes was more than you are willing to spend. In such a case, what the principle of charity implies are two things:

  • You should consider the nuttiness of the speaker as being an open question with a large prior probability, akin to your belief prior to lifting a dice cup that you have not rolled double-sixes, rather than a closed question with a large posterior probability, akin to your belief that the modern evolutionary synthesis is largely correct.
  • You should withdraw from the conversation in such a fashion as to emphasize that you are in general willing to put forth the effort to understand what they are saying, but that the moment is not opportune.

Minor tyop fix T1503-4.

Comment author: TheAncientGeek 23 April 2014 07:24:49PM -1 points [-]

I do not see what you are describing as the standard PoC at all. May I suggest you call it something else.

Comment author: RobinZ 23 April 2014 09:03:38PM 0 points [-]

How does the thing I am vaguely waving my arms at differ from the "standard PoC"?

Comment author: Lumifer 23 April 2014 07:14:43PM 3 points [-]

the cost of false positives is high relative to the cost of reducing false positives

I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.

The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.

You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn't seem to me that the fashion of withdrawing from a conversation will help me "reliably distinguish" anything.

As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either.

In fact, I don't see why there should be a particular exception here ("a procedural rule") to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a "closed question" or a "large posterior" -- it all depends on the particulars.

Comment author: RobinZ 24 April 2014 03:00:27PM 2 points [-]

A small addendum, that I realized I omitted from my prior arguments in favor of the principle of charity:

Because I make a habit of asking for clarification when I don't understand, offering clarification when not understood, and preferring "I don't agree with your assertion" to "you are being stupid", people are happier to talk to me. Among the costs of always responding to what people say instead of to your best understanding of what they mean - especially if you are quick to dismiss people when their statements are flawed - is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made unsupported claims you will demand support for. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up.

Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.

Comment author: Lumifer 24 April 2014 03:37:21PM 1 point [-]

responding to what people say instead of your best understanding of what they mean

Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity -- it sounds like SOP and "just don't be an idiot yourself" to me.

Comment author: TheAncientGeek 23 April 2014 09:44:05PM 5 points [-]

I'll say it again: POC doesn't mean "believe everyone is sane and intelligent", it means "treat everyone's comments as though they were made by a sane, intelligent person".

Comment author: satt 18 August 2015 12:14:52AM *  1 point [-]

As I operationalize it, that definition effectively waters down the POC to a degree I suspect most POC proponents would be unhappy with.

Sane, intelligent people occasionally say wrong things; in fact, because of selection effects, it might even be that most of the wrong things I see & hear in real life come from sane, intelligent people. So even if I were to decide that someone who's just made a wrong-sounding assertion were sane & intelligent, that wouldn't lead me to treat the assertion substantially more charitably than I otherwise would (and I suspect that the kind of person who likes the(ir conception of the) POC might well say I were being "uncharitable").

Edit: I changed "To my mind" to "As I operationalize it". Also, I guess a shorter form of this comment would be: operationalized like that, I think I effectively am applying the POC already, but it doesn't feel like it from the inside, and I doubt it looks like it from the outside.

Comment author: TheAncientGeek 18 August 2015 07:02:55AM *  0 points [-]

You have uncharitably interpreted my formulation to mean "treat everyone's comments as though they were made by a sane, intelligent person who may or may not have been having an off day". What kind of guideline is that?

The charitable version would have been "treat everyone's comments as though they were made by someone sane and intelligent at the time".

Comment author: satt 19 August 2015 02:30:22AM 0 points [-]

(I'm giving myself half a point for anticipating that someone might reckon I was being uncharitable.)

You have uncharitably interpreted my formulation to mean "treat everyone's comments as though they were made by a sane, intelligent person who may or may not have been having an off day". What kind of guideline is that?

A realistic one.

The charitable version would have been "treat everyone's comments as though they were made by someone sane and intelligent at the time".

The thing is, that version actually sounds less charitable to me than my interpretation. Why? Well, I see two reasonable ways to interpret your latest formulation.

The first is to interpret "sane and intelligent" as I normally would, as a property of the person, in which case I don't understand how appending "at the time" makes a meaningful difference. My earlier point that sane, intelligent people say wrong things still applies. Whispering in my ear, "no, seriously, that person who just said the dumb-sounding thing is sane and intelligent right now" is just going to make me say, "right, I'm not denying that; as I said, sanity & intelligence aren't inconsistent with saying something dumb".

The second is to insist that "at the time" really is doing some semantic work here, indicating that I need to interpret "sane and intelligent" differently. But what alternative interpretation makes sense in this context? The obvious alternative is that "at the time" is drawing my attention to whatever wrong-sounding comment was just made. But then "sane and intelligent" is really just a camouflaged assertion of the comment's worthiness, rather than the claimant's, which reduces this formulation of the POC to "treat everyone's comments as though the comments are cogent".

The first interpretation is surely not your intended one because it's equivalent to one you've ruled out. So presumably I have to go with the second interpretation, but it strikes me as transparently uncharitable, because it sounds like a straw version of the POC ("oh, so I'm supposed to treat all comments as cogent, even if they sound idiotic?").

The third alternative, of course, is that I'm overlooking some third sensible interpretation of your latest formulation, but I don't see what it is; your comment's too pithy to point me in the right direction.

Comment author: TheAncientGeek 25 August 2015 09:39:12AM 0 points [-]

But then "sane and intelligent" is really just a camouflaged assertion of the comment's worthiness, rather than the claimant's, which reduces this formulation of the POC to "treat everyone's comments as though the comments are cogent". [..] ("oh, so I'm supposed to treat all comments as cogent, even if they sound idiotic?")

Yep.

You have assumed that cannot be the correct interpretation of the PoC, without saying why. In light of your other comments, it could well be that you are assuming that the PoC can only be true by correspondence to reality, or false by lack of correspondence. But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the Works/Does Not Work axis. So would adoption of the PoC work as a way of understanding people, and calibrating your confidence levels?... That is the question.

Comment author: TheAncientGeek 24 August 2015 09:34:12PM *  0 points [-]

A realistic one.

But not one that tells you unambiguously what to do, i.e. not a usable guideline at all.

There's a lot of complaint about this heuristic along the lines that it doesn't guarantee perfect results... i.e., it's a heuristic.

And now there is the complaint that it's not realistic, that it doesn't reflect reality.

Ideal rationalists can stop reading now.

Everybody else: you're biased. Specifically, overconfident. Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects those. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting. Overshooting would be a problem if there were some goldilocks alternative, some way of getting things exactly right. There isn't. The voice in your head that tells you you are doing just fine is the voice of your bias.

Comment author: TheAncientGeek 17 August 2015 06:46:17PM *  1 point [-]

I.e., it's a defeasible assumption. If you fail, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn't. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people's dumbness go forever untested, because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.

Comment author: Lumifer 17 August 2015 08:14:18PM 1 point [-]

your beliefs about people's dumbness go forever untested

That's fine. I have limited information processing capacity -- my opportunity costs for testing other people's dumbness are fairly high.

In the information age I don't see how anyone can operate without the "this is too stupid to waste time on" pre-filter.

Comment author: TheAncientGeek 18 August 2015 07:37:28AM 0 points [-]

The PoC tends to be advised in the context of philosophy, where there is a background assumption of infinite amounts of time to consider things. The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion, with the corollary of reserving some space for "I might be wrong" where you haven't had the resources to test the hypothesis.

Comment author: Lumifer 18 August 2015 02:20:53PM *  0 points [-]

background assumption of infinite amounts of time to consider things

LOL. While ars may be longa, vita is certainly brevis. This is a silly assumption, better suited for theology, perhaps -- it, at least, promises infinite time. :-)

If I were living in English countryside around XVIII century I might have had a different opinion on the matter, but I do not.

interpret comments charitably once you have, for whatever reason, got into a discussion

It's not a binary either-or situation. I am willing to interpret comments charitably according to my (updateable) prior of how knowledgeable, competent, and reasonable the writer is. In some situations I would stop and ponder, in others I would roll my eyes and move on.

Comment author: Lumifer 24 April 2014 03:22:54PM 1 point [-]

it means "treat everyone's comments as though they were made by a sane, intelligent person".

I don't like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.

Comment author: TheAncientGeek 24 April 2014 05:09:36PM 2 points [-]

How do you know how many mistakes you are or aren't making?

Comment author: TheAncientGeek 17 August 2015 06:54:20PM -1 points [-]

The PoC is a way of breaking down "understand what the other person says" into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.

Comment author: RobinZ 23 April 2014 09:33:46PM 3 points [-]

The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much the following lacks citations, but:

  • Many people raised to believe that evolution is false because it contradicts their religion change their minds in their first college biology class. (I can't attest to this from personal experience - this is something I've seen frequently reported or alluded to via blogs like Slacktivist.)
  • An intelligent, well-meaning, LessWrongian fellow was (hopefully-)almost driven out of my local Less Wrong meetup in no small part because a number of prominent members accused him of (essentially) being a troll. In the course of a few hours conversation between myself and a couple others focused on figuring out what he actually meant, I was able to determine that (a) he misunderstood the subject of conversation he had entered, (b) he was unskilled at elaborating in a way that clarified his meaning when confusion occurred, and (c) he was an intelligent, well-meaning, LessWrongian fellow whose participation in future meetups I would value.
  • I am unable to provide the details of this particular example (it was relayed to me in confidence), but an acquaintance of mine was a member of a group which was attempting to resolve an elementary technical challenge - roughly the equivalent of setting up a target-shooting range with a safe backstop in terms of training required. A proposal was made that was obviously unsatisfactory - the equivalent of proposing that the targets be laid on the ground and everyone shoot straight down from a second-story window - and my acquaintance's objection to it on common-sense grounds was treated with a response equivalent to, "You're Japanese, what would you know about firearms?" (In point of fact, while no metaphorical gunsmith, my acquaintance's knowledge was easily sufficient to teach a Boy Scout merit badge class.)
  • In my first experience on what was then known as the Internet Infidels Discussion Board, my propensity to ask "what do you mean by x" sufficed to transform a frustrated, impatient discussant into a cheerful, enthusiastic one - and simultaneously demonstrate that said discussant's arguments were worthless in a way which made it easy to close the argument.

In other words, I do not often see the case in which performing the tests implied by the principle of charity - e.g. "are you saying [paraphrase]?" - are wasteful, and I frequently see cases where failing to do so has been.