Another month, another rationality quotes thread. The rules are:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
[-][anonymous]8y200

Reasoning can take us to almost any conclusion we want to reach, because we ask “Can I believe it?” when we want to believe something, but “Must I believe it?” when we don’t want to believe. The answer is almost always yes to the first question and no to the second.

--Jon Haidt, The Righteous Mind

0Sarunas8y
I remember reading the idea expressed in this quote in an old LW post, older than Haidt's book which was published in 2012, and it is probably older than that. In any case, I think that this is a very good quote, because it highlights a bias that seems to be more prevalent than perhaps any other cognitive bias discussed here and motivates attempts to find better ways to reason and argue. If LessWrong had an introduction whose intention was to motivate why we need better thinking tools, this idea could be presented very early, maybe even in a second or third paragraph.

I think psychologist Tom Gilovich is the original source of the "Can I?" vs. "Must I?" description of motivated reasoning. He wrote about it in his 1991 book How We Know What Isn't So.

For desired conclusions, we ask ourselves, "Can I believe this?", but for unpalatable conclusions we ask, "Must I believe this?"

Probably people have seen this before, but I really like it:

People often say that motivation doesn't last. Well, neither does bathing; that's why we recommend it daily.

-Zig Ziglar

2lmm8y
I don't see the point. The whole point of "motivation doesn't last" is "you will only be able to sustain effort if there is something in your day-to-day that motivates you to continue, not some distant ideal."

The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my attention on it, using it to mediate my relationship with the antagonistic party. Alternatively, I focus on a non-aggressive third party. You may notice that this same pattern is observed in the UK parliamentary system of the House of Commons, for instance. MPs don’t directly address each other: all of their interactions are mediated by and addressed to a non-aggressive, non-partisan third party – the Speaker. This serves to dampen antagonisms and decrease the tendency to fall into rivalry. In a conversation where such a ‘Speaker’ figure is lacking, you need mentally to establish and situate yourself relative to one. For me, the peaceful lurker or eavesdropper, Christ, or the Scripture can all serve in such a role. As I engage directly with this peaceful party and my relationship with the aggressive party becomes mediated by this party, I find it so much easier to retain my calm.

Alastair Roberts

2Ben Pace8y
Having recently watched a few of these discussions/debates in the Commons (via YouTube), it is noticeable how the Speaker is able to temper the mood and add a little levity. There is one popular political YouTube account called 'Incorrigible Delinquent', and he begins each of his uploads with the Speaker quite humorously saying "You are an incorrigible delinquent!"
1[anonymous]8y
This should be developed into a Discussion post (if it hasn't been already).

"Update: many people have read this post and suggested that, in the first file example, you should use the much simpler protocol of copying the file to modified to a temp file, modifying the temp file, and then renaming the temp file to overwrite the original file. In fact, that’s probably the most common comment I’ve gotten on this post. If you think this solves the problem, I’m going to ask you to pause for five seconds and consider the problems this might have. (...) The fact that so many people thought that this was a simple solution to the probl

... (read more)
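
For context, the protocol being criticized is the write-to-temp-then-rename pattern. Below is a rough Python sketch of that pattern (my own illustration, not code from the quoted post), including the fsync calls that naive versions typically omit; the quote's point is that even then there are caveats (file metadata and ownership aren't preserved, the temp file must sit on the same filesystem, error handling is subtle):

```python
import os
import tempfile

def overwrite_atomically(path, data):
    """Write bytes to `path` via a temp file and rename (POSIX assumed)."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the same directory so the rename stays on one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # flush the file contents to stable storage
        os.replace(tmp_path, path)    # atomic rename over the original
        dir_fd = os.open(dir_name, os.O_DIRECTORY)
        try:
            os.fsync(dir_fd)          # persist the directory entry for the rename
        finally:
            os.close(dir_fd)
    except BaseException:
        try:
            os.unlink(tmp_path)       # best-effort cleanup of the temp file
        except OSError:
            pass
        raise
```
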

If anyone is trying to tell you it’s not complicated, be very, very suspicious

-- Tyler Cowen

Seek simplicity and distrust it.

A.N. Whitehead

1Good_Burning_Plastic8y
There's this guy called William of Occam who must really be spinning in his grave right now.
1g_pepper8y
I interpreted the Whitehead quote to mean that you should seek the simplest explanation that explains whatever it is you are trying to explain. This is consistent with Occam's Razor. I assumed that "distrust it" meant subject the explanation to additional tests to confirm or falsify the explanation. So, I didn't see this quote as contradicting William of Occam; instead it built on Occam's Razor to describe the essence of the scientific method. This interpretation is supported if you look at the context of the quote:
5Richard_Kennaway8y
Here also is Einstein: Or in the pithier paraphrase usually quoted:
0[anonymous]8y
(someone, could be Einstein)
3dxu8y
Depends on what the subject matter is. Sometimes, it really doesn't need to be complicated.
0Lumifer8y
True, though this "depends" applies to pretty much everything.
2VoiceOfRa8y
I'd be even more suspicious of someone telling me that it's not that simple.

Each individual instance of outperformance can be put into its own coherent narrative, can be made to look logical and earned on its own terms. But when you throw them together it's hard to escape the impression of a coin-flipping contest with a song and dance at the end.

Matt Levine

If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him.

Cardinal Richelieu

[-][anonymous]8y60

I want you to be the Admiral Nagumo of my staff. I want your every thought, every instinct as you believe Admiral Nagumo might have them. You are to see the war, their operations, their aims, from the Japanese viewpoint and keep me advised what you are thinking about, what you are doing, and what purpose, what strategy, motivates your operations. If you can do this, you will give me the kind of information needed to win this war.

-Admiral Nimitz from Edwin Layton, And I Was There, 1985, p. 357.

There’s nothing rigorous about looking for shiny objects that happen to be statistically significant.

Andrew Gelman

"Direct action is not always the best way. It is a far greater victory to make another see through your eyes than to close theirs forever."

Kreia, KOTOR 2

0[anonymous]8y
We should have a thread for anti-rationality quotes some time. KOTOR 2 would be a gold mine. :) HK-47, assassin droid.

I think the common thread in a lot of these [horrible] relationships is people who have managed to go through their entire lives without realizing that “Person did Thing, which caused me to be upset” is not the same thing as “Person did something wrong”, much less “I have a right to forbid Person from ever doing Thing again”.

--Ozymandias (most of the post is unrelated)

Looking for mental information in individual neuronal firing patterns is looking at the wrong level of scale and at the wrong kind of physical manifestation. As in other statistical dynamical regularities, there are a vast number of microstates (i.e., network activity patterns) that can constitute the same global attractor, and a vast number of trajectories of microstate-to-microstate changes that will tend to converge to a common attractor. But it is the final quasi-regular network-level dynamic, like a melody played by a million-instrument orchestra, that is the medium of mental information. - Terrence W. Deacon, Incomplete Nature: How Mind Emerged from Matter, pp. 516-517.

Harms take longer to show up & disprove than benefits. So evidence-based medicine disproportionately channels optimism

Saurabh Jha

9DanArmak8y
That seems like selection bias. You do a lot of studies and experiments, and filter out most proposed medicine because it causes harm quickly, or doesn't cause benefits quickly enough or at all. Then you market whatever survived testing. Obviously, if it's still harmful, the harms will show up only slowly, while the benefits will show up quickly - otherwise you would have filtered it out before it reached the consumer. This is like saying engineering disproportionately channels optimism, because almost all the appliances you buy in the store work now and only fail later. If they had failed immediately, they would have been flagged in QC and never got to the shop.
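
A toy simulation of that filtering argument (my own sketch with made-up probabilities, not anything from the comment): if trials can only catch effects that show up quickly, then by construction every harm that survives to market is a slow one.

```python
import random

random.seed(0)

# Hypothetical candidate treatments with independent quick/slow effects.
candidates = [
    {"quick_benefit": random.random() < 0.3,
     "quick_harm": random.random() < 0.4,
     "slow_harm": random.random() < 0.2}
    for _ in range(10_000)
]

# Assume trials only detect quick effects, so they filter on those alone.
approved = [c for c in candidates if c["quick_benefit"] and not c["quick_harm"]]
slow_harms = sum(c["slow_harm"] for c in approved)

print(f"approved: {len(approved)}, of which with slow harms: {slow_harms}")
# Every harm among the approved treatments is, by construction, slow to appear.
```
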
-2ChristianKl8y
If an appliance you buy fails, then you know that it fails. If a drug reduces your IQ by 5 points, you won't know. Drugs also don't get tested for whether or not they reduce your IQ by 5 points.
-2VoiceOfRa8y
Yes, it's still a bias. The difference is, if they fail, you can always buy a new appliance. You can't buy a new body.
2Good_Burning_Plastic8y
For some underwhelming value of "always", and anyway appliances aren't all that engineering makes. Off the top of my head, cases outside medicine where "harms take longer to show up & disprove than benefits" include leaded gasoline, chlorofluorocarbons, asbestos, cheap O-rings in space shuttles, the 1940 Tacoma Narrows Bridge, the use of two-digit year numbers...
0VoiceOfRa8y
Look at Feynman's analysis. I'd say this is a good example of disproportionate channeling of optimism.
0Good_Burning_Plastic8y
Yes. My point was that disproportionate channeling of optimism isn't something specific to medicine (let alone to evidence-based medicine). EDIT: Hmm, I guess I originally took "disproportionally" to mean "compared to how much other things channel optimism" whereas it'd make more sense to interpret it as "compared to how much medicine channels pessimism".
0Glen8y
Are there any other systems for judging medicine that more accurately reflects reality? I know very little about medicine in general, but it would be interesting to hear about any alternate methods that get good results.
0ChristianKl8y
It's hard to say how effective various alternative styles of medicine happen to be. There's research that suggests Mormons can recognize other Mormons from non-Mormons by looking at whether the skin of the other person looks healthy. Mormons also seem to live 6 to 10 years longer than other Americans. On the other hand, the nature of claims like this is that it's hard to have reliable knowledge about it.
[-][anonymous]8y20

"The first step is to establish that something is possible; then probability will occur."

Elon Musk

2[anonymous]8y
"It is a mistake to hire huge numbers of people to get a complicated job done. Numbers will never compensate for talent in getting the right answer (two people who don't know something are no better than one), will tend to slow down progress, and will make the task incredibly expensive." Elon musk Merry Christmas beloved LessWrong family. I think I finally get the format of these threads. How did I not read them properly earlier!
2[anonymous]8y
"My biggest mistake is probably weighing too much on someone's talent and not someone's personality. I think it matters whether someone has a good heart." I recently watched a company go from a billion in revenues to zero when a founder stole $90 million from the company. Integrity, humility, and doing your best is by far the most important consideration when evaluating whether to work for someone. Elon musk
[-][anonymous]8y20

If you want to understand another group, follow the sacredness.

--Jon Haidt, The Righteous Mind

If the rule you followed brought you to this, of what use was the rule?

-- The killer shortly before killing his victim in No Country for Old Men

9gwern8y
--Artabanus, uncle of Xerxes; book 7 of Herodotus's Histories (I could swear I'd seen this on a LW quote thread before, but searching turns up nothing.)
6Glen8y
(To make it clear: I have never seen the movie in question, so this is not a comment on the specifics of what happened) Just because it turned out poorly doesn't make it a bad rule. It could have had a 99% chance to work out great, but the killer is only seeing the 1% where it didn't. If you're killing people, then you can't really judge their rules, since it's basically a given that you're only going to talk to them when the rules fail. Everything is going to look like a bad rule if you only count the instances where it didn't work. Without knowing how many similar encounters the victim avoided with their rule, I don't see how you can make a strong case that it's a bad (or good) rule.
1Lumifer8y
That kinda depends on the point of view. If you take the frequentist approach and think about limits as n goes to infinity, sure, a single data point will tell you very little about the goodness of the rule. But if it's you, personally you, who is looking at the business end of a gun, the rule indeed turned out to be very very bad. I think the quote resonates quite well with this. Besides, consider this. Let's imagine a rule which works fine 99% of the time, but in 1% of the cases it leaves you dead. And let's say you get to apply this rule once a week. Is it a good rule? Nope, it's a very bad rule. Specifically, your chances of being alive at the end of the year are only 0.99^52 = about 60%, not great. Being alive after ten years? About half a percent.
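
The arithmetic in the comment above checks out; a quick sanity check (just a sketch restating the comment's numbers, a 1% weekly chance of death):

```python
p_week = 0.99                  # weekly survival probability
print(p_week ** 52)            # ~0.59: about 60% alive after a year
print(p_week ** (52 * 10))     # ~0.005: about half a percent after ten years
```
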
0roland8y
I agree. But this is not how I saw the quote. For me it is just a cogent way of asking "is your application of rationality leading to success"?
1Richard_Kennaway8y
Shorn of context, it could be. But what is the context? I gather from the Wikipedia plot summary that Chigurh (the killer) is a hit-man hired by drug dealers to recover some stolen drug money, but instead kills his employers and everyone else that stands in the way of getting the money himself. To judge by the other quotes in IMDB, when he's about to kill someone he engages them in word-play that should not take in anyone in possession of their rational faculties for a second, in order to frame what he is about to do as the fault of his victims. Imagine someone with a gun going out onto the street and shooting at everyone, while screaming, "If the rule you followed brought you to this, of what use was the rule?" Is it still a rationality quote?
0roland8y
I saw the movie and the context of the quote was that the killer was about to kill a guy that was chasing him. So we could say that the victim underestimated the killer. He was not randomly selected.
[-][anonymous]8y10

And we must study through reading, listening, discussing, observing and thinking. We must not neglect any one of those ways of study. The trouble with most of us is that we fall down on the latter -- thinking -- because it's hard work for people to think. And, as Dr. Nicholas Murray Butler said recently, 'all of the problems of the world could be settled easily if men were only willing to think.'

Thomas Watson

[This comment is no longer endorsed by its author]
6ChristianKl8y
These days we often have people who do think but don't do the others well enough.
1[anonymous]8y
I think that this part of the quote is an overstatement.
-1Lumifer8y
I actually think it's naive bullshit.
[-][anonymous]8y00

The basic key that I follow when engaging with antagonistic individuals is to recognize that we will always tend to imitate someone. In mimetic rivalries, the antagonism can come to dominate so much that the third pole (and there is always a third pole – a relationship, an issue, a symptom, etc.) becomes interchangeable. The key to avoiding rivalries is to introduce a new pole, which mediates your relationship to the antagonist. For me this pole is often Scripture. I renounce my claim to be thoroughly aligned with the pole of Scripture and refocus my atte

... (read more)
[This comment is no longer endorsed by its author]

Consensus tends to be dominated by those who will not shift their purported beliefs in the face of evidence and rational argument.

Jim

This appears to be empirically incorrect, at least in some fields. A few examples:

  • Creationists are much less willing to adjust their beliefs on the basis of evidence and argument than scientifically-minded evolutionists, but evolution rather than special creation is the consensus position these days.
  • It looks to me (though I confess I haven't looked super-hard) as if the most stubborn-minded economists are the adherents of at-least-slightly-fringey theories like "Austrian" economics rather than the somewhere-between-Chicago-and-Keynes mainstream.
  • Consensus views in hard sciences like physics are typically formed by evidence and rational argument.
4Viliam8y
Depends on what you mean by "consensus". For example, in some organizations it means "we will not make a decision until literally everyone agrees with it". In which case, stubborn people make all the decisions (until the others get sufficiently pissed off and fire them).
0gjm8y
Probably true. But I don't think that's the sort of thing Jim is talking about in the post redlizard was quoting from; do you?
0Viliam8y
Oh. I haven't followed the link before commenting. Now I did... and I don't really see the connection between the article and consensus. The most prominent example is how managers misunderstood the technical issues with Challenger: but that's about putting technically unsavvy managers into positions of power over engineers, not about consensus. (I wonder if this is an example of a pattern: "Make a statement. Write an article mostly about something else, using arguments that a reader will probably agree with. At the end, a careless reader is convinced about the statement.")
2VoiceOfRa8y
Technically unsavvy managers who insisted that the engineers tell them what they wanted to hear, i.e., who insisted that they be included in the consensus and then refused to shift their position.
-1gjm8y
I think that level of logical rigour is par for the course for this particular author.
4DanArmak8y
We have a special name for this; it's called science, and it's rather rare. It might still be a pretty good generalization of all human behavior to say that consensus tends to be dominated by those who won't change their opinion. Actually, I don't think it's a good generalization for reasons other than science. Most conflicts or debates devolve to politics, where people support someone instead of some opinion or position. And in politics, the top person or party is often replaced by a different one.
0VoiceOfRa8y
Even a lot of what gets called "science" isn't.
[-][anonymous]8y-20

"For it is easy to criticise and break down the spirit of others, but to know yourself takes maybe a lifetime" Bruce Lee

"Remember, my friend to enjoy your planning as well as your accomplishment, for life is too short for negative energy". -Bruce Lee

"We should devote outselves to being self-sufficient and must not depend upon the external rating by others for our happiness" -Bruce

"Remember, my friend, to ejoy your plannng as well as your accomplishment, for life is too short for negative neergy" Lee *Just realised a

... (read more)
[-][anonymous]8y-30

Because we live in a culture that fears being alone, being rejected, feeling unworthy and unlovable, we confuse love with attachment, dependency, sexual attraction, romantic illusion, lust, infatuation, or obligation.

-Loner Wolf

[-][anonymous]8y-40

How to predict if bombing ISIS in Syria is a good idea:

  1. Draw up a comprehensive spreadsheet of every 'Western' intervention (and almost-but-not-quite-intervention) in a foreign country.

  2. Rate each case by how similar it is to the present case (e.g. location, how long ago it was, civil war vs no civil war, religious war vs non-religious war, how many countries support the intervention, cultural differences between the countries involved, level of involvement, etc).

  3. Rate how much each intervention (or decision not to intervene) helped or hurt

... (read more)
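
The rating steps above presumably end in some kind of similarity-weighted average of past outcomes (the step Jiro criticizes below). A minimal sketch with entirely made-up numbers, just to make the proposed procedure concrete:

```python
# Each past case: (similarity to the present case on a 0-1 scale,
#                  how much that intervention/non-intervention helped, -10..+10)
past_cases = [
    (0.9, -4),
    (0.6, +2),
    (0.3, -7),
    (0.8, +1),
]

# Similarity-weighted average of outcomes, as a rough prediction for the present case.
expected = sum(sim * score for sim, score in past_cases) / sum(sim for sim, _ in past_cases)
print(f"similarity-weighted expected outcome: {expected:+.2f}")
```
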
1Anders_H8y
How do you plan to do this without counterfactual knowledge?
0[anonymous]8y
Take your pick. It requires a good handle on experiment design, but biostatisticians do this day in, day out. Hopefully risk analysts do this too in defense institutions.
3Anders_H8y
The original quote said to rate each intervention by how much it helped or hurt the situation, i.e. its individual-level causal effect. None of those study designs will help you with that: They may be appropriate if you want to estimate the average effect across multiple similar situations, but that is not what you need here. This is a serious question. How do you plan to rate the effectiveness of things like the decision to intervene in Libya, or the decision not to intervene in Syria, under profound uncertainty about what would have happened if the alternative decision had been made?
0[anonymous]8y
Yes, I concede that cross-level inferences between aggregate causes (the average of multiple similar situations) and individual-level causes have less predictive power than inferences across identical levels of inference. However, I reckon it's the best available means to make such an inference. Analysts have tools to model and simulate scenarios. Analysis of competing hypotheses is a staple of intelligence methodology. It's also used by earth scientists, but I haven't seen it used elsewhere. Based on this approach, analysts can:

  • make a prediction about outcomes in Libya with and without intervention
  • when they choose to intervene or not to intervene, calculate those outcomes
  • over the long term, by comparing predicted and actual outcomes, they may decide to re-adjust their predictions post hoc for the counterfactual branch

I'm not trying to downplay the level of uncertainty. Just that the methodological considerations remain constant.
2ChristianKl8y
Just for completion, Anders_H is one of those guys.
0[anonymous]8y
How self-referentially absurd. More precisely, epidemiologists do this day in, day out using biostatistical models, then applying causal inference (the counterfactual-knowledge part included). I said biostatisticians because epidemiology isn't in the common vernacular. Ironically, counterfactual knowledge is, to those familiar with the distinction, distinctly removed from the biostatistical domain. Just for the sake of intellectual curiosity, I wonder what kind of paradox was just invoked prior to this clarification. It wouldn't be the Epimenides paradox, since that refers to an individual making a self-referentially absurd claim: Anyone?
0ChristianKl8y
Yes, Anders_H is Doctor of Science in Epidemiology. He's someone worth listening to when he tells you about what can and can't be done with experiment design.
0[anonymous]8y
Oooh, an appeal to authority. If that is the case, he is no doubt highly accomplished. However, that need not translate to blind deference. This is a text conversation, so rhetorical questions aren't immediately apparent. Moreover, we're in a community that explicitly celebrates reason over other modes of rhetoric. So I interpreted his question about counterfactual conditions as sincere rather than disingenuous.
0ChristianKl8y
Yes, but if you disagree you can't simply point to "biostatisticians do this day in, day out" and a bunch of Wikipedia articles; you have to actually argue the merits of why you think those techniques can be used in this case.
0Richard_Kennaway8y
That is a tendentious way of comparing the two: a cold, abstract "level of improvement" against the more concrete "dollars" and very concrete "dead people". It suggests the writer is predisposed to find that intervention is a bad idea. But what is improvement, but resources then available to apply to better things, and live people living better lives? And why the reference class "Western"?
0Vaniver8y
Presumably, Wiblin is talking about Western bombing of ISIS in Syria. If one finds that Turkish interventions have been effective and American interventions haven't, say, then that's an argument that Americans shouldn't intervene now (but Turks should).
0Richard_Kennaway8y
Choose your reference class, get the result you want. Is Turkey "Western" or not? It wants to join the EU (but hasn't been admitted yet). Russia is bombing Syria. Why exclude it from the class of foreign interventions? For that matter, I don't know what military actions, if any, Turkey has taken in Syria, but that would also be a foreign intervention. Not to mention the smallness of N in the proposed study and the elastic assessment. I googled some of the phrases in the OP but only got hits to the OP. Is this even a quote?
-2Jiro8y
Rating each decision on a scale of 1 to 10 and then taking a weighted average is a recipe for biasing the result against intervention, since you've created a hard upper limit for how much you count an intervention as helping, so you'll count a successful intervention as 10 and be unable to count a successful intervention that does even more good as more than 10. (This has a similar problem at the low end of the scale, but that doesn't affect the final result since you can't go below zero intervention.)

This also produces bad results in cases where the intervention failed because it was insufficient. You'd end up concluding that intervention is bad when it may just be that insufficient intervention is bad.

This method has clause 2 to cover similarity of case, but not similarity of intervention, and at any rate "similarity" is a fuzzy concept. If bombing half the country is a disaster and bombing a whole country succeeds, is bombing half a country "similar" to bombing a whole country? (Actually, you usually end up compressing all the dispute over intervention into a dispute over how similar two cases are.)

And it's generally a bad idea to put on a numerical scale things that you can't actually measure numerically. It gives a false appearance of accuracy and precision, like a company executive who wants to see figures for his company improve but doesn't actually care where the figures come from.

Also, "level of improvement created" is subject to noise. It is possible for an improvement to fail for reasons unrelated to the effectiveness of the intervention, like if the country gets hit by a meteor the next day (or more realistically, gets invaded or attacked the next day).
-2The_Lion8y
Basically one huge problem here is that there isn't enough data compared to the number of variables involved. Not to mention that this is a problem in what Taleb would call extremistan, i.e., the distribution of possible outcomes from intervening, or not-intervening, are fat-tailed and include a lot of rare possibilities that haven't yet shown up in the data at all.
[-][anonymous]8y-40

" 'Bill is wrong, but bill works hard, so even though its the wrong solution, he's likely to succeed', and that the best compliment I ever received"

-Bill Gates, quoting someone else

If you were to rank order and say, I'm going to start a company, what's the highest return on investment for the risk? Space and Cars would be at the bottom.

-Elon Musk in the same vid