Another month, another rationality quotes thread. The rules are:

  • Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.
  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
235 comments

It seemed rather short

Misao Okawa, the world's oldest person, when asked "how she felt about living for 117 years."

One kid said to me, “See that bird? What kind of bird is that?” I said, “I haven’t the slightest idea what kind of a bird it is.” He says, “It’s a brown-throated thrush (or something). Your father doesn’t teach you anything!” But it was the opposite. My father had taught me, looking at a bird, he says, “Do you know what that bird is? It’s a brown-throated thrush. But in Portuguese, it’s a Bom da Peida; in Italian, a Chutto Lapittida." He says, "In Chinese, it’s a Chung-long-tah, and in Japanese, it’s a Katano Tekeda, et cetera." He says, "Now you know all the languages you want to know what the name of that bird is, and when you’re finished with all that," he says, "you’ll know absolutely nothing whatever about the bird. You’ll only know about humans in different places, and what they call the bird. Well," he says, "let’s look at the bird and what it’s doing."

--Richard Feynman, source; full video. (The above passage occurs at about the 7:00 mark in the full version.)

N.B. The transcript provided differs slightly from the video. I have followed the video.

Related to: Replace the Symbol with the Substance

Feynman knew physics but he didn't know ornithology. When you name a bird, you've actually identified a whole lot of important things about it. It doesn't matter whether we call a Passer domesticus a House Sparrow or an English Sparrow, but it is really useful to know that the males and females are the same species, even though they look and sound quite different, and that these are not all the same thing as a Song Sparrow or a Savannah Sparrow. It is useful to know that Fox Sparrows are all Fox Sparrows, even though they may look extremely different depending on where you find them.

Assigning consistent names to the right groups of things is colossally important to biology and physics. Not being able to name birds for an ornithologist would be like a physicist not being able to say whether an electron and a positron are the same thing or not. Again, it doesn't matter which kind of particle we call the electron and which we call the positron (arguably Ben Franklin screwed up the names there by guessing wrong about the direction of current flow), but it matters a lot that we always call electrons electrons and positrons positrons. Similarly, it's important for a chemist to know that Helium 3 and Helium 4 are both Helium and not two different things (at least as far as chemistry, and not nuclear physics, is concerned).

Names are useful placeholders for important classifications and distinctions.

Feynman knew physics but he didn't know ornithology. When you name a bird, you've actually identified a whole lot of important things about it.

I think Feynman's point was that a name is meaningful if you already know the other information. I can memorize a list of names of North American birds, but at the end I'll have learned next to nothing about them. I can also spend my days observing birds and learn a lot without knowing any of their names.

Assigning consistent names to the right groups of things is colossally important to biology and physics.

I don't think anyone will disagree with this. The hard part, though, is properly setting up the groups in the first place. Good classification systems took years (or centuries) of work and refinement to become the systems we take for granted today.

Not being able to name birds for an ornithologist would be like a physicist not being able to say whether an electron and a positron are the same thing or not.

Feynman has been quoted elsewhere criticizing students for parroting physics terminology without having the least idea of what they're actually talking about. There's the anecdote about students who knew all about the laws of refraction but failed to identify water as a medium with a refractive index.

3[anonymous]9y
Feynman wasn't really wrong, he just failed to mention that if you want to remember anything about a certain bird that you observed you will have to invent a name for it, because 'the traveler hath no memory'. Original names are OK if you only want the knowledge for yourself.
7Capla9y
I'm reminded of another Feynman anecdote: when he invented his own mathematical notation in middle school. It made more sense to him, but he soon realized that it was no good for communicating ideas to others.
0johnlawrenceaspden8y
Every time I try to learn to sight-sing I get sidetracked by trying to invent better notation for music. After many repeats of this process I've decided that music notation is pretty good, given the constraints under which it used to operate. Now I'm trying to just force myself to learn to sight-sing, already.
6lmm9y
Did you deliberately pick this example, where Feynman speculated that they might be the same thing? Names are useful as shorthand for a bundle of properties - but only once you know the actual bundle of properties. I sometimes think science should be taught with the examples first, and only given the name once students have identified the concept.
0[anonymous]9y
Semantics are important. On the other hand you don't get additional knowledge from getting the name in an additional language that treats the concept with the same semantic borders.
0fortyeridania9y
Yes, this is true.
6DanielLC9y
Knowing the name of the bird tells you next to nothing about it, but once you know the name it becomes much easier for people to tell you about it.
3dxu9y
Also related: Guessing the Teacher's Password

[Transcript from video, hence long and choppy]

I think the way the battle lines are drawn in the world we live in, the battle lines typically fall in terms of 'what are your conclusions?' Like: are you a republican; are you a democrat; are you a libertarian; are you a socialist? And the more I think about it, this strikes me as extremely odd.

Why should the battle lines be drawn in terms of conclusions? Another way of drawing the battle lines would be, say, in terms of how people think. So if I take someone like Matt [Yglesias?], who's one of the commenters - I read Matt's blog all the time. Matt, I think, would agree that he and I disagree on a lot of issues. Not on everything, but we disagree a lot. We disagree every day. We sort of write back and forth to each other and to others, and even if we don't call each other by name, we're, like, disagreeing in public every day.

But at the same time when I read Matt I have this feeling like 'if I were a progressive, this is the argument I would make'. I feel that way when I read Matt. There's other writers, like when I read Paul Krugman, I don't feel that way. I don't think if I were progressive I would argue like Paul Krugman.

So this me


Why should the battle lines be drawn in terms of conclusions?

Suppose I agree with someone's conclusion but disagree with the method they used to reach that conclusion. Are we political allies, or enemies? "Politics" is, of course, the answer to 'why should the battle lines be drawn this way?'

Now, for Tyler as a pundit, the answer is different. Staying in an intellectual realm where he thinks like the other people around him makes it so any disagreements are interesting and intelligible.

This is sort of related to what Scott argues in "In Favor Of Niceness, Community, And Civilization".

8Lumifer9y
I think the reasons for Tyler's positions are deeper than that. Don't think in terms of a single-round game; think in terms of a situation where you have to co-exist with the other party for a relatively long time and have some kind of a relationship with it. The conclusions about a particular specific issue of today are not necessarily all that important compared to sharing a general framework of approaches to things, a similar way of analyzing them...
5Vaniver9y
I also had in mind this bit of wisdom from Robin. As stated, this primarily matters for pundits. Notice that the methods of thinking that he's talking about don't reliably lead to the same conclusions; different values and different facts mean that two people who think very similarly (i.e. structure arguments in the same way) may end up with opposite policy preferences, able to look at each other and say "yes, I get what you think and why you think it, but I think the opposite." And so a particular part of the blogosphere will discuss policies in one way, another part another way, it'll be discussed a third way on television, and so on. But the battle lines will still be drawn in terms of conclusions, because policy conclusions are what actually get implemented, and it doesn't seem sensible to describe the boundaries between the areas where policies are discussed as "battle lines," when what they actually are is an absence of connections.
2TheOtherDave9y
When dealing with someone who comes to different conclusions than I do, but whose way of thinking I understand well, it's relatively easy for me to negotiate with them -- I can predict what offers they'll value, and roughly to what degree, and what aspects of their own negotiating position they're likely to be OK with trading off. Whereas negotiating with someone whose way of thinking I don't understand is relatively hard, and I can expect a significant amount of effort to be expended overcoming the friction of the negotiation itself, and otherwise benefiting nobody. Of course, I don't have to negotiate with someone who agrees with me, so in the short term that's an easy tradeoff in favor of agree-on-conclusions. But if I'm choosing people I want to work with in the future, it's worth asking how well agreeing on conclusions now predicts agreeing on conclusions in the future, vs. how well understanding each other now predicts understanding each other in the future. For my own part, I find mutual understanding tends to be more persistent. That said, I'm not sure whether negotiation is more a part of what you're calling "politics" here, or what you're calling "punditry," or neither, or perhaps both. But negotiation is a huge part of what I consider politics, and not an especially significant part of what I consider punditry.
2Lumifer9y
I continue to disagree. This matters a lot for people who are interested in maintaining the status quo and are very much against any drastic and revolutionary changes -- which often enough come from a different way of thinking.
5PeterisP9y
"Are we political allies, or enemies?" is rather orthogonal to that - your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals. For example, a powerful and popular extreme radical member of the "opposite" camp who holds conclusions you disagree with, uses methods you disagree with, and is generally toxic and spewing hate - that's often a prime example of a political ally, one whose actions incite the moderate members of society to start supporting you and focusing on your important issues instead of something else. The existence of such a pundit is important to you; you want them to keep doing what they do and have their propaganda be successful up to a point. I won't go into examples of particular politicians/parties of various countries, as that gets dirty quickly, but many strictly opposed radical groups are actually allies in this sense against the majority of moderates; sometimes they even actively coordinate and cooperate despite the ideological differences. On the other hand, consider a public speaker who targets the same audience as you do, shares the same goals/conclusions and the intended methods to achieve them, but simply does it consistently poorly - by using sloppy arguments that alienate part of the target audience, or by disgusting personal behavior that hurts the image of your organization. That's a good example of a political enemy, one you must work to silence, to get ignored and not heard, despite being "aligned" with your conclusions. And of course, a political competitor who does everything you want to do but holds a chair/position that you want for yourself is also a political enemy. Infighting inside powerful political groups is a normal situation, and when (and if) it goes public, very interesting political arguments appear to distinguish one from one's political enemies despite sharing most of the platform.

your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.

That's not how other humans interpret "alliance," and using language like that is a recipe for social disaster. This is a description of convenience. Allies are people that you will sacrifice for and they will sacrifice for you. The NAACP may benefit from the existence of Stormfront, but imagine the fallout from a fundraising letter that called them the NAACP's allies!

Whether or not someone is an ally or an enemy depends on the context. As the saying goes, "I against my brother, and I and my brother against my cousins, I and my brother and my cousins against the world"--the person that has the same preferences as you, and thus competes with you for the same resources, is potentially an enemy in the local scope but is an ally in broader scopes.

5Epictetus9y
Allies are those who agree to cooperate with you. An alliance may be temporary, limited in scope, and subject to conditions, but in the end it's all about cooperation. A stupid enemy who makes mistakes certainly benefits your cause and is a useful tool, but he's no ally.

One problem is that most people think we are always in the short run. No matter how many times you teach students that tight money raises rates in the short run (liquidity effect) and lowers them in the long run (income and Fisher effects), when the long run actually comes around they will still see the fall in interest rates as ECB policy "easing". And this is because most people think the term "short run" is roughly synonymous with "right now." It's not. Actually "right now" we see the long run effects of policies done much earlier. We are not in an eternal short run. That's the real problem with Keynes's famous "in the long run we are all dead."

Scott Sumner

6Zubon9y
In practice, the economic "long run" can happen exceedingly quickly. Keynes was probably closer to right with "Markets can remain irrational longer than you can remain solvent," but if you plan on the basis of "in the long run we are all dead," you might find out just how short that long run can be.
0D_Alex9y
If we need to look to economics for rationality quotes, we are getting towards the bottom of the barrel, Robin Hanson notwithstanding.
1Grant9y
Macroeconomics? Sure, it's highly politicized, so in many cases I'll agree with that. But microeconomics is in many ways the study of how to rationally deal with scarcity. IMO, traditional micro assuming homo economicus is actually more interesting (and useful, outside of politics) than the behavioral stuff for this reason.

Nothing is more dangerous than an idea if it's the only one you have.

-- Émile Auguste Chartier, Propos sur la religion, 1938

Because it is often easy to detect the operation of motivated belief formation in others, we tend to disbelieve the conclusions reached in this way, without pausing to see whether the evidence might in fact justify them. Until around 1990 I believed, with most of my friends, that on a scale of evil from 0 to 10 (the worst), Communism scored around 7 or 8. Since the recent revelations I believe that 10 is the appropriate number. The reason for my misperception of the evidence was not an idealistic belief that Communism was a worthy ideal that had been betrayed by actual Communists. In that case, I would simply have been victim of wishful thinking or self-deception. Rather, I was misled by the hysterical character of those who claimed all along that Communism scored 10. My ignorance of their claims was not entirely irrational. On average, it makes sense to discount the claims of the manifestly hysterical. Yet even hysterics can be right, albeit for the wrong reasons. Because I sensed and still believe that many of these fierce anti-Communists would have said the same regardless of the evidence, I could not believe that what they said did in fact correspond to the evidence. I made the mistake of thinking of them as a clock that is always one hour late rather than as a broken clock that shows the right time twice a day.

Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, Cambridge, 2007, pp. 136-137, n. 16

I just realized what bothers me about this quote. It seems to boil down to Elster trying to admit that he was wrong without having to give credit to those who were right.

8gjm9y
Yup, he appears to be doing that, on the grounds that he has other reasons for thinking they don't deserve credit for it. Rather than commenting on the credibility of that in Elster's specific case (which would depend on knowing more than I do about Elster and about the anti-communists he paid attention to), I'll remark that there certainly are cases in which most of us here would do likewise. (Not literally zero credit, but extremely little, which I think is also what Elster's doing.) For instance:

  • One of your friends is an avid lottery enthusiast and keeps urging you to buy a ticket "because today might be your lucky day". He disdains your statements that buying lottery tickets is a substantial loss on average and insists that he's made a profit from playing the lottery. (Maybe he actually has, maybe not.) Eventually you give in and buy one ticket. It happens to win a large prize.
  • Another of your friends is a fundamentalist of some sort and tells you confidently that the current scientific consensus on evolution is all bunk. Any time she reads of any scientific claim about evolution she is liable to tell you confidently that in time it'll be refuted by later research. One day, a new discovery is made that refutes something you had said to her about evolution (e.g., that X is more closely related to Y than to Z).
  • Another worships the ancient Roman gods and tells you with great confidence that it will rain tomorrow because he has made sacrifice to Jupiter, Neptune and the lares and penates of his household. You are expecting a dry day because that's what the weather forecasts say. It does in fact rain a bit.
3[anonymous]9y
Is being anti-lottery some kind of badge of honor amongst intelligent people? It is entertainment, not investment. It is spending money to buy a feeling of excited expectancy. It is like buying a movie ticket. Does anyone consider buying a ticket to a scary horror movie irrational? Some people just like that kind of excitement. People who buy lottery tickets just like a different kind of excitement: dream, fantasy. As for the argument that it is a mis-investment of emotions, that is also false: people can decide to work toward a goal, and then what happens is a lot of grinding; they can still dream about something else, as it is not like you cannot dream while you grind. Realistic goals do not need a lot of dream investment but rather time and effort and it is safe to invest dreams in unrealistic ones. When I read Eliezer's mis-investment of emotions argument, it came across to me as an elitist Bay Area upper middle class thing. People in slums usually need to grind until they get better schooling and job experience to escape, and this takes time investment, not dream investment, which leaves them free to dream about one day being a prince.

Realistic goals do not need a lot of dream investment but rather time and effort and it is safe to invest dreams in unrealistic ones.

I think this is factually untrue. It seems to me that time and effort investment follows dream investment, for basic psychological reasons.

When I read Eliezer's mis-investment of emotions argument, it came across to me as an elitist Bay Area upper middle class thing.

I think that's because you misread it, or you're identifying correct financial attitudes with being upper middle class and throwing in the rest of the descriptions for free. Here's the part where he talks about mechanisms:

If not for the lottery, maybe they would fantasize about going to technical school, or opening their own business, or getting a promotion at work—things they might be able to actually do, hopes that would make them want to become stronger.

Going to technical school is not an "elitist Bay Area upper middle class thing." Yes, later he talks about dot-com startups doing IPOs, but the vast majority of new businesses started are things like barbershops and restaurants, and people go to technical school to learn how to repair air conditioning systems, …

1[anonymous]9y
Basically you are saying constant grinding requires constant motivation - or discipline? But in reality all it takes is the precommitment of shame. Example 1: you come from a working-class or slum family and get into a university as the first one in the family. Your mom and grandma brag to the whole kin and neighborhood about what a genius you are. At that point you are strongly precommitted, not exactly through your own choice: you don't want the shame of letting down 100 people who treat you like a genius by dropping out. Example 2: you get your first real job and it sucks, but your dad has been proudly supporting a family for 25 years now on a similarly sucky one, and to get his approval / not feel ashamed in his eyes you need to stick to it until you get enough experience for a better one. I think the elitism part is precisely in the lack of this kind of shame-precommitment: elites have discretionary goals, doing what they want, not what they must to get ahead, and thus need constant motivation. If you would quit a job once it stops being fun, you are of the elites in this sense. If you stick to it until it does not feel shameful to quit, then not. And this is why for the majority constant motivation is not required for constant grinding.
7Lumifer9y
I think it's quite the reverse: elites have strong shame-precommitment, it's only a few levels higher. All your family went to Harvard and you're going to fail?? Your ancestors have Ph.Ds three generations back and you're not enrolling in a graduate program?? X-D Of course I mean elites not of the Kardashian kind.
8gjm9y
I was careful to specify that your hypothetical friend enjoins you to buy lottery tickets on the grounds that it is good for you financially. I agree that if you get great enjoyment from the thought that you might win the lottery, buying lottery tickets may be worth it for you. (But two caveats on that last point. Firstly, if you enjoy daydreaming about getting rich then you can equally daydream about unexpected legacies, spectacular success of companies in your pension/investment portfolio if you have one, eccentric billionaire arbitrarily giving you a pile of money, etc. Of course these are improbable, but so is winning much in the lottery. Secondly, "dream investment" may lead you astray by, e.g., making all the most mentally salient paths to success the terribly improbable ones involving lotteries rather than the more-probable ones involving lots of hard work, and demotivating the hard work. Whether it actually has that effect is a question for the psychologists; I don't know whether it's one that's been answered.)
7Good_Burning_Plastic9y
Good point; I'm retracting my comment elsethread. I'm guessing the hard part is figuring out which way the causation goes -- maybe not having mentally salient paths to success involving lots of hard work makes people more likely to buy lottery tickets, rather than or as well as vice versa.
4Lumifer9y
Why do you need to pay money to someone in order to daydream? The problem is that "dreaming" often replaces grinding.
3[anonymous]9y
Don't people who go to amusement parks or Disneyland basically pay other people in order to have a daydream session? I mean, I can't imagine people walking around dreaming about winning a lottery, it would be Charlie and the Chocolate Factory. (Now that's a book about humanity outcompeted by a more profitable life form under the guidance of an omnipotent being.)
2Lumifer9y
No, they pay other people to provide experiences for them, experiences which they can't get otherwise on their own.
2[anonymous]9y
How is 'you can safely put on a princess's dress when you are in certain company, and pay in some amount of social embarrassment if you are wrong about the company' different from 'you can safely pay a small amount for a chance to put on any dress you want in any company whatsoever'? Buying a ticket is an experience you can't get otherwise on your own. (I mean yes, I largely agree with you, but I am not sure what exactly I agree with, therefore the nitpicking.)
3Lumifer9y
Huh? I don't understand.
3[anonymous]9y
Well, in what way is buying a ticket not paying other people to provide you an experience which you can't get otherwise on your own? Earning money is different, you expect to be paid a fixed sum and for many, there are multiple ways to do it.
-1Lumifer9y
In the way that I can, on my own, daydream about having a million dollars. I don't need to pay other people for that.
2Epictetus9y
If you want a strictly positive chance at getting a million dollars and the thrill of looking up the lottery drawings to see if you won, then you have to pay for it. People buy lottery tickets to have a fleeting, tangible hope, not just an imagined one.
-1Lumifer9y
You have a strictly positive chance of having a rich relative, unknown to you, die and leave her fortune to you. Ah, that's a good point. Yes, if you want the gambling thrill, then you have to pay for it, I agree. However, from the expected-loss point of view, going to a casino is much better than buying lottery tickets...
0Nornagest9y
For that matter, there's a strictly positive chance that a meteor made of two tons of platinum will fall from the sky tomorrow and flatten my car in the driveway before I'm done brushing my teeth. The probability of almost anything you can think of is going to be positive, unless it's physically impossible -- and even there you have model uncertainty to take into account.
0Lumifer9y
Yes, of course, which is why talking about "strictly positive chances" in this context (of suddenly acquiring wealth) is kinda silly.
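Lumifer's expected-loss comparison can be made concrete with a rough sketch. The payout rates below are illustrative assumptions, not figures from the thread: state lotteries commonly return somewhere around half of ticket sales as prizes, while American roulette keeps the 2 house pockets out of 38.

```python
# Expected loss per $100 wagered, under assumed (illustrative) payout rates.
lottery_payout = 0.50      # assume ~50% of ticket sales returned as prizes
roulette_payout = 36 / 38  # American roulette: 2 house pockets out of 38

stake = 100.0
lottery_loss = stake * (1 - lottery_payout)    # expected loss at the lottery
roulette_loss = stake * (1 - roulette_payout)  # expected loss at the wheel

print(f"lottery expected loss:  ${lottery_loss:.2f}")
print(f"roulette expected loss: ${roulette_loss:.2f}")
```

Under these assumptions the lottery loses roughly ten times as much per dollar staked, which is the sense in which the casino is "much better" from the expected-loss point of view.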
0seer9y
No, you give them appropriate credit for their correct predictions, and appropriate discredit for their incorrect predictions.

You shouldn't give credit or discredit directly for correctness of predictions, if you have information about how those predictions were made. If you saw someone make their guess at tomorrow's Dow Jones figure by rolling dice, you don't then credit them with any extra stock-market expertise when it happens that their guess was on the nose; they just got lucky. (Though if they do it ten times in a row you may start to suspect that they have both stock-market expertise and skill in manipulating dice.)

0Good_Burning_Plastic9y
Substantial? The tickets of all lotteries I'm familiar with cost less than a movie ticket.
1Lumifer9y
Yes. Households earning less than $13,000 a year spend a shocking 9% of their money on lottery tickets.

Someone else follows the citation trail and claims the source thinks the actual number is lower:

households with an income of less than $10,000 spend, on average, approximately 3% of their income on the lottery.

Upvoted for checking claims :-)

The link actually says that he cannot find the original source for the 9% number, but in the process found a 3% number.

I'll dig around for better numbers if I have time, but we can also look at significance from the other end:

State lotteries have become a significant source of revenue for the states, raising $17.6 billion in profits for state budgets in the 2009 fiscal year (FY) with 11 states collecting more revenue from their state lottery than from their state corporate income tax during FY2009.

(Wikipedia)

P.S. An interesting paper. Notable quotes:

The introduction of a state lottery is associated with a decline of $115 per quarter in household non-gambling consumption. This figure implies a monthly reduction of $23 in per-adult consumption, which compares to average monthly sales of $18 per lottery-state adult. The response is most pronounced for low-income households, which on average reduce non-gambling consumption by three percent. Among households in the lowest income third of the CEX sample, the data demonstrate a statistically significant reduction in expenditures on food eaten in the home (3.1 percent) and on home mortgage, rent, and o

9Good_Burning_Plastic9y
Okay, now I can see where all the people giving financial reasons why lotteries are bad are coming from.
7Good_Burning_Plastic9y
$300/year (unless someone is a bored millionaire) is still shocking to me.
5Nornagest9y
Assume a flat distribution from 0 to 10000 and it's $150 a year, or about a lottery ticket and a half per week at $2 a ticket. Not too unreasonable. But on the other hand, you've got to figure lottery spending's unevenly distributed, probably following something along the lines of the 80/20 rule, and that brings us back to a ticket a day or higher.
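The back-of-envelope arithmetic above can be sketched explicitly, under exactly the assumptions stated: incomes flat on $0-$10,000, the 3% spending figure, and $2 per ticket.

```python
# Back-of-envelope check of the lottery-spend arithmetic.
mean_income = (0 + 10_000) / 2           # $5,000 under a flat distribution
annual_spend = 0.03 * mean_income        # 3% of income on lottery tickets
weekly_tickets = annual_spend / 52 / 2   # at $2 a ticket

print(f"annual spend: ${annual_spend:.0f}")       # about $150
print(f"tickets per week: {weekly_tickets:.2f}")  # about a ticket and a half
```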
5gjm9y
Seems plenty unreasonable to me. If your income is somewhere on "a flat distribution from 0 to $10000" then you are probably just barely getting by, and perpetually one minor financial difficulty away from disaster. If you were able to save $150/year, that could make a really substantial difference to your financial resilience. (Though I don't much like pronouncing from my quite comfortable position on how those in poverty should spend their money. It's liable to sound like a claim of superiority, but in fact I do plenty of stupid and counterproductive things and it's entirely possible that if I were suddenly thrown into poverty I'd manage much worse than those people; I doubt I'd be buying lottery tickets, but I'd probably be making other mistakes that they don't.) [EDITED to fix a bit of incredibly clunky writing style.]
1Good_Burning_Plastic9y
It still breaks my formerly favourite analogy, movie tickets -- I don't think the average household making <$10k/year spends $150/year on movie tickets. (Some such households probably do, but I strongly doubt the average one does.)
4[anonymous]9y
But more on booze, probably, otherwise how could they bear it.
2johnlawrenceaspden8y
A family of four can probably blow $50 seeing one movie.
0gjm9y
Substantial as a fraction of what you spend on lottery tickets. Obviously if you don't spend much you can't lose much.
4satt9y
I think you've boiled away Elster's message there. He's vivifying a general observation about (meta-)motivated cognition (the first quoted sentence) with an embarrassing personal example (the following sentences).
-3Jayson_Virissimo9y
Jane Elmer, Explaining Anti-Social Behavior: More Amps and Volts for the Social Sciences EDIT: In case it wasn't clear, I disagree that "it is often easy to detect the operation of motivated belief formation in others". Also, when your opponents strongly believe that they are right and are trying to prevent a great harm (whether they have good arguments or not), this often feels from the inside like they are "manifestly hysterical".
2arundelo9y
Or just:
0Jiro9y
How is having a paragraph that applies to [$POLITICAL_BELIEF] not the same as making a fully general argument? Or are you just saying that the original statement about Communism was a fully general argument?
2arundelo9y
I'm saying that I think the original quote (which I did think was good) would have been improved qua Rationality Quote by removing the specific political content from it. (Much like the "Is Nixon a pacifist?" problem would have been improved by coming up with an example that didn't involve Republicans.)
4Pablo9y
I think the problems associated with providing concrete political examples are in this case mitigated by the author's decision to criticize people on opposite sides of the political debate (Soviet communists and hysterical anti-communists), and by the author's admission that his former political beliefs were mistaken to a certain degree.
2arundelo9y
True.
-5Lumifer9y

When you see a good move, look for a better one.

Emanuel Lasker

Lasker may have said this, but it also pre-dates him: http://en.wikipedia.org/wiki/Emmanuel_Lasker#Quotations

It's also not always good advice. Sometimes you should just satisfice. Chess is often one of these times, as you have a clock. If you see something that wins a rook, and spend the rest of your time trying to win a queen, you're not going to win the game.

9dxu9y
Of course it isn't. But I don't think that's a very good standard to be holding most forms of advice to. Very little advice is always good advice; nearly all sayings have exceptions. The fact is, however, that Lasker's (sort of Lasker's, anyway) quotation is useful most of the time, both in chess and out of chess (since unless you're playing a blitz game, you're likely to have plenty of time to think), and for a rationality quote, that suffices.
2ChristianKl9y
I don't think that's the case. On LW I would expect that more people suffer from perfectionism than there are people who satisfice too readily.
2dxu9y
On LW, certainly. In general, no.
5Good_Burning_Plastic9y
This raises an interesting question -- what should I do with Rationality Quotes entries which I think are preaching to the choir, i.e. they are good advice for most of the general population but most of the people who will actually read them here had better do the reverse? Should I upvote them or downvote them?
3TheOtherDave9y
Would you rather see more quotes like that? Or fewer? Or are you not sure?
1bentarm9y
It's not at all obvious to me that the failure mode of not looking for a better move when you've found a good one is more common than the failure mode of spending too long looking for a better move when you've found a good one - in general, I think the consensus is that people who are willing to satisfice actually end up happier with their final decisions than people who spend too long maximising, but I agree that this doesn't apply in all areas, and that there are likely times when this would be useful advice. In the particular example I gave, if you've already found a move that wins a rook, then it's all but irrelevant if you're missing a better move that wins a queen, as winning a rook is already equivalent to winning the game, but there are obviously degrees of this (it's obviously not irrelevant if you settle for winning a pawn and miss checkmate). This suggests you should be careful how you define a "satisficing" solution, but not necessarily that satisficing is a bad strategy (in the extreme, if your "good move" is a forced checkmate, then it's obviously a waste of time to look for a "better move", whatever that might mean).
1dxu9y
Hm... I'm not sure you're interpreting me all that charitably. You keep on mentioning a dichotomy between satisficing and maximizing, for instance, as if you think I'm advocating maximizing as the better option, but really, that's not what I'm saying at all! I'm saying that regardless of whether you have a policy of satisficing or maximizing, both methods benefit from additional time spent thinking.

Good satisficing =/= stopping at the first solution you see. This is especially common in programming, I find, where you generally aren't under a time limit (or at least, not a "sensitive" time limit in the sense that fifteen extra minutes will be significant), and yet people are often willing to settle for the first "working" solution they see, even though a little extra effort could have bought them a moderate-to-large increase in efficiency. You can consciously decide "I want to satisfice here, not maximize," but if you have a policy of stopping at the first "acceptable" solution, you'll miss a lot of stuff. I'm not saying satisficing is bad, or even that satisficing isn't as good an option as maximizing; I'm saying that even when satisficing, you should still extend your search depth by a small amount to ensure you aren't missing anything. (And I'm speaking from real life experience here when I say that yes, that is a common failure mode.)

In terms of the chess analogy (which incidentally I feel is getting somewhat stretched, but whatever), I note that you only mention options that are very extreme--things like losing rooks, queens, or getting checkmated, etc. Often, chess is more complicated than that. Should you move your knight to an outpost in the center of the board, or develop your bishop to a more active square? Should you castle, moving your king to safety, or should you try and recoup a lost pawn first? These are situations in which the "right" move isn't at all obvious, and if you spot a single "good" move, you have no easy way of knowing if there's not a better
4Lumifer9y
Taken literally, this is obviously and trivially true. You get more resources, your solution is likely to improve. But in the context, the benefit is not costless. Time (in particular in a chess game) is a precious resource -- to justify spending it you need cost-benefit analysis. Your position offers no criteria and no way to figure out when you've spent enough resources (time) and should stop -- and that is the real issue at hand.
5dxu9y
Position is also a precious resource in chess. You need to structure your play so that the trade-off between time and position is optimal, and cutting off your search the moment you think of a playable move is not that trade-off. Evidence in favor:

1. I've personally competed in several mid-to-high-level chess tournaments and have an Elo rating of 1853. Every time I've ever blundered, it's been because of a failure to give the position a second look. Furthermore, I can't recall a single time the act of giving the position a second look has ever led me to time trouble, except in the (trivial) sense that every second you use is precious.
2. I have personally interacted with a great deal of other high-rated players, all of whom agree that you should in general think through moves carefully and not just play the first good-looking move that you see.
3. Lasker, a world-champion-level player, was the one quoted as giving this advice, and according to Wikipedia (thanks, bentarm), the saying actually predates him. If the saying has survived this long, that's evidence in favor of it being true.

Nor am I claiming to offer such a way. I agree that the optimal configuration is difficult to identify, and furthermore that if it weren't so, a great deal of economics would be vastly simpler. My claim is a far weaker one: that whatever the optimal configuration is, stopping after the first solution is not it. This may sound trivial, and to a regular LW reader, it very well may be, but based on my observations, very few regular (as in not explicitly interested in self-improvement) people actually apply this advice, so it does seem important enough to merit a rationality quote dedicated to it.
3Lumifer9y
By the way, in certain situations it's analytically solvable -- see e.g. here.
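A classic example of an analytically solvable stopping problem (which may or may not be what the link above covers) is the secretary problem: observe the first n/e candidates of a random sequence without committing, then accept the first later candidate better than everything seen so far; this selects the single best candidate with probability about 1/e ≈ 0.37. A minimal Monte Carlo sketch, with function names of my own invention:

```python
import random

def secretary_rule(candidates, observe_frac):
    """Observe the first observe_frac of candidates without committing,
    then accept the first later candidate better than all observed."""
    n = len(candidates)
    cutoff = int(n * observe_frac)
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c
    return candidates[-1]  # forced to take the last candidate

def success_rate(n=100, observe_frac=0.37, trials=20000, seed=0):
    """Fraction of trials in which the rule picks the single best candidate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        cs = rng.sample(range(n), n)  # a random permutation of candidate ranks
        if secretary_rule(cs, observe_frac) == n - 1:
            wins += 1
    return wins / trials
```

With the ~1/e cutoff the simulated success rate comes out near 0.37, matching the analytic optimum; shifting `observe_frac` noticeably away from 1/e lowers it, which is the cost-benefit criterion Lumifer is asking for in this special case.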
3dxu9y
That's really interesting. Thanks for the link!
2Lumifer9y
You're successfully demolishing a strawman. Is anyone claiming what you are arguing against?
1[anonymous]9y
Perhaps lesson is that all such sayings mere wisdom-facets, not whole diamond. Appreciate the facet for its beauty, yes, but understand that there are others, including the one most opposite on the other side...perhaps should be something generally understood in thread such as this. Do not sense real disagreement in this conversation. Thinking has benefits, all agree, and thinking has costs, all agree...doubt Lasker himself waited to move until he knew he had the most perfect move, and yet he no doubt lost and observed others losing because of a move played too rashly....
1Lumifer9y
That's the optimal situation :-) Sometimes such sayings are a body part of an elephant. And occasionally -- of a donkey X-D
0dxu9y
No, which is why I feel Lasker's quote is a good rationality quote. If people are constantly expressing disagreement, that's evidence that something's wrong. (A decent level of disagreement is healthy, I feel, but not too much.) What happened is this: bentarm interpreted my position differently from what I intended and disagreed with his/her interpretation of my position, so I clarified said position and (hopefully) resolved the disagreement. If there's no longer anyone arguing against me, then that means I accomplished what I aimed to do.
0[anonymous]9y
Of course it isn't. But I don't think that's a very good standard to be holding most forms of advice to. Very little advice is always good advice; nearly all sayings have exceptions. The fact is, however, that Lasker's (sort of) quotation is useful most of the time, both in chess and out of chess (since unless you're playing a blitz game, you're likely to have plenty of time to think), and for a rationality quote, that suffices.
5IlyaShpitser9y
It is worth noting that Lasker often played the opponent, not the board (e.g. he was known to pick a move he knew was not optimal, but which his opponent found most uncomfortable). He would go for tactics vs positional players, and for slow positional play vs tactical players. He was very annoying to play against, apparently. Also was the champion for 27 years, while having an academic career. See also "nettlesomeness": http://marginalrevolution.com/marginalrevolution/2013/11/nettlesomeness-and-the-first-half-of-the-carlsen-anand-match.html
3Curiouskid9y
See also: "The Perfect/Great is the enemy of the Good"
1parabarbarian9y
Without the Perfect, the Good would have no standard for measurement. This is especially important when making popcorn or building airplanes.

Always take into consideration the fact that you might be dead wrong

--Sam Vimes, Jingo, Terry Pratchett

The vanity of teaching often tempteth a Man to forget he is a Blockhead.

George Savile, 1st Marquess of Halifax, Political, Moral and Miscellaneous Reflections

The mistakes are there, waiting to be made.

Savielly Tartakower, on the starting position in chess. Source.

027chaos9y
I don't play chess, or know how to play at all well, nor am I interested in learning. But are there any books by or about chess masters that I might find interesting, for teaching good habits of thought? Or even just a list of famous chess quotations?
2macrojams9y
"Willy Hendriks, Move First, Think Later: Sense and Nonsense in Improving Your Chess. To me, more interesting as behavioral economics and as epistemology than as a chess book. The author claims that most chess advice is bad, and that we figure out positional strategies only by trying concrete moves, not by applying general principles. You do need chess knowledge to profit from the book, but if you can manage it, it is one of the best books on how to think that I know." - Tyler Cowen, http://marginalrevolution.com/marginalrevolution/2013/04/what-ive-been-reading-24.html
1IlyaShpitser9y
Chess Fundamentals by Capablanca. Still the best book on learning positional chess, and in general "good taste" in position evaluation. There is a certain clarity of thought in this book. I am not sure how useful it is or whether it can "rub off." Available for free. ---------------------------------------- I think there are some vaguely autobiographical things by Botvinnik on preparing for matches, but it's more about discipline than thought habits.
0kamerlingh9y
The Art of Learning: A Journey in the Pursuit of Excellence by Josh Waitzkin is the memoir of a chess child prodigy who later became a Tai Chi Chuan world champion. It's organized around his advice on developing the good habits of thought that he discovered when he was training for chess. But they are applicable to many domains: he makes the argument that the habits that made him excel at chess were also what made him a world-class competitor in Tai Chi Chuan.
0[anonymous]9y
There is something in Nate Silver's The signal and the noise.
0slicko9y
Luckily you only have to make fewer mistakes than your opponent to win.
4[anonymous]9y
Describing good play as "making few mistakes" seems like the wrong terminology to me. A mistake is not a thing, in and of itself, it's just the entire space of possible games outside the very narrow subset that lead to victory. If you give me a list of 100 chess mistakes, you've actually told me a lot less about the game than if you've given me a list of 50 good strategies -- identifying a point in the larger space of losing strategies encodes far less information than picking one in the smaller space of winning. And the real reason I'm nitpicking here is because my advisor has always proceeded mostly by pointing out mistakes, but rarely by identifying helpful, effective strategies, and so I feel like I've failed to learn much from him for very solid information-theoretic reasons.
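The information-theoretic point can be put in toy numbers (wholly hypothetical, chosen only to make the asymmetry concrete): ruling out one particular losing strategy barely narrows a large search space, while being handed one winning strategy resolves the search outright.

```python
import math

total = 10**6    # hypothetical number of candidate strategies
winning = 10**2  # hypothetical number that actually win

# "This particular strategy is a mistake" rules out a single point,
# shrinking the search space from `total` to `total - 1`:
bits_per_mistake = math.log2(total / (total - 1))

# "This particular strategy wins" hands you a member of the winning
# set directly, worth the full surprisal of hitting that set by chance:
bits_per_winner = math.log2(total / winning)
```

Under these toy numbers, each named mistake is worth on the order of a millionth of a bit, while one named winner is worth about 13 bits, which is the sense in which an advisor who only points out mistakes communicates very little per correction.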
4gjm9y
Have you discussed this with him? Perhaps he hasn't noticed this and would be delighted to talk strategies. Perhaps he has a reason (good or bad) for doing as he does. (E.g., he may think that you'll learn more effectively by finding effective strategies for yourself, and that pointing them out explicitly will stunt your development in the longer run.) Perhaps his understanding of effective strategies is all implicit and he can't communicate it to you explicitly.
2[anonymous]9y
I've tried talking to him about it: he really does seem to possess only implicit understanding of what works and what doesn't. Well, that, and it just doesn't seem to occur to him, even upon my repeated requests, to lay out guidelines ahead of time.
4dxu9y
Actually, most chess players define a mistake as a move that falls outside the subset of moves that either maintain equality OR lead to victory. This classification significantly reduces the size of mistake-space in chess.
0[anonymous]9y
True, but it still leaves mistake-space the much larger space.
2slicko9y
My first downvote, yay! Didn't feel that bad :) Anyway, my comment was merely an attempt to allay the philosophical worries expressed in the parent quote and so I used the same terms; it wasn't meant as pedagogy.
1PlacidPlatypus9y
Minor nitpick, surely you mean possible moves, rather than possible games? The set of games that lead to defeat is necessarily symmetrical with the set that lead to victory, aside from the differences between black and white.

You cannot change and yet remain the same, though this is what most people want.

--Patrick J. MacDonald

7shminux9y
An interesting quote, but isn't it basically the definition of identity? The part that remains the same while changing all the while?

The specific context of that is in changing bad habits; the only way to improve is to do something different. Typically people would rather keep doing the same thing, but with better consequences.

1noitanigami9y
Changing while remaining the same is what Algebra is all about. Identify the quality you wish to hold invariant, then find the transformations that do so. Changing things while leaving them the same in important ways is how problems are solved.

Having got on well by adopting a certain line of conduct, it is impossible to persuade men that they can get on well by acting otherwise. It thus comes about that a man's fortune changes, for she changes his circumstances but he does not change his ways.

-Niccolo Machiavelli, The Discourses

Any model makes some inaccurate predictions but models can retain utility despite significant propensities for inaccuracy. Inaccurate predictions aid the choice of models for future predictions. Because of this, the central scientific problem in the computational study of the MBH mechanism is not the inaccuracy of the predictions. Rather, it is the absence of any particular prediction at all.

--R. Erik Plata and Daniel A. Singleton, A Case Study of the Mechanism of Alcohol-Mediated Morita Baylis-Hillman Reactions. The Importance of Experimental Observations.

"The danger in trying to do good is that the mind comes to confuse the intent of goodness with the act of doing things well."

  • Ursula K. Le Guin, Tales from Earthsea

When writing the history, the writer is sitting outside the time, in Olympian detachment, surveying what was said and done, with the knowledge of what overhyped fads will fall by the wayside, and what ignored actions will prove to be crucial. He hasn't got that for the present era; the writer is still meshed in the circumstances that lead to the hyping and the ignoring. Not to mention that he is very likely to be a partisan in the matter -- most who write histories of a thing are passionately attached to the thing itself. Which can also lead to a shocking change in tone in the last chapter too, as the calm recitation of facts gives way to the sound of axes grinding, even if the writer manages to make interesting observations.

The irony is that anyone who's done the history of things will have read, in his research, many, many, many writers making idiots of themselves because they do not realize they are enmeshed in their era, and yet this does not stop doing the same thing over again.

Mary Catelli

"Politics selects for people who see the world in black and white, then rage at all the darkness" -- Megan McArdle

4ChristianKl9y
Which people do you mean by that? Politicians might talk in terms of black and white to appeal to voters, but most of them don't think that way.
3Zubon9y
When you talk in terms of black and white all the time, it is very easy to forget that you don't think that way.
-4ChristianKl9y
This looks like a result of mind kill. The fact that you let yourself be blinded by someone's strategy means that you fail at reasoning. It doesn't help to moralize.
-2Lumifer9y
Follow the link and it will become clearer.

Anger is an evolutionary strategy that helps us deal with threats. It focuses our mind on the target, suppresses our fear and drives us to attack.

Anger is not evolution's answer to generic "threats." You don't get angry at the saber-toothed tiger charging you. Rather, it is a response to threats to social cohesion. People who break the rules make us angry even when they don't directly harm us. It's why people find themselves yelling at pedestrians who cross against the light even when the delay to the driver is a matter of seconds.

That's why politicians are angry: because they are trying to artificially create a sense of social cohesion in their coalition of voters.

Rob Lyman, in a discussion of why so many politicians have an angry persona.

[-][anonymous]9y220

You don't get angry at the saber-toothed tiger charging you.

The what? Rob never stubbed a toe in the dark and then launched an angry tirade on the offending piece of furniture?

The number of times I told my first, very bad car to eat a bag o' penises is, well, high.

And there is the saying that programmers know the language of swearing best - many bugs make one angry, not angry at something clear, just angry. Angry at the situation in general. Like why the eff did this have to happen to me when I need to run this script before I can go home? Aaargh. That kind of thing.

-2Salemicus9y
The furniture was put there by a thoughtless person not following the social rules. The bad car was built by shoddy engineers not living up to your expectations. The bug in the code was put there by a careless programmer not following agreed practice. And so on. Your examples merely serve to reinforce the notion that what makes us angry is people breaking the (possibly unwritten) rules or violating social cohesion.

Your examples merely serve to reinforce the notion that what makes us angry is people breaking the (possibly unwritten) rules or violating social cohesion.

That clashes with my introspection, unlike DeVliegendeHollander's account. When I stub my toe in the dark and start swearing, my thoughts are not anything to do with social rules or their violation (at least not at a conscious level); typically no one else is around, no other person enters my mind, and I'm just annoyed that I'm unnecessarily experiencing pain, and that annoyance doesn't feel like it has a moral element to it. It feels like a straightforward reaction to unexpected, benefit-free pain.

[-][anonymous]9y130

Sounds rather forced to me. How about a simpler hypothesis that anger is frustration, the expression of the bad feelings coming from expectations not being fulfilled?

2Salemicus9y
So would you get angry if a sabre-toothed tiger charged at you when you weren't expecting it? Do you get angry when a clear day gives way to rain? Do you get angry when a short story has a twist ending? Expectations not being fulfilled doesn't necessarily cause anger. It may lead to sadness, or laughter, or fear, or disappointment, or any number of emotions. But it normally only leads to anger when the frustrated expectation is about social rules.
7Good_Burning_Plastic9y
FWIW, Salemicus::anger ("how dare you!") and annoyance feel slightly but not very different to my System 1, much more similar to each other than, say, the various feelings that English labels as "love", and I don't normally feel the need of using different words for the two unless I want to be pedantic. I realize that anger is supposed to be what "They offered me a lousy offer in this Ultimatum game so I'd better turn it down even if I CDT::will be worse off otherwise people TDT::would continue to make me similarly lousy offers" feels like from the inside, but my System 1 has only a vague understanding of that, let alone of the fact that inanimate objects aren't actually playing Ultimatum with me (and I can't be alone on this last point otherwise no-one would have ever hypothesised that lightning came from Zeus), but YMMV. BTW, are you two native English speakers? (FTR I'm not.) This might be a case of languages labeling feeling-space differently, rather than or as well as people's feeling-spaces being different.
5[anonymous]9y
I am not, but I got convinced by Salemicus's argument. I realized that what I translate as "anger at the weather" is better translated as "being mad at the weather" or "being pissed at the weather" and anger here is not something like a short fuck-you feeling but more like the urge to launch a long rant or dressing-down.
3Salemicus9y
I am a native speaker, yes. I find it interesting that our intuitions clash so. I immediately found RL's account compelling on that basis, whereas others did not. This could be a case of different labelling, or even different emotional experience.
1Good_Burning_Plastic9y
The weirdest thing is that I do have the intuition "corresponding" (FLOABW) to the fact that if deterring someone from doing something can work in principle it might be a good idea to try but if it cannot possibly work it makes no sense to try (the "Sympathy or Condemnation" section of the "Diseased thinking" post makes perfect sense to me); when Mencius Moldbug pointed out that people react to the threat of anthropogenic global warming differently from the way they'd react to hypothetical global warming due to the Sun, I knew exactly what he was talking about. But, Rob Lyman's example is a very poor choice of a pointer to that intuition for me, exactly because it points me to stuff like stubbing a toe in the dark instead.
1seer9y
That's perfectly rational behavior. The two causes give different predictions about likely future warming.
3Good_Burning_Plastic9y
He explicitly specified that the predicted increase of radiative forcing due to solar activity in his hypothetical would equal the predicted increase of radiative forcing due to greenhouse gases in the real world. Sure, there is still a difference between the two situations akin to that described in the Diseased Thinking post I linked upthread, in that shaming people into not emitting as much CO2 might in principle work whereas shaming the Sun into not shining as much cannot possibly work (though Moldbug still has a point as the cost-effectiveness of the former is probably orders of magnitude less than most people would guess). I know you can't shame a saber-toothed tiger into not charging you either, but still Moldbug's example worked for me and Lyman's didn't for whatever reason. EDIT: Might be because I'd think of an increase in the solar constant in Far Mode but I'd think of a saber-toothed tiger in Near Mode.
0Desrtopa9y
For me at least, the answers are no, yes, and no respectively. We can further refine the prior hypothesis by stipulating that the bad feelings arise from expectations not being fulfilled in an unpleasurable way, which would stop it from generating the third situation as an example. As for the first, perhaps one might experience anger if it were not being overridden by the more pressing reaction of fear. Or perhaps the hypothesis is off base, but it seems to generate some correct predictions of anger which the hypothesis that anger only arises from frustrated expectations about social rules fails to generate.
0[anonymous]9y
My intuitive answer would be yes, but now I am realizing that for me sadness or fear is probably much closer to anger than for you. In my mind they all are "feel bad, be unhappy and express it too". I suppose if we define anger in a very granular and precise way and not just as a general bad feeling, "being mad at" but more like giving a long rant, it can only apply to humans, because I will swear at the rain but only briefly, to let steam out; I will not give a long angry rant to it. I will be "mad at it", but not angry in that social sense; that much is clear. Halfway conceded: anger in the very granular sense only applies to humans. But. Can you think of a counter-example where 1) humans violate our expectations 2) but it is not a social rule or cohesion violation, and do we get angry or not? This is very tricky, because our expectations are, of course, based on social rules! Usually. Now I am searching for a case when not.
1Salemicus9y
I already did give such an example - a short story with a "twist" ending. Such an ending violates our expectations (that's what makes it a "twist") but it doesn't break any social rule, so people often find these amusing, clever, etc. On the other hand, a "twist" ending in a context where there is a social rule against such endings might well make people angry - for example, if the recent movie Exodus: Gods and Kings had ended with the Israelites being drowned in the Red Sea and the Pharaoh triumphant, that would no doubt have upset many viewers.
0[anonymous]9y
Hmmm... most social rules generally want people to behave in predictable ways, for various reasons, so they avoid surprises. It seems almost like surprises are only allowed in special cases... I almost accept your point now, but one objection. A good and a weak soccer team play a match. Surprisingly, the weaker one wins. It was fair play. Nobody violated a rule. Still the fans of the losing one are angry - at their own team, because how could they let a much weaker team win. Is that a social rule violation, that if you are generally better you are never allowed to lose? Or just an expectation violation? Is it more of a bias on the side of fans: their team must have violated the rule to try hard and not be lazy, because they cannot imagine any other explanation? If you generally agree, I accept your point with a modification: anger is about perceived social rule violation, but people are not perfect judges of social rule violations; there are mistakes made both ways, and tendentious, bias-driven mistakes. Thus, as in my soccer example, sometimes all you see at first is a violated expectation. You see no rule violation. Then you need to figure out why exactly other people may think it is a rule violation. This is not always easy and we don't do it that often, and thus often we just see a violated expectation, and not see how others perceive it as a rule-violation.
8[anonymous]9y
I just want to say I am glad to have lost this debate, because it is working. For me. I mean, yesterday I was able to manage my anger better by asking myself questions like "what social rule I think is broken here? Is that a real one or just my wish? If real, a reasonable one?" even when the answer was yes/yes just being conscious of it worked. I think I will shamelessly steal and apply this idea in discussions where it can be useful. Thanks a lot.
0ChristianKl9y
I think according to common usage of the terms they refer to different emotions. Anger is a state of energy. Frustration is a rather passive state. Anger doesn't get triggered for every unfulfilled expectation. It gets triggered if things aren't as they "should" be. If you think you don't deserve what you are expecting you get frustrated upon not getting it but not angry.
0dxu9y
And since the concept of "should" evolved as a primarily social mechanism, it makes sense that anger would be triggered by (perceived) social affronts.

“Things are not as they seem. They are what they are.” ― Terry Pratchett, Thief of Time

But, above all, there is the conviction that the pursuit of truth, whether in the minute structure of the atom or in the vast system of the stars, is a bond transcending human differences.

-- Arthur Eddington, "The Future of International Science", as quoted in An Expedition to Heal the Wounds of War: the 1919 Eclipse Expedition and Eddington as Quaker Adventurer

Gordon [Tullock] was on my dissertation committee. After reading all 252 pages of my dissertation within twelve hours of my submitting it, Gordon caught me in the Public Choice hallway at Virginia Tech to give me his assessment: "Minimal but acceptable." To which I replied, "Optimal. Done!"

-- Richard McKenzie, quoted on Econlog

Related engineer joke: "anybody can build a bridge that won’t collapse–but it takes a real engineer to build a bridge that just barely avoids collapse."

[-][anonymous]9y100

As to a "science" of human conduct, I have mentioned some difficulties, notably that one of the most distinctive traits of man is make-believe, hypocrisy, concealment, dissimulation, deception. He is the clothes-wearing animal, but the false exterior he gives to his body is nothing to that put on by his mind.

Frank Knight, "The Role of Principles in Economics and Politics" p.11

Probably not found anywhere online, but my favorite college professor, Ernest N. Roots, used to say, "Things that are simply remarkable become remarkably simple, once they are understood." This has been my personal defense against arguments from ignorance ever since.

4Vaniver8y
Welcome to LW! We post a new Rationality Quotes thread every month; the current one is October 2015 for a few more days, but you can find a link to the most recent one on the right sidebar if you're looking at Main (the header "Latest Rationality Quote" is a link to the page, above a link to the latest quote).

No, science is not a set of answers; it is a procedure.

Nassim Taleb

4James_Miller9y
True for scientists. But for most people science is indeed a set of answers.

I am a scientist, albeit the most junior kind of scientist, and I reckon "science" can legitimately refer to a set of answers or a methodology or an institution.

I doubt anyone in this thread would object if I called a textbook compiling scientific discoveries a "science textbook". I'm not sure even Taleb would blink at that (if it were in a low-stakes context, not in the midst of a heated argument).

Grant:
The information in a science textbook is (or should be) considered scientific because of the processes used to vet it. Absent this process, it's just conjecture. I often wonder if this position is unpopular because of its implications for economics and climatology.
soreff:
http://xkcd.com/397/
[anonymous]:
Well, this is a problem you have if your culture is so egalitarian that common people think they are entitled to their own opinions instead of quoting an authority: hopefully one that uses the scientific method properly.
[anonymous]:

Eric S. Raymond: "Interesting human behavior tends to be overdetermined."

Example sources:

http://esr.ibiblio.org/?p=4213

http://esr.ibiblio.org/?m=20020525

http://esr.ibiblio.org/?p=6599

I didn't understand this quote out of context so I followed one of the links and he explains it in this comment:

It's something I learned from animal ethology. An "overdetermined" behavior is one for which there are multiple sufficient explanations. To unpack: "For every interesting behavior of animals and humans there is more than one valid and sufficient causal theory." Evolution likes overdetermined behaviors; they serve multiple functions at once.

Science is the belief in the ignorance of experts.

Richard Feynman, What is Science?

gedymin:
This description fits philosophy much better than science.

Of all the causes which conspire to blind
Man's erring judgment, and misguide the mind,
What the weak head with strongest bias rules,
Is pride, the never-failing vice of fools.

-Alexander Pope, An Essay on Criticism

Facts push other facts into and then out of consciousness at speeds that neither permit nor require evaluation.

Neil Postman, Amusing Ourselves to Death, p. 70

JoshuaZ:
I'm not sure I understand. Can you expand on what the point is?
hargup:
Postman said this in the context of television and new-age media, where even "news" and other relevant information is shown for its entertainment value, and not because it can help us make better decisions.

Suppose I think, after doing my accounts, that I have a large balance at the bank. And suppose you want to find out whether this belief of mine is "wishful thinking." You can never come to any conclusion by examining my psychological condition. Your only chance of finding out is to sit down and work through the sum yourself.

-- C. S. Lewis

lmm:
This seems obviously false. Am I missing something?
g_pepper:
I think that C.S. Lewis means that when a person puts forth an assertion, you should ascertain the truth or falsity of the assertion by examining the assertion alone; the mental state of the person making the assertion is irrelevant. Presumably Lewis is arguing against the genetic fallacy, or more specifically, Bulverism. Edit: Why the downvote? My comment was fairly non-controversial (I thought).
Jiro:
Whether a belief is wishful thinking is inherently an assertion about the mental state of a person. It is meaningless to say that you should examine the assertion instead of the mental state, since the assertion is an assertion about the mental state.
g_pepper:
I don't know about that. Merriam-Webster defines wishful thinking as: So if my calculations are accurate, per Merriam-Webster's definition, I have not engaged in wishful thinking.
Jiro:
Something can be wishful thinking and true at the same time. Doing the sum wouldn't prove that it's not wishful thinking. Of course having the sum be correct is a necessary condition for non-wishful thinking, but it does not determine the existence of non-wishful-thinking all by itself.
DanielLC:
No it's not. You can be wrong for reasons other than wishful thinking.
Jiro:
When A is being correct and B is wishful thinking, what I said is that A implies B, which reduces to (B || ~A). What you're saying is that ~A does not imply ~B, which reduces to (B && ~A). Of course, these two statements are compatible.
DanielLC:
I think you messed up there. Being correct certainly doesn't imply wishful thinking. You were saying that non-wishful thinking implies being correct. That is ~B implies A. Or ~A implies B, which is equivalent. If I checked my balance and due to some bank error was told that I had a large balance, I would probably have the sum be incorrect but still be using non-wishful thinking. The sum being correct is not a necessary condition for non-wishful thinking. All the other combinations are possible as well, though I don't feel like going through all the examples.
Jiro:
You're right, I meant to say that B implies A, not to say that A implies B. However, that is still equivalent to (B || ~A) so the rest, and the conclusion, still follow.
DanielLC:
B implies A would be wishful thinking implies that you are correct. This is obviously false. You clearly intended to have a not in there somewhere. Double check your definitions. I was giving an example of (~A && ~B). If you want an example of (A && B), it would be that I don't even look at my statements and just assume that I have tons of money because that would be awesome, but I also just happen to have lots of money.
Jiro:
It being a law of the Internet that corrections usually contain at least one error, that applies to my own corrections too. In this case the error is the definitions of A and B. A=being correct, B=non-wishful-thinking. "Having the sum be correct is a necessary condition for non-wishful thinking" means B implies A, which in turn is equivalent to (B || ~A). "You can be wrong for reasons other than wishful thinking" means ~(~B implies ~A), which is equivalent to ~(~B || A), which is equivalent to B && ~A. Same conclusions as before, and they're still not inconsistent.
DanielLC:
Now that we have that out of the way, we can start communicating. A counterexample to (B || ~A) would be (~B && A), so wishful thinking while still being correct. As I said in my last post, you just assume you have a lot of money because it would be awesome, and by complete coincidence, you actually do have a lot of money. Now that we have established the language correctly and I looked through my first post again, you are correct and I misread it. I tried to go back and count through all the mistakes that led to our mutual confusion, and I just couldn't do it. We have layers of mistakes explaining each other's mistakes.

History teaches us, gentlemen, that great generals remain generals by never underestimating their opposition.

-- Gen. Antonio Lopez de Santa Anna, in The Alamo: Thirteen Days to Glory (1987 TV film)

fortyeridania:
Overestimating can be costly too. That's why bluffing can work, in poker as in war. Examples/articles:
* Empty Fort Strategy
* 100 horsemen and the empty city (gated). Here are two articles summarizing the original paper: Miami SBA and ScienceDaily
PeterisP:
The most important decisions are the ones made before starting a war, and there the mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means you just lose out on some opportunistic conquests; underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more. There are many nice examples of that in 20th-century history.
fortyeridania:
Peace or cold war are not the only possible outcomes. Surrender is another. An example is the conquest of the Aztecs by Cortez, discussed here, here, and here. Surrender can (but need not) have disastrous consequences too.
ChristianKl:
Generals are not the people who decide whether or not a war gets fought; they are the people who decide individual battles.
DanielLC:
If you're unbiased then you should be underestimating your opposition about half the time.

If your loss function is severely skewed, you do NOT want to be unbiased.

DanielLC:
What you want is to have a distribution. You will expect your opposition to be about as strong as it is. You will prepare for the possibility that it is stronger or weaker.
Lumifer:
A distribution is nice but often you have to commit to a choice. In such cases you generally want to minimize your expected loss (or maximize the gain) and if the loss function is lopsided, the forecast implied by the choice can be very biased indeed.
TheMajor:
Even with a very skewed loss function you want to have an accurate estimate of your opposition, which will be an underestimate about half of the time, and then take excessive precautions. Your loss function does not influence your beliefs, only your actions.
Lumifer:
Yes, but actions are what you should care about; if these are determined, your beliefs (which in this case do not pay rent) don't matter much.
TheMajor:
So why would I want to bias myself after I've decided to take excessive precautions? I think we're in agreement, btw: we care about actions, and if you have a very skewed loss function then it is rational to spend a lot of effort on improbable scenarios in which you lose heavily, which from the outside looks similar to a person with a less skewed loss function thinking that those scenarios are actually plausible. I was just trying to point out that DanielLC's reply was correct and your previous one is not; even with a skewed loss function, this should not produce feedback to your actual beliefs, only to your actions. So no, you DO want to be unbiased; it's just that an unbiased estimate/posterior distribution can still lead to asymmetric behaviour (by which I mean spending an amount of time/effort to prepare for a possible future disproportionate to the actual probability of that future occurring).
Lumifer:
Well, let me unroll what I had in mind. Imagine that you need to estimate a single value, a real number, and your loss function is highly skewed. For me this would work as follows:
* Get a rough unbiased estimate
* Realize that I don't care about the unbiased estimate because of my loss function
* Construct a known-biased estimate that takes into account my loss function
* Take this known-biased estimate as the estimate that I'll use from now on
* Formulate a course of action on the basis of the biased estimate
The point is that on the road to deciding on the course of action it's very convenient to have a biased estimate that you will take as your working hypothesis.
TheMajor:
Yes. My point is that this new biased estimate is not your 'real estimate'; it is simply not your best guess/posterior distribution given your information. But as I remarked above, your rational actions given a skewed loss function resemble the actions of a rational agent with a less risk-averse loss function and a different estimate, so in order to determine your actions you can compute what [an agent with a less skewed loss function and your (deliberately) biased estimate] would do, and then just copy those actions. But despite all of this, you still want to be unbiased. It's fine to use the computational shortcut mentioned above to deal with skewed loss functions, but you need your beliefs to stay as accurate as possible to not get strange future behaviour.

A small, simplified example: Suppose you are in possession of $1001 total (all your assets included), and it costs $1000 to buy a cure for a fatal disease you happen to have/a ticket to heaven/insurance for cryonics. You most definitely don't want to lose more than one dollar. Then a guy walks up to you and offers a bet: you pay $2, after which you are given a box which contains between $0 and $10, with uniform probability (this strange guy is losing money, yes). Clearly you don't take the bet, since you don't actually care much whether you have $1000 or $1001 or $1009, but would be terribly sad if you had only $999. But instead of doing the utility calculation you can also absorb this into your probability distribution of the box: you only care about scenarios where the box contains less than a dollar, so you focus most of your attention on this, and estimate that the box will contain less than a dollar. The problem now arises if you happen to find a dollar on the street: it is now a good idea to buy a box, although the agents who have started to believe the box contains at most a dollar will not buy it. To summarise: absorbing sharp effects of your utility function into biased estimates can be a dec
Vaniver:
It seems to me that it's best to use "your beliefs" to refer to the entire underlying distribution. Yes, you should not bias your beliefs--but the point of estimates is to compress the entire underlying distribution into "the useful part," and what is the useful part will depend primarily on your application's loss function, not a generalized unbiased loss function.
Lumifer:
Sure it is my "real" estimate -- because I take real action on its basis. Let me make a few observations.

First, any "best" estimate narrower than a complete probability distribution implies some loss function which you are minimizing in order to figure out which estimate is "best". Let's take the plain-vanilla case of estimating the central point of a distribution which produced some sample of real numbers. The usual estimate for that is the average of the sample numbers (the sample mean), and it is indeed optimal ("the best") for a particular, quadratic, loss function. But, for example, change the loss function to absolute deviation (L1) and now the median becomes "the best estimate". The point is that to prefer any estimate over some other estimate, you must have a loss function already. If you are calling some estimate "best", this implies a particular loss function.

Second, the usefulness of any estimate is determined by the use you intend for it. "Suitability for a purpose" is an overriding criterion for estimates you produce. Different purposes ("produce an unbiased estimate" and "select a course of action" are different purposes) often require different estimates.

Third, "unbiased" is not an unalloyed blessing. In many situations you face the bias-variance tradeoff and sometimes you do want to have some bias.
fortyeridania:
This is a good point. A helpful discussion of asymmetric loss functions is here.
Desrtopa:
Only if you have no margin within which you can be considered to be "correctly estimating."

The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table.

--Isaac Asimov, "How Do People Get New Ideas?"

[anonymous]:

Social problems are not only hard but finally insoluble. Yet many of them will inevitably get some kind of "treatment"; it is a question of better or worse, or of making things better, more or less, or making them worse than before, even to downright disaster. As I remember hearing "Tommy" Adams say in a classroom, we must not call any problems insoluble which must be solved in some way and for which some solutions are better, or worse, than others.

Frank Knight, "The Role of Principles in Economics and Politics" p.19

His cla...

I never was good at learning things. I did just enough work to pass. In my opinion it would have been wrong to do more than was just sufficient, so I worked as little as possible.

Manfred von Richthofen

Scott Adams posted his "My best tweets" collection. About half of them are examples of instrumental rationality in action, and most are worth a laugh. Some of my favorites from the Arguing with Idiots section are in the replies.

shminux:
seer:
Depends on whether your goal is to convince the person you're talking to, or convince outside observers.
Manfred:
I just hope this is sufficiently selected that people who really do have problems with attacking people don't read this.
DanielLC:
If you actually are attacking them, you should still run away. Just for a different reason.
shminux:
Jiro:
I cannot construct a coherent argument for intelligent design, depending on what you mean by "coherent". I could construct an argument which is grammatically correct and uses lies, but I don't think you meant to count that as "coherent".
Epictetus:
If you have at your disposal an intelligent being who gets to decide the laws of physics and gets to set the initial conditions, then intelligent design is an easy consequence: "God set up the universe in such a way that allowed life to evolve according to His predetermined laws". If we ever get enough computing power to simulate intelligent life, then those simulations will have been intelligently designed and an argument very similar to the above will be true (an intelligent person wrote a program and set the initial parameters in such a way that intelligence was simulated). You can write a number of refutations of this argument (life sucks, problem of evil, Occam's razor, etc.), but I'd still say it's coherent.
shminux:
The quote basically describes the principle of charity 2.0: you seek to understand the logic of a position foreign to you not just to refute it or to convince the other person, or to construct a compromise. You do it to better understand your own side and any potential fallacies you ordinarily do not see in your own logic.
Jiro:
What if your understanding is "it has no valid logic"?
Lumifer:
You probably can if you start with a different set of axioms. Note that, for example, "God exists" is not a lie but a non-falsifiable proposition.
Jiro:
According to supporters of intelligent design, "intelligent design" implies not using any religious premises. So if you started with that axiom, then you're not really talking about intelligent design after all.
Jayson_Virissimo:
I don't think this is quite right. I think they claim that intelligent design doesn't imply using any religious premises. ~□(x)(Ix⊃Ux) rather than □(x)(Ix⊃~Ux). In other words, there is nothing inconsistent with a theist (using religious premises) and a directed panspermia proponent (not using any religious premises) both being supporters of intelligent design.
Jiro:
Okay, change it to "their version of intelligent design doesn't use any religious premises" and change my original statement to "I can't construct a coherent argument for their version of intelligent design".
Lumifer:
I don't think so, though it's possible to quibble about the definition of "religious premises". Intelligent design necessarily implies an intelligent designer who is, basically, a god, regardless of whether it's politically convenient to identify him as such.
Jiro:
Supporters of intelligent design may end up basically having a god as their conclusion, but they won't have it as one of their premises. And they have to do it that way. If God was one of their premises, teaching it in government schools would be illegal.
Lumifer:
I think you're confusing the idea of intelligent design with the culture wars in the US. The question was whether you can construct "a coherent argument for intelligent design", not whether you would be willing to play political games with your congresscritters and school boards.
Jiro:
No, the question was whether the "rationality quote" makes sense. I offered intelligent design as a counterexample, a case where it doesn't. Telling me that you don't think that what I described is intelligent design is a matter of semantics; its usefulness as a counterexample is not changed depending on whether it's called "intelligent design" or "American politically expedient intelligent-design-flavored product".
Lumifer:
And I disagree, I think it does perfectly well. The quote applies to actual positions, not to politically-based posturing.
Jiro:
That dilutes the quote to the point of uselessness. Probably most positions that people take involve posturing. But if you really want a different example, how about homeopathy? I can't construct an argument for that which is coherent in the sense that was probably intended, although I could construct an argument for that which is grammatically correct but based on falsehoods or on obviously bad reasoning.
seer:
What lies are those? What evidence convinced you that they are in fact lies? (That's how I would start.)
Jiro:
I said that I could construct such an argument. I think you'll agree that I am capable of constructing an argument that uses lies. It does not follow that I think all intelligent design proponents are liars, just that I could not reproduce their arguments without saying things that are (with my own level of knowledge) lies. (If you really want an irrelevant example of intelligent design proponents lying, http://en.wikipedia.org/wiki/Wedge_strategy )
27chaos:
It's a heuristic, not an automatic rule. Excluding religion and aesthetics, I can't think of any cases where it doesn't work. There are probably some which I just haven't thought of, but there certainly aren't very many.
Jiro:
I mentioned homeopathy above.
27chaos:
You don't have a small natural intuition in your brain saying that homeopathy makes sense? I do, although of course I ignore it.
Jiro:
I don't think that's the same thing as being able to construct a coherent argument.
shminux:
27chaos:
Better tell that to every book on negotiation ever, I guess. The human concept of justice is fickle, but nonetheless real. Appeals to it, if done skillfully, can be very advantageous.
shminux:
Just letting you know that I dislike your repetitive snark.
shminux:
27chaos:
This seems anti-rational, like a boo-light.

This is Hari's business. She takes innocuous ingredients and makes you afraid of them by pulling them out of context.... Hari's rule? "If a third grader can't pronounce it, don't eat it." My rule? Don't base your diet on the pronunciation skills of an eight-year-old.

From http://gawker.com/the-food-babe-blogger-is-full-of-shit-1694902226

[S]tupidity is one of two things we see most clearly in retrospect. The other is missed chances.

Stephen King, 11/22/63

“My gripe is not with lovers of the truth but with truth herself. What succor, what consolation is there in truth, compared to a story? What good is truth, at midnight, in the dark, when the wind is roaring like a bear in the chimney? When the lightning strikes shadows on the bedroom wall and the rain taps at the window with its long fingernails? No. When fear and cold make a statue of you in your bed, don't expect hard-boned and fleshless truth to come running to your aid. What you need are the plump comforts of a story. The soothing, rocking safety of a ...

hairyfigment:
I admit that I have no children, but even that last part seems almost wholly false to me. Now, I might tell my hypothetical child that I'm a high Bayesian adept in the Conspiracy (passing actuarial exams/ordeals of initiation counts), that if spirits existed I'd be a mighty ceremonial magician (also probable) and therefore no ghost would dare harm my child.

The Bible says that God made the world in six days. Great Uncle Charles thinks it took longer: but we need not worry about it, for it is equally wonderful either way.

-- Margaret Vaughan Williams

https://en.wikipedia.org/wiki/Vaughan_Williams

[anonymous]:

[F]ingertips without maps are empty; maps without fingertips are blind.

-- Paul Churchland, chapter 2 of Plato's Camera

I know a man who, when I ask him what he knows, asks me for a book in order to point it out to me, and wouldn't dare tell me that he has an itchy backside unless he goes immediately and studies in his lexicon what is itchy and what is a backside.

-Montaigne, On Pedantry

This argument also relies on a ridiculous definition of rational.

Whilst rational economic actors do attempt to maximise their profit, the argument ignores that this takes place in the context of varying time windows. In effect it argues that it’s “rational” to take a tiny increase in profit today even if that destroys your business and all the potential long term profits you could obtain tomorrow and the day after. This definition is absurd and no actual business works that way.

Mike Hearn, Replace by Fee, a Counter Argument