I think there's widespread assent on LW that the sequences were pretty awesome. Not only do they elucidate a lot of useful concepts, but they provide useful shorthand terms for those concepts which help in thinking and talking about them. When I see a word or phrase in a sentence which, rather than doing any semantic work, simply evokes a positive association in the reader, I have the useful handle of "applause light" for it. I don't have to think "oh, there's one of those...you know...things where a word isn't doing any semantic work but just evokes a positive association in the reader". This is a common enough pattern that having the term "applause light" is tremendously convenient.

I would like this thread to be a location where people propose such patterns in comments, and respondents determine (a) whether this pattern actually exists and / or is useful; (b) whether there is already a term or sufficiently-related concept that adequately describes it; and (c) what a useful / pragmatic / catchy term might be for it, if none exists already.

I would like to propose some suggested formatting to make this go more smoothly.

(ETA: feel free to ignore this and post however you like, though)

When proposing a pattern, include a description of the general case as well as at least one motivating example. This is useful for establishing what you think the general pattern is, and why you think it matters. For instance:

General Case:

When someone uses a term without any thought to what that term means in context, simply to elicit a positive association in their audience.

Motivating Example:

I was at a conference where someone said AI development should be "more democratic". I didn't understand what they meant in context, and upon quizzing them, it turned out that they didn't either. It seems to me that they just used the word "democratic" as decoration to make the audience attach positive feelings to what they were saying.

When I think about it, this seems like quite a common rhetorical device.

When responding to a pattern, please specify whether your response is:

(a) wrangling with the definition, usefulness or existence of the pattern

(b) making a claim that a term or sufficiently-related concept exists that adequately describes it

(c) suggesting a completely fresh, hitherto-uncoined name for it

(d) other

(ETA: or don't, if you don't want to)

Obviously, upvote suggestions that you think are worthy. If this post takes off, I may do a follow-up with the most upvoted suggestions.


General case:

Small differences in the means of normal distributions cause large differences at the tails.

Motivating example:

East Africans are slightly better at distance running than the rest of the world population, so if a randomly-picked Ethiopian and a randomly-picked someone-else compete in a marathon, the Ethiopian has a better chance of winning, but not by very much. But at the extreme right tail of the distribution (i.e. at Olympic-level running competitions), the top runners are almost all Ethiopians and Kenyans.

In my head I call it "threshold amplification" but I wonder if there's an official name for this.
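To make the effect concrete, here's a minimal numerical sketch; the 0.3-standard-deviation gap between the groups and the cutoffs are made-up numbers for illustration, not real running data:

```python
# Minimal sketch: a small difference in means produces an ever-larger
# ratio of tail probabilities as the cutoff moves further out.
# The 0.3-SD gap and the cutoffs are illustrative, not real data.
from scipy.stats import norm

mean_a, mean_b, sd = 0.0, 0.3, 1.0  # group B is slightly better on average

for cutoff in [1, 2, 3, 4]:  # thresholds in units of standard deviations
    tail_a = norm.sf(cutoff, loc=mean_a, scale=sd)  # P(score > cutoff), group A
    tail_b = norm.sf(cutoff, loc=mean_b, scale=sd)  # P(score > cutoff), group B
    print(f"cutoff {cutoff}: group B is {tail_b / tail_a:.1f}x as likely to exceed it")
```

At a cutoff of one standard deviation the slightly-better group is only about 1.5 times as likely to qualify; at four standard deviations it is more than three times as likely, and the ratio keeps growing the further out you go.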

I would love a name for this too, since the observation is important for understanding why 'small' differences in means for normally distributed populations can have large consequences, and this occurs in many contexts (not just IQ or athletics).

Also good would be a quick name for log-normal-distribution-like phenomena.

The normal distribution can be seen as the sum of lots of independent random variables; so, for example, IQ is normally distributed because the genetics is a lot of small additive variables. The log-normal arises when it's the product of lots of independent variables; so any process where each step is necessary, as has been proposed for scientific productivity with its multiple steps like ideas -> research -> publication.

The normal distribution has the unintuitive behavior that small changes in the mean or variance have large consequences out on the thin tails. But the log-normal distribution has the unintuitive behavior that small improvements in each of the independent variables will yield large changes in their product, and that the extreme datapoints will be far beyond the median or average datapoints. ('Compound interest' comes close but doesn't seem to catch it because it refers to increase over time.)
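A quick simulation sketch of the contrast; the number of factors and their range below are arbitrary choices, purely for illustration:

```python
# Sketch: sums of independent factors look roughly normal, while products
# look roughly log-normal, with extreme datapoints far beyond the median.
# The factor count (10) and range (0.5-1.5) are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
factors = rng.uniform(0.5, 1.5, size=(100_000, 10))  # 10 independent factors each

additive = factors.sum(axis=1)         # ~normal, by the central limit theorem
multiplicative = factors.prod(axis=1)  # ~log-normal (CLT applied to the logs)

for name, x in [("additive", additive), ("multiplicative", multiplicative)]:
    print(f"{name}: top 0.1% is about {np.quantile(x, 0.999) / np.median(x):.1f}x the median")
```

In a typical run the top 0.1% of the additive scores sits only about 30% above the median, while the top 0.1% of the multiplicative scores is more than an order of magnitude above its median, which is the 'extreme datapoints far beyond the median' behaviour described above.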

IQ is normally distributed because the genetics is a lot of small additive variables.

IQ is normally distributed because the distribution of raw test scores is standardized to a normal distribution.

And why was the normal distribution originally chosen? Most of intelligence seems explained by thousands of alleles with small additive effects - and such a binomial situation will quickly converge to a normal distribution.
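A toy sketch of that convergence; the allele count, frequencies, and effect sizes below are made up purely to illustrate the additive/binomial point:

```python
# Toy sketch: a trait built from many small additive allele effects is
# approximately normal. Allele count, frequencies, and effect sizes are made up.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_alleles = 20_000, 1_000
genotypes = rng.binomial(2, 0.5, size=(n_people, n_alleles))  # 0/1/2 copies of each allele
effects = rng.normal(0.0, 1.0, size=n_alleles)                # per-allele effects, each small relative to the total
trait = genotypes @ effects                                    # purely additive sum over alleles

# Standardized skew and excess kurtosis near zero indicate approximate normality.
z = (trait - trait.mean()) / trait.std()
print("skew:", round(np.mean(z**3), 3), "excess kurtosis:", round(np.mean(z**4) - 3, 3))
```

In a run like this the skew and excess kurtosis should come out near zero, i.e. even a thousand loci are enough for the binomial pile-up to look very close to Gaussian.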

The phrase "additive effects" doesn't make sense except in reference to some metric. If your metric is IQ, then that's circular.

If your metric is IQ, then that's circular.

No, it's not, because IQ is itself extracted from a wide variety of cognitive measures.

You seem to be claiming that there are some unspecified underlying other metrics of which IQ is simply a linear combination. If so, then IQ is not the ultimate metric. Which doesn't contradict my claim (claiming that P is not true does not contradict the claim that P -> Q). It does raise the question of what those metrics are.

It does raise the question of what those metrics are.

To expand on what I just said: IQ is a factor extracted from a wide variety of cognitive measures, whose genetic component is largely explained by additive effects from a large number of alleles of small effect with important but relatively small nonlinear contributions. That is, intelligence is largely additive because additive models explain much of observed variance and things like the positive manifold of cognitive tests.

Please be more precise in your comments, or stop wasting my time due to your lack of reading comprehension and obtuseness like you did before in my Parable post.

I think that "multiplicative" or "geometric" describes such phenomena.

satt

I've suspected for a long time that that was the insight Carl Sagan had while high and showering with his wife:

I can remember one occasion, taking a shower with my wife while high, in which I had an idea on the origins and invalidities of racism in terms of gaussian distribution curves. It was a point obvious in a way, but rarely talked about. I drew the curves in soap on the shower wall, and went to write the idea down. One idea led to another, and at the end of about an hour of extremely hard work I found I had written eleven short essays on a wide range of social, political, philosophical, and human biological topics. Because of problems of space, I can’t go into the details of these essays, but from all external signs, such as public reactions and expert commentary, they seem to contain valid insights.

(It is a little interesting, & amusing, to see someone inferring the "invalidit[y] of racism" from an observation more often used as a justification for racial hereditarian attitudes!)

Here's one that I don't think has a name-- the belief that [some desirable thing] should just happen. For example, the belief that people should just have different emotional reactions than they do, or that a government policy should just have good effects.

Shmi

Eliezer called this believing in the should-universe.

Incidentally, this expression is very intuitive and has an amazingly low inferential distance. Multiple times, IRL and online, I have replied to someone "it's too bad we don't live in a should-universe" in response to a should-statement, and my reply was instantly understood, without my having to explain much, except maybe saying "the should-universe is an imaginary one where what you think should happen actually does, every time".

Related notion: in the humongous SSC post Reactionary Philosophy in an Enormous Planet-Sized Nutshell, the section "Reach for the Tsars" deals with proposals to solve problems which could only be implemented by dictatorial fiat, and describes it as a "czar's-eye view" solution.

Expecting different emotions than the ones actually observed looks to me like typical mind fallacy.

It may be a typical mind fallacy if the person actually has the emotional habits they're demanding from other people. Now that I think about it, people sometimes demand that their own emotions should just be different.

However, a statement can include more than one fallacy, and I think fantasies of lack of process can also be in play.

The whole FAI project resulted from Eliezer realizing that a process was needed for AIs to be benevolent rather than a disaster.

As may be obvious, I now think the bias could be named the lack of process bias, though the "it should just happen!" bias might be more intuitive.

I was going to ask, "Do we ever demand emotional habits we don't have ourselves?", but then I noticed it was yet another typical mind fallacy on my part.

Meta-comment on this: I had a couple of examples I was going to suggest, but the process of following the above rules made it obvious that they were cases of existing concepts.

General Case:

In an otherwise well-constructed discussion on a subject, the author says something that reveals a significant misunderstanding of the subject, casting doubt on the entire piece, and the ability of the author to think about it sensibly.

Motivating Example:

A few years ago, a lot of public libraries in the UK were closed under austerity measures. Author Philip Pullman (a highly-educated, eloquent and thoughtful man) gave a speech on the subject, which was transcribed and widely circulated online. It was about the non-pecuniary value of libraries, and their value as educational and community resources. It was a very strong speech, but at one point it put forward the proposition that the value of libraries is completely incalculable and beyond measure. This took the wind out of the speech's sails for me, and my takeaway was "you write and speak very well, but you clearly can't be trusted to think about this subject in any useful way".

I experience this quite a lot. I'll be reading something online, mentally nodding along, thinking "yeah, this makes sense", and then the author will undermine all their credibility, not by saying something radical or obnoxious or unworkable or ignorant, but by saying something that demonstrates they don't know how to think properly about the issue.

"Red flag" isn't exactly what you want but has served me well enough in similar conversations.

That's similar but possibly not the same as the Gell-Mann amnesia effect.

but at one point it put forward the proposition that the value of libraries is completely incalculable and beyond measure

This could also just be rhetorical. Almost any sufficiently long argument will contain some really wrong or dumb elements, and most will contain some elements that simply aren't meant to be taken literally.

I think the format creates a frictional cost that will prevent valuable contributions. Can't we just post willy-nilly and let the good stuff bubble up to the top of the comments as usual?

If you like. I'll change "rules" to "suggested format" in the post.

We could call the collection ... RationalityTropes! Or maybe something catchier: SmartTropes. Not LWTropes, though, that sounds like something RationalWiki would set up as a sucks-site.

Oh god, it's following me.

...ahem. Strictly speaking, a trope is a storytelling device -- a more-or-less stereotyped pattern that an author uses to express theme or symbolism or compactly illustrate character or plot. (There's an even narrower definition, too, but that's the common one.) TV Tropes' habit of giving real-life examples is therefore improper.

This would be something more general, and I'm not sure English has a word for it.

TV Tropes' habit of giving real-life examples is therefore improper.

I don't think it's improper. There's a common notion that humans perceive life (history, politics, society) in terms of narratives. We construct those narratives using stereotypes; in other words, the narratives have common themes and archetypes. There is a lot of commonality to how we parse a book and how we parse actual events. For the same reason, people reporting on real events deliberately present them as narratives.

So it's legitimate for TV Tropes to list tropes "in real life"; what they're really describing is the narrative through which humans perceive (and remember) that real life.

Saying people perceive life in terms of narratives is correct. Describing motifs in those narratives in a quasi-objective way isn't. The very reason a stereotype is not a fact is that it doesn't show up in everyone's internal narrative in a given situation.

I don't object to finding an example of, say, dramatic irony in William Shirer's Rise and Fall of the Third Reich, which explicitly is a narrative covering real events. I do object to saying the same thing about World War 2, the real one.

I'm not sure I understand your point. I'm afraid I may be strawmanning or misconstruing your position in my reply. If I do, please point it out.

Describing motifs in those narratives in a quasi-objective way isn't. The very reason a stereotype is not a fact is that it doesn't show up in everyone's internal narrative in a given situation.

Certainly, different people can tell very different stories (both internal and external) about the same events. They can perceive different tropes or motifs at work. And when they talk about these events, each will describe them as he sees them.

Any one story can objectively contain a certain motif. Reality itself doesn't contain motifs, because it's not a story. And people can disagree about motifs because they tell different (internal) stories about the same set of facts. If that's what you're saying, I completely agree.

Also, sometimes it makes sense to try to be as objective as possible and describe facts without fitting any theory or story to them. That's not the same as saying those stories don't exist. We just ignore them some of the time.

However:

I don't object to finding an example of, say, dramatic irony in William Shirer's Rise and Fall of the Third Reich, which explicitly is a narrative covering real events. I do object to saying the same thing about World War 2, the real one.

World War 2 did not exist in reality. All there was, was a huge number of individual events. It takes human storytelling to join them into the story of a great global war. To say that the Japanese were part of the war, but it only started in 1939 even though they had been at war with China and the USSR for years before that, because Hitler's invasion of Poland is more narratively important than his invasion of Czechoslovakia... That is pure narrative storytelling.

Facts from the territory are much smaller-scale than WW2; it exists only in our maps, and it's inherently a human narrative, which means it can legitimately exhibit irony, although of course people can disagree about the irony in a particular story. The territory doesn't contain irony, but nobody would say it does, because nobody would say an individual event local in space and time is ironic without reference to a larger narrative.

I see no relevant difference between Shirer's book and anything you or I might say or think about "World War 2"; one is written down and the other is not, that is all.

General case:

When someone posts links from webpage X, which can be refuted from webpage Y (or vice versa), and so on, without adding anything themselves to the discussion.

Motivating example:

I've often seen things posted on climate change, lifted directly from http://wattsupwiththat.com/ , that can be refuted from http://www.skepticalscience.com/ , which can often be re-refuted from the original website, and so on. Since we're just letting the websites talk to each other, and neither poster has any relevant expertise, this seems a pointless waste.

An argument that halts in disagreement (or fails to halt in agreement) because the interlocutors are each waiting for another to provide a skillful assessment of their own inexpertly-referenced media sounds a lot like a software process deadlock condition in computer science. Maybe there's a more specific type of deadlock, livelock, resource starvation, ..., in the semantic neighborhood of your identified pattern.

Dropping references, while failing to disclaim your ability to evaluate the quality and relevance of topical media, could be called a violation of pragmatic expectations of rational discourse, like Grice's prescriptive maxims.

Maybe a telecommunications analogy would work, making reference to amplifiers / repeaters / broadcast stations that degrade a received signal if they fail to filter / shape it to the characteristics of the retransmission channel.

"Rhetorical reenactment" sounds like "historical reenactment" and hints at the unproductive, not-directly-participatory role in the debate of the people sharing links.

I'm not sure whether to start a new comment thread on this, but a related phenomenon:

Blog A has a post about some subject. Blog B has a post that is mostly just recapitulating the points of Blog A, and links to Blog A. Blog C has a post also on the subject, and rather than linking to Blog A, links to Blog B. Blog D then comes along and links to Blog C, and so on, and so rather than a bunch of blog posts all linking to the original post, you have a chain of blogs citing blogs citing blogs citing blogs. (This sort of phenomenon shows up a lot of times when Snopes tries to research something, although often it's print media citing each other). I'm reminded of the phrase "it's turtles all the way down", and think of this as "turtle citing", although perhaps a more descriptive phrase would be "recursive citation".

Another related phenomenon is people using anchor text for their links that really doesn't reflect the actual link content.

/u/Morendil calls this 'leprechauns'; in a Wikipedia context, one might use 'citogenesis'. I run into this occasionally - most recently: https://en.wikipedia.org/wiki/Talk:Bicycle_face#Serious_sourcing_issues

This happens when the debaters' personal level of knowledge and expertise has been exceeded by external sources introduced to the debate. Essentially, then, each person is using an appeal to authority with different ideas of what level of authority their sources have, since it is well beyond their abilities to verify their sources' arguments. Terms like Epistemic Closure in political contexts address the related phenomenon of conflicting but self-consistent networks of authoritative sources.

I'd call the underlying issue an "Epistemic Divide".

satt

For a year or two I've occasionally thought about writing a post about the principle that one shouldn't try to explain what isn't true, which feels to me important and deserves a catchy name. (But (1) I couldn't think of clear-cut, uncontroversial, specific local examples to exhibit, and (2) it's a special case of "Your Strength as a Rationalist" and "Fake Explanations", so I haven't bothered.)

satt

Some people are more rational than others, but no one is "rational" simpliciter, because no one meets the stringent criterion of applying perfect Bayesian reasoning to everything (or even most things). Consequently, calling people "rational" without qualification is an inflationary use of the term.

Nonetheless, people on LW sometimes refer to rationality as if it's a binary quality some people have and some people don't, which doesn't make sense to me. Searching LW for the phrase "rational people" returns similar examples of this. (In fairness, a lot of examples of the phrase refer to hypothetical ideal reasoners, or are ironic uses, which I'm OK with.)

It'd be useful to replace "rational" in these contexts with a word for someone who meets the looser standard of "thinks systematically & impartially about something without labouring under any obvious bias or appealing to fallacies" — basically the kind of ideal a traditional rationalist or sceptic might use. I've been using the word "quasi-rational", but there's probably a catchier word out there. (Pre-rational? Proto-rational? Sub-rational?)

tut

No. When a word is used "simpliciter", all qualifications that are obviously necessary are implicit. So when somebody is said to be rational, it means that, with regard to the things that are relevant in the context you are talking about, they are more rational than the usual standard (probably most people, or most people in some group that is obvious from the context).

So the term you are looking for is "rational".

satt

I don't think that can be true in general. One of my examples had someone invoking Aumann's agreement theorem as follows:

So it seems to me that the Aumann's Agreement Theorem is irrelevant in the real life... until you gain enough rationality and social skills to find and recognize other rational people, and to gain their trust.

Interpreting "rational people" in a quantitative, "more rational than the usual standard" sense there won't work, because Aumann's agreement theorem assumes perfect Bayesian rationality, not merely better-than-usual rationality. I reckon the sentence I quoted is just plain false unless one interprets "rational people" in an absolute sense.

tut

Yes, that statement is just plain false. The problem behind this is people referring to game-theoretic agents as "[perfectly] rational people", and then others who hear them assuming that the 'rational people' in game theory are the same kind as real 'rational people'.

Rationality means more than one thing. One of the things it means is taking the pro-science, anti-god side in the Culture Wars. That may well be what it means when used as a binary.

satt

Yes, people sometimes use "rational" to refer to that too. But using the word in that sense on LW has a much bigger risk of muddying the meaning of the term here, since the word's local canonical meaning is quite different.

There is a pattern that has several names already, but each is problematic:

liberal: has a strong conflict between its literal meaning (open, permissive) and its actual meaning (corresponding to a largely arbitrary political clustering)

leftist: more abstract than "liberal", and thus without the literal meaning baggage, but tainted by its use by people on the right as a slur against anyone who opposes rightist extremism

political correctness: has been corrupted by its use to mean whatever a person wants it to mean, anything from hypersensitivity to any sensitivity at all

feminazi: no explanation needed, I think

social justice: even more of a literal-meaning conflict than "liberal". Strongly suggests that the issue actually is social justice, rather than mind-killing in ostensible service of social justice

social justice warrior: the best term I've seen, and the added "warrior" term helps convey the sense of irrationality, but still has many of the problems of "social justice".

I really wish there were some good term for the leftist flavor of anti-rationality. Calling them "social justice warrior" just invites the question "Why are you opposed to social justice?"

I really wish there were some good term for the leftist flavor of anti-rationality.

There is one, and it's quite specific: Lysenkoism. If you mean something other than the historical movement based on Lysenko's theories, why not call it neo-Lysenkoism or something like that?