Casus Belli: As I was scanning over my (rather long) list of essays-to-write, I realized that roughly a fifth of them were of the form "here's a useful standalone concept I'd like to reify," à la cup-stacking skills, fabricated options, split and commit, and sazen.  Some notable entries on that list (which I name here mostly in the hope of someday coming back and turning them into links) include: red vs. white, walking with three, setting the zero point[1], seeding vs. weeding, hidden hinges, reality distortion fields, and something-about-layers-though-that-one-obviously-needs-a-better-word.

While it's still worthwhile to motivate/justify each individual new conceptual handle (and the planned essays will do so), I found myself imagining a general objection of the form "this is just making up terms for things," or perhaps "this is too many new terms, for too many new things."  I realized that there was a chunk of argument, repeated across all of the planned essays, that I could factor out, and that (to the best of my knowledge) there was no single essay aimed directly at the question "why new words/phrases/conceptual handles at all?"

So ... voilà.

(Note that there is some excellent pushback + clarification + expansion to be found in the comments.)


Core claims/tl;dr

  1. New conceptual distinctions naturally beget new terminology.

    Generally speaking, as soon as humans identify a new Thing, or realize that what they previously thought was a single Thing is actually two Things, they attempt to cache/codify this knowledge in language.  

    Subclaim: this is a good thing; humanity is not, in fact, near the practical limits of its ability to incorporate and effectively wield new conceptual handles.
     
  2. New terminology naturally begets new conceptual distinctions.

    Alexis makes a new distinction, and stores it in language; Blake, via encountering Alexis's language, often becomes capable of making the same distinction, as a result.  In particular, this process is often not instantaneous—it's not (always) as simple as just listening to a definition. Actual practice, often fumbling and stilted at first, leads to increased ability-to-perceive-and-distinguish; the verbal categories lay the groundwork for the perceptual/conceptual ones.
     
  3. These two dynamics can productively combine within a culture.

    Cameron, Dallas, and Elliot each go their separate ways and discover new conceptual distinctions not typical of their shared culture.  Cameron, Dallas, and Elliot each return, and each teach the other two (a process generally much quicker and easier than the original discovery).  Now Cameron, Dallas, and Elliot are each "three concepts ahead" in the game of seeing reality ever more finely and clearly, at a cost of something like only one-point-five concept-discoveries' worth of work apiece (one full discovery, plus two much-cheaper learnings).

    (This is not a metaphor; this is in fact straightforwardly what has happened with the collection of lessons learned from famine, disaster, war, politics, and science, which have been turned into words and phrases and aphorisms that can be successfully communicated to a single human over the course of mere decades.)
     
  4. That which is not tracked in language will be lost.

    This is Orwell's thesis—that in order to preserve one's ability to make distinctions, one needs conceptual tools capable of capturing the difference between (e.g.) whispers, murmurs, mumbles, and mutters.  Without such tools, it becomes more difficult for an individual, and much more difficult for a culture or subculture, to continue to attend to, care about, and take into account the distinction in question.
     
  5. The reification of new distinctions is one of the most productive frontiers of human rationality.

    It is not the only frontier, by a long shot.  But both [the literal development of new terminology to distinguish things which were previously thought to be the same thing, or which were previously invisible] and [other processes isomorphic to that process] are extremely relevant to the ongoing improvement of our mental tech.  Cf. Eliezer's sequences, which could reasonably be described as a tool whose main purpose is to cause some 50-500 new explicit concepts to be permanently ingrained in the reader's lexicon.

Background I: NVC

There is a communication paradigm called "Nonviolent Communication," often abbreviated as NVC.

NVC makes certain prescriptions about language, forbidding some words and phrases while encouraging others.  If you set out to learn it, much of what you will do in the early stages is likely to feel like fiddling with your speech: applying an effortful translation filter, as if you were taking English sentences and laboriously converting them into Russian using a dictionary and a grammar handbook.

For instance, a simple-seeming sentence like "You betrayed me" is just straightforwardly not expressible in NVC.  If you were talking to a chatbot that had been programmed to only understand NVC, it would be literally incapable of parsing "You betrayed me."  It would receive that string as incomprehensible gibberish, and throw up an error message.
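
(To make the chatbot image concrete: here's a minimal toy sketch, in Python, of what such a front end might look like.  Every rule list and function name in it is invented for this illustration; it describes no actual NVC software.)

```python
# Toy sketch: a "parser" that rejects judgment-language the way the
# hypothetical NVC-only chatbot would.  All rules here are invented
# for illustration; real NVC is a human practice, not a grammar.

FORBIDDEN_FRAMES = ["you betrayed", "you manipulated", "you used"]
ALLOWED_OPENERS = ["when i saw", "when i heard", "i felt", "i recall", "i would like"]

def parse_nvc(sentence: str) -> str:
    s = sentence.lower().strip()
    if any(frame in s for frame in FORBIDDEN_FRAMES):
        # The judgment-frame isn't merely discouraged; it is unparseable.
        raise ValueError(f"cannot parse {sentence!r}: asserts a judgment, not an observation or feeling")
    if not any(s.startswith(opener) for opener in ALLOWED_OPENERS):
        raise ValueError(f"cannot parse {sentence!r}: must open with an observation or feeling")
    return f"understood: {sentence}"

print(parse_nvc("When I saw you do X, I felt angry and hurt."))  # accepted
print(parse_nvc("You betrayed me."))                             # raises ValueError
```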

You can convey to the chatbot your underlying state of mind—the beliefs, experiences, observations, and emotions that led you to want to express "You betrayed me."  But in order to do that, you'll have to say a lot of other words, in what often feels (to a beginner) like a stupidly obtuse, rigid, ridiculous, ritualistic game of let's-pretend-we-don't-understand-what-words-mean.

"Fine.  You betrayed me, but since we're playing this idiotic game where I'm not allowed to say that, what I'll say is that, uh, when I saw you do X, despite the fact that you agreed to—sorry, despite the fact that I recall you agreeing to do Y, then I felt betr—what?  Seriously?  Okay, fine, I felt angry and manipul—FINE, I felt ANGRY and HURT and USED and I DID NOT LIKE IT."

This is often deeply dissatisfying.  There is a thing that is meant by "you betrayed me," a certain kind of social and emotional impact that the sentence is trying to have.  And the NVC version fails to convey that thing.  It fails to convey that thing by design—it is deliberately structured so as to make that thing inexpressible, and that (exact) impact unachievable.

Which means that many people abandon it, more or less immediately, because the actual thing they want is that impact, and therefore NVC is not the tool they need.

But an interesting thing happens, if one practices NVC for long enough (which for me was about four or five total hours, across a couple of weeks).

What happens is that one begins to feel unease around sentences like "You betrayed me." There begins to be a tiny flinch, a note of something not-quite-right.  It becomes easier to notice, and then harder to ignore—that there's something wrong with that sentence, and sentences like it.  Something wrong in truth, not just something not-in-compliance—something that more normal habits of thought and speech gloss over.

To be clear, NVC's recommended replacement sentences aren't flawless, either, which is why I do not often explicitly use NVC.  But they are better.  They are less false along an axis that is subtle (at first) and hard to put one's finger on (at first), and which the practice of NVC, with all of its clumsy rules and forms, helps bring into focus.

And thus, even though I don't actively use NVC, I'm glad I did actively use it, long enough for the update to sink in.  

I think of NVC as being like a martial arts kata—a series of formalized dance steps that vaguely resemble the movements of combat. Practicing kata does not directly make one better at winning fights or defending oneself, and it might even be net negative if it instills false confidence.

(Analogously: NVC practitioners thinking that they can't possibly be engaged in violence, as long as they're in compliance with the rules of a system that has "nonviolent" right in the name!)

But practicing kata does help one to sink into and absorb a new vocabulary of movement that is utterly unlike the movements of walking or typing or driving a car.  It helps to reshape one's sense of balance and one's intuitions, to carve new neural pathways and ingrain new reflexes, and those things can indeed be subsequently recruited and reassembled and recombined in ways that help to win fights and defend oneself.

NVC is a similar kind of ... stepping stone?

... or incubator, maybe.

It isn't The Thing™, but it can help a certain kind of person find their way to The Thing™.  It (eventually) causes (some) people to not want to say sentences like "you betrayed me"...

...and furthermore the root of that hesitation is not (in my experience and in the experience of those I've talked to about this) because those sentences are in conflict with NVC, but rather because those sentences actually aren't (quite) true, in a way that NVC helped them learn to recognize and develop distaste for.  It's not that people are simply obeying the rules of a system; it's that practicing within the system has genuinely improved their ability to see.


Background II: Nate Soares on Jargon

The following is the lightly edited text of a tweetstorm from March 20, 2021.

Thread about a particular way in which jargon is great:

In my experience, conceptual clarity is often attained by a large number of minor viewpoint shifts.

(A compliment I once got from a research partner went something like "Nate, you just keep reframing the problem ever-so-slightly until the solution seems obvious." ❤️❤️)

Sometimes, a bunch of small shifts leave people talking a bit differently, because now they're thinking a bit differently. The old phrasings don't feel quite right—maybe they conflate distinct concepts, or rely implicitly on some bad assumption, etc.

(Coarse examples: folks who learn to think in probabilities might become awkward around definite statements of fact; people who get into NVC sometimes shift their language about thoughts and feelings. I claim that more subtle linguistic shifts regularly come hand-in-hand with good thinking.)

I suspect this phenomenon is one cause of jargon.  For example, when a rationalist says "my model of Alice wouldn't like that" instead of "I don't think Alice would like that," the non-standard phraseology is closely tracking a non-standard way they're thinking about Alice.

(Or, at least, I think this is true of me and of many of the folks I interact with daily. I suspect phraseology is contagious and that bystanders may pick up the alternate manner of speaking without picking up the alternate manner of thinking, etc.)

Of course, there are various other causes of jargon—e.g., it can arise from naturally-occurring shorthand in some specific context where that shorthand was useful, and then morph into a tribal signal, etc. etc.

As such, I'm ambivalent about jargon. On the one hand, I prefer my communities to be newcomer-friendly and inclusive. On the other hand, I often hear accusations of jargon as a kind of thought-policing.

"Stop using phrases that meticulously track uncommon distinctions you've made; we already have perfectly good phrases that ignore those distinctions, and your audience won't be able to tell the difference!"

No.

My internal language has a bunch of cool features that English lacks. I like these features, and speaking in a way that reflects them is part of the process of transmitting them.

Example: according to me, "my model of Alice wants chocolate" leaves Alice more space to disagree than "I think Alice wants chocolate," in part because the denial is "your model is wrong," rather than the more confrontational "you are wrong."

In fact, "you are wrong" is a type error in my internal tongue. My English-to-internal-tongue translator chokes when I try to run it on "you're wrong," and suggests (e.g.) "I disagree," or perhaps "you're wrong about whether I want chocolate."

"But everyone knows that 'you're wrong' has a silent '(about X)' parenthetical!" my straw conversational partner protests. 

I disagree. English makes it all too easy to represent confused thoughts like "maybe I'm bad."

If I were designing a language, I would not render it easy to assign properties like "correct" to a whole person—as opposed to, say, that person's map of some particular region of the territory.

The "my model of Alice"-style phrasing is part of a more general program of distinguishing people from their maps. I don't claim to do this perfectly, but I'm trying, and I appreciate others who are trying.

And this is a cool program! If you've tweaked your thoughts such that it's harder to confuse someone's correctness about a specific fact with their overall goodness, that's rad, and I'd love you to leak some of your techniques to me via a niche phraseology.

There are lots of analogous language improvements to be made, and every so often a community has built some into their weird phraseology, and it's wonderful. I would love to encounter a lot more jargon, in this sense.

(I sometimes marvel at the growth in expressive power of languages over time, and I suspect that that growth is often spurred by jargon in this sense. Ex: the etymology of "category.")

Another part of why I flinch at jargon-policing is a suspicion that if someone regularly renders thoughts that track a distinction into words that don't, it erodes the distinction in their own head. Maintaining distinctions that your spoken language lacks is difficult!

(This is a worry that arises in me when I imagine e.g. dropping my rationalist dialect.)

In sum, my internal dialect has drifted away from American English, and that suits me just fine, though your mileage may vary. I'll do my best to be newcomer-friendly and inclusive, but I'm unwilling to drop distinctions from my words just to avoid an odd turn of phrase.

Thank you for coming to my TED talk. Maybe one day I'll learn to cram an idea into a single tweet, but not today.
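
(To render the "type error" metaphor concrete: below is a minimal sketch, in Python, of the distinction as I read it, in which truth-values attach to maps rather than to persons.  All of the names and fields are invented for illustration; this is not anyone's actual formalism.)

```python
# Toy rendering of the "type error" metaphor: 'correct' is a property
# of maps (beliefs about some territory), never of persons.  All names
# here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Map:
    holder: str      # whose belief this is
    territory: str   # what the belief is about
    claim: str       # the belief itself
    correct: bool    # a map can be right or wrong about its territory

@dataclass
class Person:
    name: str        # deliberately no 'correct' field: persons aren't truth-apt

alice = Person("Alice")
nates_map = Map(holder="Nate",
                territory="Alice's dessert preferences",
                claim="Alice wants chocolate",
                correct=False)  # Alice's available denial: "your model is wrong"

print(nates_map.correct)  # fine: maps have truth-values
# print(alice.correct)    # AttributeError: "you are wrong" doesn't type-check
```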


The Obvious Objection: Get Out Of My Head

Occasionally, I run across one of those click-bait-y articles that goes something like "Thirty emotions humans don't have words for," followed by thirty made-up words like "sonder" or "etterath."

According to the above argument, I should probably find such lists useful and interesting, but in fact they tend to be simultaneously annoying and forgettable.

This personal experience seems to rhyme with some of the culture war stuff about e.g. "we don't need five hundred damn words for every little quirk of sexuality."

There's also the fact that most readers of this essay were likely forced, at some point, to brute-force memorize large amounts of vocabulary for e.g. language courses or science courses, the vast majority of which will have leaked right back out of their heads because it wasn't actually useful to them in the first place.

On the surface, these assorted issues would seem to present a strong case in favor of something like "new terminology is actually quite costly, and often useless," which is a view I've heard explicitly expressed more than a few times.

I think this critique is slightly off.  It seems to emerge from a sort of zero-sum or finite-resource mentality around the adoption of new terms, with the implicit assumption being something like "we only have so many word-slots in our brains," or "every bit of brain-currency spent on ingraining new terminology is a bit of brain-currency not spent on other, more useful things."

But it seems to me that there's a very big distinction between [words and phrases that just click, because they are an obvious, intuitive match for a concept that you've been trying to make explicit in your own head for a while now], and [things that you're effortfully forcing yourself to memorize and remember, despite the fact that they don't connect to anything you directly care about].

I claim that the former category is virtually costless in practice—that when new terminology tracks distinctions that you're actually making in your mind, it sticks very, very easily.  That what is actually costly and effortful and useless and non-sticky is terminology that doesn't matter to you, and that pushback to the latter category is often rounded off to "no more new words or concepts, please," when in fact the actual thing that people are fighting for is "no more new words or concepts that don't track anything I care about, please."

Uselessness, in other words, is relative.  The feeling of "ugh, what is this again?" is what it's like to encounter a concept you do not expect to need, and which is being forced upon you, somehow, by your work or your social context or whatever.

If you find a language request onerous, and it feels like a tedious burden, you are probably right, in the sense that this distinction you are being asked to track is not a thing that matters in your ontology.

But attacking the words and phrases themselves (or the process which is generating them) is a bucket error.  The actual problem lies with the dynamic that is forcing you to pretend to care.  The memetic ecosystem is an ecosystem, and not every meme will be fit for every niche—it's a mistake both for the occupants of one niche to try to force all of their concepts into every other niche, and for the defensive, put-upon occupants of a niche inundated with useless memes to try to delete those memes from the environment entirely, rather than simply from their immediate surroundings.

This is complicated by the facts that:

a) the people attempting to force new concepts into the lexicon sometimes wield substantial social power, and use it to punish non-adopters, such that you can't always just eschew terms you don't find worth their weight.

b) sometimes there is genuine memetic warfare going on, in the sense that the pushers of new terminology genuinely intend (whether consciously or subconsciously) to reshape the thoughts of the people whose speech they are trying to change.

For example, I had a friend who once made a request of the form "please use he/him pronouns in reference to me," only to discover later that what he had really wanted (without even making this explicit in his own mind) was for me to perceive him as a man.  When he discovered that I did not perceive him as a man, he was deeply hurt and upset, and it took a substantial chunk of introspection for him to untangle precisely why.

That friend did not hold it against me (in part because he recognized that he hadn't actually asked me to change my perceptions, and in part because he recognized that he shouldn't ask me to change my perceptions).  But there are other people out there who are less principled, some of whom are deliberately attempting to Trojan-horse new updates into the people around them via language-hacking.

Most of the people out there recommending "instead of X, say Y" are not, in fact, thinking X and then pausing to top-down effortfully translate X into Y before opening their mouths.  Perhaps they did that at the start of their new language habit, while getting the hang of it, but typically what's going on is that the language just straightforwardly reflects the underlying architecture.  Typically, the reason other people find (e.g.) using "I" statements so much easier than you find it is that they are actually for real doing the "I" statement thing in their heads, at a deep level.  It doesn't feel burdensome or performative to them, because it isn't—they're simply living in a world where the "I" statement feels true, and the other thing does not.  They're producing sentences with roughly the same amount of effortlessness and ease that you produce sentences with, insofar as there isn't some big layer of processing between [what they want to say] and [what they actually say].

But it's also the case that thoughts can shift in response to language usage, and if you want someone to actually start thinking in "I" statements, one of the most reliable ways is to just make them top-down use "I" statements in their speech for a while.

(This is fine if it's up-front and open—if the request is something like "hey, want to use language in a way that will change your perceptions and mental models?"  It's less fine when that's an unacknowledged, hoped-for side effect of an explicit request specifically shaped so as to appear innocuous and small.)

I don't know what to do about all that social stuff, besides sort of waving in its general direction and saying "shit's on fire, yo."

But separately, I think it's important to understand that, a lot of the time, when language requests are being made and accepted/rejected, there are disconnects where both sides are typical-minding.

Finley says that the language shift isn't a burden (because to them, it isn't), and Gale doesn't even consider the hypothesis that Finley is being sincere (because to Gale it's so obviously burdensome that there's no way Finley can deny it with a straight face), and it's real easy for both sides to lose track of what's actually happening.

Often, the right answer seems to me to be "Oh, okay, yeah, your brain isn't currently running an OS where this language shift is easy and makes sense.  Yeah, please don't 'force' it, maybe give me a chance some time to try to give you an update patch that will suddenly make this distinction feel real to you, but in the meantime, just ... keep saying what you really mean and don't fake-translate."

Another way to say this is, if it feels to you like I am asking you to self-censor or do some meaningless laborious translation ... I probably am actually not?  I'm probably trying to get you to change the way you actually think, and the language shift is one way to help bring about that transition.

Which may be a thing you don't want to do, of course!  In my culture, you're welcome to refuse such requests, because they are deeply intimate, and you are entirely within your rights to not let people inject code into your mental algorithms willy-nilly.

(I don't know what to say on behalf of Homo Sapiens, which is on the whole less sane and forgiving and will indeed sometimes try to inseminate you with new conceptual distinctions regardless of whether you want them, and punish you if you resist.)

Overall, though, it seems straightforwardly false to me that we are, in general, running out of mental space for new concepts (and labels for them), as the people objecting to new terminology often claim.  Humanity is insanely hungry for new conceptual fodder; we are constantly inventing both brand-new terms and brand-new meanings for old terms.

Two years ago, nobody had "Let's Go Brandon" in their lexicon, but now tens of millions of people do.  Three years ago "PPE" was a moderately niche technical term known mostly to the blue-collar working class, and now it's a household concept.  "Flossing" had one definition, then two, then three, and now who-knows-how-many.  

This is not causing most people problems, except when they are forced to absorb terms they don't want to absorb.  When I talk to less-nerdy friends about pastimes like gardening or kayaking or sports or whatever, they almost always have some new term or technique to share, some new distinction they previously hadn't made but whose addition to their vocabulary has opened up new possibilities for them[2].

For me, new terminology falls into one of three buckets:

  • Obviously useful; tends to be adopted by my brain via a nearly-automatic process
  • Obviously useless; tends to trickle right out without costing me anything
  • Intriguing or of uncertain value; flagged for potential effortful exploration (à la NVC)

...none of these buckets leave me feeling resentful of new words as they come in, which is an experience that a lot of people seem to have fairly regularly.  I think the key thing is that I simply do not expose myself to people who are going to punish me for maintaining critical oversight of my own conceptual boundaries; it's a truism that anyone who wants you to stop thinking isn't your friend, but it's equally true that anyone who insists that you think in exactly the way they've deemed proper is also not your friend.  Or at least, they don't see you as a friend, so much as a piece of clay to mold into a shape they find useful to their goals and priorities.  Yuck.


Sapir-Whorf for Rationalists

The Sapir-Whorf Hypothesis is a claim derived from the works of Edward Sapir and Benjamin Lee Whorf, who Wikipedia tells me never published anything together and did not think of their assertions as a hypothesis.

In short, the SWH states that the structure of a language determines (or at least influences) a speaker's perception and categorization of experience.

Sapir-Whorf reinterpreted for rationalists would go something like:

The way we go about expressing and presenting our thoughts influences the shape of those thoughts, and there are changes that we can make to our speech which at first will feel laborious or arbitrary, but which in the long term can cause our minds to fall into a configuration more conducive to clear thinking and clear communication.

Therefore, contexts in which people are trying to be more rational, and trying to coax rationality out of others, are also contexts in which it pays to enforce and adhere to clear and unambiguous norms of rational discourse.  In particular: it pays to choose norms of discourse such that things which are less true are less easy to say.

In practice, this doesn't mean inventing a bunch of new terminology so much as actually bothering to track fine (but commonplace) distinctions between near-synonymous phrasings, in a way that is already pretty natural for most people.

For instance, below I have five versions of the same claim, in a random order; I would wager that >50% of readers would agree on a ranking of those five sentences from weakest/most uncertain to strongest/most confident, and that if you allow for one line to be one slot out of place, agreement would jump up to 85+%:

"I claim that passe muraille is just a variant of tic-tac."

"Obviously, passe muraille is just a variant of tic-tac."

"It seems to me that passe muraille is just a variant of tic-tac."

"I might be missing something, but as far as I can tell, the most sensible way to think of passe muraille is as a variant of tic-tac."

"Passe muraille is just a variant of tic-tac."

There are a lot of reasons why people argue that the distinctions between these sentences shouldn't matter—

(Two of the more common ones being "I don't want to put in the effort to track it" and "It's useful for me to be able to equivocate between them, e.g. using verbal markers of confidence for emphasis rather than to express strength-of-justified-belief.")

—but if you're in a subculture whose explicit goal is clear thinking, clear communication, and collaborative truth-seeking, it seems pretty likely to me that you'll get further if you can sustain common-knowledge agreement that these sentences are, in fact, different. That they mean different things, in the sense that they convey different strengths-of-claim in practice, and that it's disingenuous to pretend otherwise, and counterproductive to "let" people use them interchangeably as if they were straightforwardly synonymous.

I often object to certain conversational moves, and occasionally that objection takes the form of me attempting to rewrite what my conversational partner said—trying to express what I think they genuinely believe, and meant to convey, without violating Duncan-norms in the way their original version did.

(Because usually there is indeed something in there that's expressible in Duncan-culture; it is an intentional feature of Duncan-culture that many more things may be prosocially expressed than in most enclaves of American culture.)

After doing this, though, I often get back a counterobjection of the form "why should I have to put in that much interpretive labor?" or "if I have to put in that much work, I'm just never going to say anything[3]" or "yeah, no, I'm not gonna arbitrarily swap out words to meet some opaque and inscrutable standard."

And there seem to me to be several things going on in that sort of response, most of which aren't appropriate to dive into here.  But there is one aspect of it that sticks out, which is that it seems to me that such people assume/believe that I'm doing something like applying a politeness filter after the fact, or tacking on empty catch phrases to appease the audience, or similar.

Which is just not what's happening, ever—at least, not in my head.  The differences between:

  • Harley is a liar
  • Harley is lying
  • It seems like Harley is lying
  • It seems to me like Harley is lying
  • I'm having a hard time understanding how Harley could be being honest, here

... are quite real, and quite salient in Duncan-culture.  Those are not sentences which would ever be mistaken for one another, and the differences between them are not cosmetic; they are crucial.

It makes sense to me that someone who does not see the distinction might find it meaningless, and think that it's performative, and might therefore feel some distaste and some resistance at the idea of being asked to pantomime it for purely social reasons.

But that person would be mistaken about what's actually being asked of them, at least so long as I'm the one doing the asking.

And yes, it's sometimes onerous to craft your speech with care and precision (or to be willing to go back and rephrase a clumsy first draft).  But that ... comes with the territory?  That is, there's a way in which you're either here for the goal of being less wrong, or you're not, and it shouldn't be super controversial to say "there's a minimum amount of effort and conformity required for participation," just like it's not controversial to insist that people play by the rules of soccer if they want to be on a soccer team.

Some people get this stuff wrong because they haven't learned the rules yet, and I think those people deserve guidance and help (some of which is available in the post Basics of Rationalist Discourse).  And some people get it wrong because they're not perfect, and they need more practice or they had a bad day.

But others seem to me to get it wrong because they are actively hostile to the concept of putting in more work to accomplish the very goal we're here to accomplish, and I'm much less sympathetic to those people.  It's one thing to reject pronoun requests out in the middle of a crowded supermarket; it's another thing to register a username on a transgender forum and then grumble about how hard it is to track everyone's preferred pronouns.


A restatement of the thesis from a different direction

As we convert our nonverbal observations, impressions, and reactions into verbal, explicit thoughts, and as we convert our verbal, explicit thoughts into external speech, we each follow the norms and habits typical of our own unique cultures.  There is, for each of us, a way that our thinking tends to go (and possibly a few ways, if we have a few salient and very-different contexts, e.g. "me at work" or "me while depressed").

These norms and habits do not only have forward-facing impact—they do not only shape the verbal thoughts as they emerge from the nonverbal, or the external speech as it emerges from the internal monologue.  They also "reach backward," in a sense, shaping our perceptions and the mental buckets into which we divide our experiences.  Norms of speech begin to influence one's private thoughts, and norms of private thoughts begin to influence one's preverbal processing—over time, it becomes easier to think in ways which match the modes of explicit expression that one regularly engages in.

(This is the power of cognitive behavioral therapy (using thoughts to shape psychological state) and the dynamic described above with NVC (using words to shape thoughts).)

It is possible, therefore, to (marginally) influence one's habits of mind by intervening on one's lexicon.  If one is having a hard time thinking more rationally by sheer force of will, one may have more luck conforming to a marginally-more-rational mode of speech, which will both force one to find better versions of one's own thoughts which are legal in the new mode, and also sensitize one to new conceptual distinctions that weren't present in the old mode.  

This will eventually propagate backward to a nonzero degree, just as sloppy/foggy/truth-agnostic speech also propagates backward, encouraging sloppy or foggy or truth-agnostic thinking.

Therefore, contexts in which people are trying to be more rational, and trying to coax rationality out of others, are also contexts in which it pays to enforce and adhere to clear and unambiguous norms of rational discourse.  In particular, it pays to choose norms of discourse such that things which are less true are less easy to say.


Conclusion

I've drifted a little from the generic "why new words at all?" and more into "what should LessWrong's norms be?", so to refocus:

New words and phrases are good and useful because they either:

  1. Track new conceptual distinctions, allowing us to preserve our ability to make those distinctions and communicate our thoughts around them
  2. Help guide us toward conceptual distinctions that are new to us, via language hacking

Both of these things are really super duper cool, and as such I think it's quite bad to mistake "I find the fiat imposition of new words costly" (true) for "the generation of new conceptual distinctions and verbal labels to track those distinctions is costly" (basically false).

A healthy culture should indeed not force people to use language they find meaningless or useless, and our culture is doing somewhat poorly on this axis (this often gets rounded off to "political correctness" but it crops up in more places than just that).

But a healthy culture should also do far less than our culture does in the way of offering blanket discouragement of the generation, dissemination, and adoption of new terminology. The problem is one of trying to have a single nonbranching norm that is least bad for everybody, rather than just building the largest possible pile of conceptual handles and letting memetic evolution take its course unhindered by frowning shoulds.

Or, to put it more bluntly: the words in the dictionary that you don't care about are not the problem; the problem is the people forcing you to memorize the ones you have no use for.

The end!


  1. ^

    This one was actually in the list of hopefully-to-be-written when I first began drafting this post, and got published before this one.  Hooray!

  2. ^

    My own vocabulary is absolutely waxing, year by year and sometimes even week by week; I set a five-minute timer to jot down new conceptual handles I've added to my lexicon in the past ten years and ran out of time long before I would have run out of words:

    Moloch
    Mirror-sword
    Dropped in
    Coferences
    Trigger
    Murphyjitsu
    IFS (Internal Family Systems)
    IDC (Internal Double Crux)
    ITT (Ideological Turing Test)
    TDT (Timeless Decision Theory)
    CEV (Coherent Extrapolated Volition)
    Shoulder advisor
    Corrigibility
    Sphexishness
    Bayesian update
    Felt sense
    Doom (as in doom circle)
    Secretary problem
    Convergent goal
    Representativeness
    Fundamental attribution error
    Bucket error
    Broccoli error
    Cartesian agency
    Embedded agency
    Existential risk
    Tail risk
    Black swan
    Right-tail distribution
    80/20
    Area-under-the-curve
    Commensurability
    Shoulds
    Scissor statements
    Play to your outs
    Lenticular design
    Timmy, Johnny, Spike, Melvin, Vorthos
    Newcomb's box problem
    Stag hunt
    Schelling point
    Chesterton's Fence
    Goodhart
    Kegan levels
    Subject-object shift
    Hamming problem
    Ketosis
    Paleo diet
    Ball-heel-ball
    Gymnophobia
    Satisficing
    Diachronic
    Episodic

    ...each of these is something I could easily give a talk or write a short essay on; each of these is something that I frequently use or reference in my own month-to-month life, if not day-to-day.  And I didn't even make it to the part of the brainstorm where I was vaguely anticipating talking about all sorts of memes and pop-culture references, of which I have certainly added hundreds and very likely thousands, in the past ten years.

  3. ^

    My knee-jerk uncharitable reaction to this sort of sentiment, which I include for the sake of candor even though it fails on several axes that are pretty important for cooperative discourse, goes something like "...you are aware that Eliezer wrote an essay every day for over a year, right? I mean, I get that most people can't and shouldn't try to hold themselves to that standard, but it seems like that shining standard should inspire some unusual effort in response. Like, if you're not going to specifically spend spoons on precision and clarity here—if you're just going to put forth the same amount of effort you put forth everywhere else—then ... don't be here?  It feels like you just walked into a martial arts dojo and said 'eh, doing all those kicks is too much work.'  If it's really actually the case that saying true and accurate things is too hard, and therefore your actual options are 'spout gunk' or 'say nothing,' I have a genuine preference for the latter, and a genuine preference about where the norms settle."

     

Comments

gjm:

I think this is mostly excellent, but one thing rings false to me.

When discussing costs of new words, you repeatedly say or assume something along the following lines: "The cost to me of introducing a new word is the inconvenience / cognitive burden / manipulation risk that I take on by using that new word". That's true if the question is "should I start using this new word?". But if this is meant to be relevant to your posting things on LW proposing new terminology, the actual question is something more like "should this community start using this new word?", and that brings in both new benefits and new costs.

You do explicitly acknowledge the benefits that come from a community, rather than an isolated individual, acquiring a new concept-word. But nothing in what you write acknowledges the costs:

  • If the rest of the community is using a new term, then I have to choose between getting it into my head and having trouble understanding what everyone else is saying.
  • If a community starts using a new term that isn't standard elsewhere, then someone encountering the community for the first time will (1) have a little bit more difficulty making sense of what everyone else is saying and (2) be a little bit more likely to think "bunch of weirdos with their own jargon, avoid".

These aren't pure costs. (If I find some neologism useless, and the people holding a particular discussion find it useful enough that I can't contribute to the discussion without getting my head around the neologism ... well, maybe that indicates that I'm not familiar enough with some relevant issues for my opinions to be valuable. Maybe newcomers who aren't willing to put in some effort to figure out the local jargon are less likely to be useful contributors. Maybe newcomers who would be put off by neologism-dense discussion are less likely to be useful contributors.) But they are costs, and not necessarily negligible ones.

(The following is pure handwaving unsupported by actual data: my impression is that highly neologistic communities are often unhealthy ones, perhaps partly because they put off other people who might criticize, perhaps because the neologisms actually have exclusion as a goal either consciously or not. I don't know whether this is a likely failure mode for the LW community, nor even whether it's actually a thing that happens much outside my imagination, but to me it feels like something worth worrying about a little bit.)

Yeah (strong upvote); this is why I was pretty darn apologetic in both the "sazen" and "cup-stacking skills" posts, and much less apologetic in the "setting the zero point" and "concentration of force" posts.

galaxy brain take that just fluttered through my mind and might be worth ignoring and/or not ignoring: what if every time you coin a new word/phrase, you had to get a new domain name for the word/phrase and make a website succinctly introducing people to the concept. see, eg, https://www.microsolidarity.cc/, for a website that I already use as "obvious-things-documentation website that defines a word clearly and also is a good word according to me"

That's a brute-force solution to a nuanced social problem.

Telling newcomers to go read a website every time they encounter a new bit of jargon isn't any more welcoming than telling them "go read the sequences".

Yup, this is a good summary of why I avoid jargon whenever I can in online discussions; and in IRL discussions, I make sure people know about it before using it.

Something people don't realize is that most of the exposure people get to an online community isn't from its outward-facing media, it's from random blog posts and from reading internal discussions community members had about their subjects of choice. You can get a lot of insight into a community by seeing what they talk about between themselves, and what everybody takes for granted in these discussions.

If those discussions are full of non-obvious jargon (especially hard-to-Google jargon) and everybody is reacting to the jargon as if it's normal and expected and replies with their own jargon, then the community is going to appear inaccessible and elitist.

It's an open question how much people should filter their speech for not appearing elitist to uncharitable outside readers; but then again, this OP did point out that you don't necessarily need to filter your speech, so much as change your ways of thinking such that elitist behavior doesn't come naturally to you.

I like the general message here, but I feel like the part about "look, if you don't want to practice, then don't be on lesswrong" is missing some important "outs".

Sometimes people have real disagreements about which things actually make sense to gloss in a given context.

Insofar as there's a real disagreement about whether a particular distinction is worth tracking in a particular context, there's room to say "look, I don't currently think that the skill of tracking the distinction you're trying to point to is actually that valuable as a part of a toolkit for effective cognition, so I'm not going to put in a bunch of translation effort just to practice it." And then the other party has an opportunity to argue why the skill of tracking this particular distinction IS relatively valuable. But the key is that that move is not itself, like, a defection against a community standard of practice.

Yep; agreed; this point is made explicit in the Basics post that I'm planning to put out tomorrow.

I like this essay and overall I agree with it. I especially like the parts about discourse norms, and the importance of phrasing distinctions like "X" vs. "I suspect that X". 

Next, I will talk about something other than the parts I like, using disproportionately many words.

When I ran into the subheading "The Obvious Objection: Get Out Of My Head", I expected to encounter a description of my own usual objection to your essays that introduce new words, which I think is pretty well captured by the phrase "get out of my head". But I did not encounter my own usual objection!

According to my reading, the main objection you discuss seems to suppose that the objector is concerned about running out of space in which to store distinctions and words. The main problem in this case is that many distinctions and concept handles are "useless".

I personally am concerned about a different limited capacity, which is (something like) my ability to perceive different distinctions whose corresponding swaths of territory overlap with the ones your distinctions track. 

Suppose you say, "'Doobockle' is this thing that's going on with how some people are wrong about their own preferences." I hear and understand you, and now I have the word "doobockle", and I also effortlessly perceive doobockle in the social landscape.

However, if I have never thought deeply before about how people relate to their own preferences, then at least in my case, I will now find it much, much harder to do so in a way that does not structurally presuppose "doobockle". I may be able to ask "yes doobockle, or no doobockle?" of my observations, pretty easily; but original seeing will be much harder: Forming an entirely new distinction out of my own assessment of my direct observations will cost me a lot more than it would have without "doobockle".

Overall, I think it is much better for people to be saying "Doobockle is X" when it seems useful to them, than to avoid using short words and phrases as handles for the distinctions they perceive. But I think there are probably some tools for sharing new concepts that invite relatively more original seeing from the reader (indeed, I think some of them are central to my own way of writing); and as someone whose cognitive architecture is particularly vulnerable to this phenomenon (or at least unusually sensitive to it), I would like to figure out what those approaches are, and I would like for the most conceptually generative among us (such as yourself) to use them.

I do not mean to say that protecting a reader's capacity for original seeing is solely the responsibility of an author. Not even close. Outside of spaces with names like "Less Wrong", it's probably not all that important, from a "what is good writing" standpoint, and in fact writing that prioritizes this tends to be a bit less effective along some other important axes, such as stickiness and brevity. And even within spaces with names like "Less Wrong", preserving my capacity for original seeing is mostly my job, as a reader, just as it's mostly my job not to adopt useless terms even if all the cool kids are doing it. 

Still, I think this is a reasonable objection to some presentations of new terms. When I get mad at your essays, it is usually because I want them to be better on this axis, so that I feel like I'm primarily cooperating with you as a reader, rather than primarily defending my own ability to think.

These have been disproportionately many words in critical response to something that I primarily like and agree with.

To bring up a specific instance of this kind of problem: that lw post on open/active curiosity absolutely devastated my ability to think about curiosity for no less than a month. Every time I'd prompt myself to think about curiosity, my thoughts would flow toward the "open/active" concept shapes; I didn't know how to stop it (and I very much wanted to stop it. I was frustrated, found the shapes to be misconfigured, a poor fit. I couldn't access my previous thought configurations on the topic, as they were temporarily overwritten).

The only defense I found in the end was to stop prompting myself on the topic; it took about a month for the shapes to fade, for the forgetting to naturally occur. (I've long thought of Forgetting as an important skill in research; the ability to let wrong shapes fade away.)

While we're on the topic, I'll note that Logan-concepts, rare as they are, are WAY more likely (than Duncan-concepts, for example) to transfigure or hijack my thinking shapes. I'm not sure what's up with that yet. 

(Something something, the compression level at which Duncan talks about things is not really the compression level at which I like to think, and so the concepts can't really make a home there and stick? Something to do with a scope mismatch? Not feeling confident on these guesses, though.)

I think I have historically probably written essays in a similar style to Duncan that I presume would cause a similar problem. I've updated more on this viewpoint of "invite more original seeing" being pretty valuable.

I do find it not-costless (i.e. ignoring how much extra time it takes to write, something feels missing when I explicitly focus on it). But I'm keeping it in mind more in my writing.

If you can put your finger on it, I'd love to hear what thing(s) feel like they're missing when you explicitly focus on inviting original seeing. Not because I'm surprised that something ends up missing (I'm not!), but because understanding this may help the rest of us to do it better. I'd also love to hear what exactly you do when you attempt to invite original seeing.

Not speaking for Ray, of course, but some of the things that occur to me:

  1. Some people are desperately seeking guidance/help/something to cling to; they've been trying to make sense of things and failing, and when someone is already struggling, being like "here's an opportunity for you to make it all up yourself!" can be extremely demoralizing. Especially when there really actually is a simple conceptual leap that they could've just been given, or when the process of helping them see for themselves is not well-formed/well-scaffolded (so it amounts to being invited to flail around and then also blamed for not having put the pieces together).
  2. There are a million different dog breeds, and dog breeds themselves aren't clear-cut categories but fuzzy borders around continuous variation; inviting people to all squint at dogs and see for themselves can break something pretty important.  
    1. There's the social aspect of being able to communicate with each other, which is worsened by everyone having their own slightly different categories for all of the dogs.
    2. There's the information-overwhelm aspect of not being able to abstract away similarities and see that "dog" or (e.g.) "husky" is actually a valid and useful cluster, because you're trying to process all the axes of variation simultaneously with no starting point.
    3. There's something important about usage; when I try to reify a new term it's usually related to usage and if I asked someone to do their own original dog categorization I am sort of robbing them of useful predictive information about, like, "people will try to breed this one with that one" that will be hard for them to independently derive.
    4. Oh, also, in some cases there's much more convergence than others/much less in the way of cool and interesting and useful stuff to be found in the original seeing; there are some places where original seeing is 100x less payoff-y?  Like, my shoulder Logan says something like "the vast majority of humans are extremely atrophied in their ability to see things originally; they should seize every possible opportunity to get in some practice" and I agree but nevertheless I don't think every opportunity is equally good.
  3. It takes so, so, so much longer, not just to write but also to consume, and I do indeed think that some forms of information learned this way are more deeply grokked and better integrated and better self-motivated but I think that one of the primary benefits of being part of a species that communicates with language is not having to reinvent every single wheel.  I think I genuinely believe that the ideal point for most people is something like "fifteen percent of the way Loganwards from where Duncan is" and not "living in the region I conceive of as occupied by Logan/Robin/Anna/Benya."  Like, I think that the individualized education plan for Pokémon of your type is something our culture should absolutely have but that if everybody was taught in that way this would be actually worse?
  4. Something like confirmation.  The ever-fretful lovechild of Neville and Hermione does not, at the end of a guided self-investigation, know that they know something?  Or at least, not a clumsily done one; I believe that at the end of your naturalism course people ACTUALLY know, and know that they know, something new they have seen and understood.  But like, the failure mode of an attempt to cause people to originally see seems much less graceful than the failure mode of providing someone with a new conceptual template, even given that people in general are really bad about conceptual templates and confuse themselves into thinking they're real.

This is great. Thank you.

(I also endorse all these)

I think it's mostly one major thing: the difference between "hey guys, this concept is ready to use, with an implication of 'Let's Take Some Actions relating to it'" vs "hey guys, here's some ideas, are they the right ideas? i dunno, maybe let's think some more?". Usually when I'm forming an idea (in particular relating to social interactions), it's because I think something is going wrong that warranted thinking about to figure out how to make it go right.

When I shift towards "enable more original seeing on the margin", I'm mostly giving up (either partly or in-entirety) on using the post as a coordination-rallying-point. 

I think it's basically always the case that a) my conception of a problem isn't actually fully accurate, b) even if it is, it's healthy for people to original-see it for themselves. But, also, I think actually solving coordination problems requires taking incremental steps that somewhat simplify things even if you or everyone else is still a bit confused in some way.

My model of you is mostly not trying to do the "rally people in a direction" much at all.

I do think there's lots of room for improvement at "provoke some kind of action while inviting more-original-seeing-on-the-margin." 

If I were starting from "primarily be teasing out an idea and inviting others to do so", I might try "write an intro and final paragraph that distills out my best guesses about what to make of the situation" (while leaving most of the post in exploratory mode). If I were starting from "primarily be rallying people", I'd ask myself if I'm actually deconfused enough myself for rallying to be reasonable, and regardless phrase the essay with some original-seeing-of-my-own, trying to think through concrete examples without my frame.

I think Noticing Frame Differences leans somewhat in the "teasing out idea" direction, and Recursive Middle Manager Hell leans in the "rally people" direction (Recursive Middle Manager Hell was itself a distillation of Zvi's Mazes sequence that I think pushed much harder against original seeing and instead just pushing a frame to rally people around).

>My model of you is mostly not trying to do the "rally people in a direction" much at all.

Perhaps a bit off topic, but: I think actual me in fact is trying to "rally people in a direction" to a substantial degree! 

Except that "rally" is perhaps not a very good word for my strategy. I'm not sure what to call it, but I think that I am very frequently trying to go, "LOOK AT THIS THING. Think about this thing. If I just tell you about it and argue for it you will not have looked at it, and I want you to look at it. I have looked at it and thought about it and I think it's important, and I think it's important that all of us look at it and think about it, because understanding it correctly may be critical to our collective success." And my overarching project is perhaps something like "Cause people to be able to look at things."

Nod, makes sense. But your project is fundamentally about the looking-at-things, whereas mine is more like "let's look at things when that's useful" but it's not the primary goal.

Harumph, I also say "let's look at things when that's useful" and I do not consider looking at things to be the primary goal. "Being able to look at things" is an instrumental goal toward "looking at things that it's useful to look at", and "figuring out which things it's useful to look at" is a really big part of "being good at looking at things" [see forthcoming essay "Locating Fulcrum Experiences"]. (My harumph is frustration at my own failure to communicate, not at you.)

Like on various occasions I've [something other than rallied] people to look at groundedness, at courage, at dreams, at memory, at learning, at defensiveness, at boredom, at each particular CFAR unit, and at a bunch of other things besides looking itself. Looking itself is only central to my overarching project, because I think it's a really really important piece of rationality that's poorly developed in the communal art.

Yeah I agree you have done something-other-than-rallied that results in people looking at the things.

And yeah makes sense that my phrasing didn't feel like an accurate description of what you said. (I think there is something-my-phrasing-was-meaning-to-point at that is probably a true distinction, but, not sure I can get any closer to it easily)

I also claim that I've been fairly successful thus far, at "[something other than rally] people in a direction", in many cases. Perhaps what I am attempting to communicate in my partial-criticism of this essay is "boo rallying, find better ways to [something] people in directions".

But maybe not. I can't actually belief-report "boo rallying".

Note: I did almost no original seeing while writing the previous comment. If I had wanted to, I'd have started with "hmm, what things have I written, what did they actually appear to do? What was it like to write them?", which I find more effortful than "write my cached headline I remember and then write some stuff to expound on the details", which is what I actually did. (I am more aware of the difference thanks to you, fwiw.)

The first three core claims reminded me of an argument that I once saw, which claimed that one of the primary functions of philosophy is to make conceptual distinctions (I don't know if I necessarily agree with that argument, but I thought it was interesting anyway):

> Philosophy may be viewed as a science, on the one hand, or as an art, on the other. Philosophy is, indeed, uniquely difficult to classify, and resembles both the arts and the sciences.
>
> On the one hand, philosophy seems to be like a science in that the philosopher is in pursuit of truth. Discoveries, it seems, are made in philosophy, and so the philosopher like the scientist has the excitement of belonging to an ongoing, cooperative, cumulative intellectual venture. If so, the philosopher must be familiar with current writing, and keep abreast of the state of the art. On this view, we twenty-first-century philosophers have an advantage over earlier practitioners of the discipline. We stand, no doubt, on the shoulders of other and greater philosophers, but we do stand above them. We have superannuated Plato and Kant.
>
> On the other hand, in the arts, classic works do not date. If we want to learn physics or chemistry, as opposed to their history, we do not nowadays read Newton or Faraday. But we read the literature of Homer and Shakespeare not merely to learn about the quaint things that passed through people’s minds in far-off days of long ago. Surely, it may well be argued, the same is true of philosophy. It is not merely in a spirit of antiquarian curiosity that we read Aristotle today. Philosophy is essentially the work of individual genius, and Kant does not supersede Plato any more than Shakespeare supersedes Homer.
>
> There is truth in each of these accounts, but neither is wholly true and neither contains the whole truth. Philosophy is not a science, and there is no state of the art in philosophy. Philosophy is not a matter of expanding knowledge, of acquiring new truths about the world; the philosopher is not in possession of information that is denied to others. Philosophy is not a matter of knowledge; it is a matter of understanding, that is to say, of organizing what is known. [...]
>
> The most visible form of philosophical progress is progress in philosophical analysis. Philosophy does not progress by making regular additions to a quantum of information; as has been said, what philosophy offers is not information but understanding. Contemporary philosophers, of course, know some things that the greatest philosophers of the past did not know; but the things they know are not philosophical matters but the truths that have been discovered by the sciences begotten of philosophy. But there are also some things that philosophers of the present day understand that even the greatest philosophers of earlier generations failed to understand. For instance, philosophers clarify language by distinguishing between different senses of words; and, once a distinction has been made, future philosophers have to take account of it in their deliberations.
>
> Take, as an example, the issue of free will. At a certain point in the history of philosophy a distinction was made between two kinds of human freedom: liberty of indifference (ability to do otherwise) and liberty of spontaneity (ability to do what you want). Once this distinction has been made the question ‘Do human beings enjoy freedom of the will?’ has to be answered in a way that takes account of the distinction. Even someone who believes that the two kinds of liberty in fact coincide has to provide arguments to show this; he cannot simply ignore the distinction and hope to be taken seriously on the topic.

-- Anthony Kenny: A New History of Western Philosophy

I would wager that >50% of readers would agree on a ranking of those five sentences from weakest/most uncertain to strongest/most confident.

I notice that I rank two of these statements differently depending on a subtle distinction in what exactly is being ranked.

Between

"I claim that passe muraille is just a variant of tic-tac."

"Passe muraille is just a variant of tic-tac."

The first one indicates the speaker is consciously choosing to plant a flag that they intend to defend.  The second one suggests the speaker believes no defense will be necessary.

"Believing no defense will be necessary" seems like stronger subjective confidence from the speaker, but "I claim" means they're wagering some reputation on being correct; it's "stronger" in the sense of being harder to walk back.

(Though there's actually a bunch of possible reasons for believing no defense will be necessary, including reasons related to your audience or the format of the conversation that have nothing to do with how well-justified the thing is.)

This exact ambiguity is what caused me to soften the wager down so low and add the hedge in the next clause. =)

Curated.

I'm not sure I agree with everything in this post. But, I think "fine distinctions are a key part of rationality and worth prioritizing"* is at least a hypothesis I take pretty seriously. And I know not everyone is on board with it (there are plenty of complaints about too much jargon), so having a fairly comprehensive writeup and discussion of it seems valuable.

There are a number of comments that push back slightly on, or expand on, the post, which I quite like (gjm, Logan, and John).

I may wade more into the comments with some disagreements and/or distinctions if I have time this week.

*I don't know that this is a fair summary of the post exactly

I haven't read this post super deeply yet, but obviously this is one of those excellent posts that's going to become a Schelling point for various semi-related gripes after a mere token skim, even though most of them have been anticipated already in the post!

Some of those gripes are:
- Near enemies: Once a term for a phenomenon is entrenched in a community, it's a lot a lot a lot of work to name anything that's close to it but not quite it. (See, for example, "goodhart" for what is IMO a very diverse and subtle cluster of clumsiness in holding onto intentionality.) Escaping near enemies is difficult in any community, but especially difficult in this community because there tend to be mathematical/formalist intuitions of "well I have Abstraction that captures The Other Thing as a special case", which is often true. But it is also often "false" in the sense of...
- Sneaking in Connotations: I'm sad ADBOC hasn't become as CK as it could, especially for checking the unconscious tools that receive salience. Usually, if you don't believe in "vibes" and the relevance of connotations, you tend to only do some denotational pattern-matching to check whether a concept applies. But then it sneaks in burdensome frames and stances to bring to bear anyway. Sure, basketball ultimately, technically, happens in physics. I'm not sure that looking for the relevant differential equations (which is what you often do in physics) is the best frame to bring to improve your shot! Further, connotations also very often carry information that just doesn't make it into your cleaned-up abstraction...
- Stickiness: ...and then it sticks. This is a feature, of course, as you name in point five: reification. You've made plenty of remarks about how the kata is useful even though it is not The Thing. That's great. But the fact of stickiness also implies that the underlying frame needs to be the relevant one for contexts where it is cued in people's minds, because it's now in some kind of vaguely zero-sum game with other frames. Now there are transaction costs in switching to the other frame, taxing other perspectives. This is not even beginning to touch, though, the real taxing involved in...
- Reification bias: There's a spectrum of errors that begins with taking "you only understand it if you can program it" too literally, forcefully narrowing all the kinds of clarity available to human brains and bodies. It's a short step from there to "it's only real if you can formalize it" and "it's only real if it can be made explicit" and "it's only real if it can be reified/sustained in some way", the last of which a lot of non-dual practices attempt to disabuse humans of. It's not like this is extremely difficult to recognize (although it can be for certain dispositions), but it is especially immune to being noticed if you believe in...
- Solving everything with more explicitness: I love how a lot of double binds are dissolved in this community by kind and skillful naming of things. This is greatly underpumped nearly everywhere else. But I think it's silly to automatically assume it is a safe defense for any issue. It isn't impossible that certain situations might benefit from limiting explicitness rather than unboundedly including ever more meta accounting.


In sum: The detailed models that frames and handles come with are great, but burdensome. It's probably an oversight not to account for the transaction costs of switching to (or even instantiating, if you think your handle is general) other frames when you camp around one frame, costs roughly multiplied by how burdensome the current frame is.

In a way, I would have liked "Luhmann for Rationalists" better, but few people know Niklas Luhmann these days. I think few people have analyzed how differences lead to systems better and more comprehensively than he did. He is difficult to read because he had to invent a lot of differences while at the same time explaining them in the language of social science. In a way he approaches the grounding problem from the social side.

I think this is maybe a good starting point: System as Difference

Coming up with novel conceptual distinctions seems indeed valuable. This made me notice that there is a different but related activity, one which many philosophers promote: engaging in conceptual analysis. Conceptual analysis roughly works by trying to find definitions for terms in everyday language which preserve their original meaning. Mostly the terms are general terms like knowledge, probability, meaning, causation, rationality, justification, explanation, property, identity, truth, event, etc.

Often that works by someone proposing a definition D for a term T, and others coming up with thought experiments which serve as counterexamples: either one where something satisfies D but intuitively not T, or one where something satisfies T but not D. The former shows that D is not sufficient for T, the latter that D is not necessary for T.
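
(To make the necessity/sufficiency logic concrete, here's a minimal sketch, with predicates of my own invention rather than anything from the literature: treat D and T as checks over described cases, and let a single counterexample settle the matter in each direction.)

```python
# Toy model of counterexample-driven conceptual analysis.
# D is a candidate definition ("knowledge is justified true belief");
# T is the intuitive verdict about whether a case counts as knowledge.
# Both predicates are illustrative stand-ins, not a serious analysis.

def D(case):
    return case["justified"] and case["true"] and case["believed"]

def T(case):
    return case["intuitively_knowledge"]

# A Gettier-style case: it satisfies the definition, yet intuition says
# it isn't knowledge -- one such case shows D is not sufficient for T.
gettier = {"justified": True, "true": True, "believed": True,
           "intuitively_knowledge": False}
assert D(gettier) and not T(gettier)

# Conversely, a case that is intuitively knowledge but fails the
# definition would show D is not necessary for T.
```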

Now I wonder, do you think the above is a useful activity? It seems to contribute to "clearer thinking" by discovering logical relations between concepts, e.g. by noticing that knowledge implies truth and justification. Or is it just some form of fairly pointless game, compared to finding new distinctions? I think the two are closely related in spirit, but I'm not quite sure in which way.

I think it's also for interpersonal synchronization of the meaning of concepts, to avoid misunderstandings or conflicts. A social purpose, rather than a personal one.

I think it's more like sifting sand for gold and other minerals.

The process of exhaustively looking for all of the conceptual distinctions we can find, in some swath of the territory, is one way (among many) to make sure that we don't miss any of the ones that are actually useful.

"I claim that passe muraille is just a variant of tic-tac."

Well that's a big leap.

David Belle and I had a bit of a disagreement about it, in person. =P

Casus Belli:

A casus belli is a justification for declaring war, right? Does using that term here mean that you are viewing this post as an act of war against someone? I'm confused.

Tongue-in-cheek way of saying "reason for writing post."

I feel confused. I have been exactly betrayed, and I have betrayed others; it's when someone promises something and then doesn't do it. Were I to complain about being betrayed, I would not speak about anger or hurt; in "normal" speech that would mean that I have stopped demanding the actual promised thing itself, and the other person now has a right to "ok, we can talk when you are less upset" or some other grown-up answer. After all, anger is a passing thing, isn't it? It creates no obligations.

(But I also don't understand why "betrayed" and "manipulated" are not allowed, and "hurt" and "used" are.)

("Used" would also not be allowed; the person was just going too fast to be corrected.)

The reason NVC claims that "betrayed" and "manipulated" and "used" are not feelings, whereas "hurt" is, is that the former three all include models and beliefs about the world, while the latter does not.

NVC builds a very high wall between "the feeling you're feeling" and "the story you're telling about why you feel that way."

I agree it causes problems, if one person is using NVC and the other one isn't/doesn't understand how it works or why it's being done.

Ah, now it's clear. Thank you. But how then would Y relate his model of the world to X, after X lost (or "lost") Y's research samples and never said a thing until Y tried to find them in the fridge? Using NVC, I mean. (I myself would not speak to X at all.)

They would separately say "I feel hurt and angry" and "I believe that you lied."

The first few times I read LW articles, and especially those by Eliezer, it was common for me to think that I simply wasn't smart enough to follow their lines of argumentation. It's precisely missing these buckets and handles, these modes of thought and the expressions/words used to communicate them, that made it hard at the start; as I acquired them, I could feel I belonged to the community. I suppose this happens to all newbies, and it's understandable to feel this helplessness and inability to contribute for as long as you haven't acquired the requisite material. High standards need high barriers to entry.

At the same time, however, I think the use of plenty of unknown-to-me terminology, all linking and referring to other posts, could as easily have put me off as made me curious to explore this new world. And it has probably repelled many potentially very capable contributors, who didn't lack the desire or talent, but simply experienced a discouraging introduction (and were thereby kept out of all the advantages of this new camp and perspective).

Perhaps only a sufficient promotion/display of the benefits and value of this community can outweigh the costs of starting for an outsider (as long as we're unwilling to reduce those costs; I think there are good reasons to keep them high). One way to increase the perceived benefits could be, for example, to ensure that other sites which link to LW, and which might be a source of newcomers, sufficiently introduce and promote the benefits of LW and this rationalist community, rather than merely linking to this seemingly highly entangled and exclusive/elitist website. But I feel this would still be insufficient to properly adjust this filter of entry to LW; a more extensive list of measures would be better.

On the subject of jargon, there's one piece of jargon that I've long found troubling on LW, and that's the reference to 'tech' (for mental techniques/tools/psycho-technologies), which I've seen Duncan use a few times IIRC.

A few issues:
1. It's exactly the same usage as the word 'tech' in the fake scifi 'religion' that must not be named (lest you summon its demons to the forum through the Google portal). They do exercises to give them new mental tools, based on reading the lengthy writings of their founder on how to think, and those lessons/materials/techniques are always referred to as 'tech.' This doesn't automatically make our usage of it bad, but it's probably smart to avoid so closely mirroring their usage imo. 

2. Using the word 'tech' doesn't shine much light. I'm aware of the concept of 'exaptation' and that things external to the mind can be integrated into the mind much the way that a craftsman stops seeing the hammer as separate from his hand. Still, it doesn't seem very useful to blur the distinction between mental techniques and reasoning strategies we can learn and internalise by reading blog posts, and literal technology we might use to augment or supplement our thinking abilities.

To be clear, I'm blurring lines even further than you describe above; I use "tech" for all three of the following categories:

  • Actual technological development, such as LLMs or new kinds of steel
  • Mental technologies like TAPs and Gendlin's Focusing
  • Social tech like duels (obsoleted, but superior to their predecessors) and "I statements"

Perhaps separate nomenclature can be used for each meaning?

Such as:

'atech'
'mtech'
'stech'

etc.

"Language is a virus from outer-space."

William S. Burroughs

I don't think even the people who feel it's justified to change the way you think are trying to use you as a tool for their goals and priorities; I don't think it's that Machiavellian. They push the terminology they do because it's natural for them, because they truly believe their terminology most accurately describes reality.

When you reject their terminology, they don't see it as you saying 'my brain isn't currently running an OS where this language shift is easy and makes sense, and I get to have more of a say on what OS my brain runs on than you do'. They see it as you saying 'that concept that you're describing with your jargon DOESN'T EXIST. It's straight up NOT A THING in REALITY.' Whereas it's very much real to them. To them, it's like you're about to step on a cat, and they're trying to point out there's a cat in front of you, but you're like 'oh, that fluffy thing on the ground? We already have a name for those - mats' and proceed to step on it.

To me, a better answer seems to be:


- (for when someone is trying to push jargon onto you)

'Look, my brain isn't currently running an OS where this language shift is easy and makes sense. I'm not saying this concept doesn't describe anything that exists in reality; you've probably adopted this concept because it does describe something and you've seen it, but it's not something I feel the need to care about; and no one can have a vocabulary fine-grained enough to describe every nuance in reality when we don't care about the majority of them. It will be costly for me to adopt your concepts, and it's ultimately up to me whether I choose to.'


- (for when you're trying to get someone to adopt your jargon)

'Oh, okay, yeah, your brain isn't currently running an OS where this language shift is easy and makes sense. I don't blame you for not immediately seeing it, but I want to give you an update patch that will suddenly make this distinction feel real to you, because to me it's very real and very important, and it makes me feel gaslit to interact with people who think I'm crazy for caring about this distinction and thinking it's a thing. It's your choice, of course; I won't hold it against you if you refuse this update patch; I would just maybe not interact with you as much anymore, purely for my own comfort and not because I think you're a bad person who isn't worth interacting with.'

Is this related to the OP? What point do you think the OP is making and how is this relevant?

> What point do you think the OP is making?

Jargon is useful and natural when it describes distinctions we're already familiar with, but is cumbersome when it describes distinctions that feel meaningless to us.

As a 'subpoint', I felt like the OP was implying that one has no reason to be upset with other people for resisting one's attempt to push jargon onto them, unless one is trying to hijack their mental software to use them as a tool for one's own goals and priorities. I could be wrong about OP having this view ...

> how is this relevant?

... but just in case, I wanted to mention a case where one might be upset with other people for resisting one's attempt to push jargon onto them, that has nothing to do with trying to control or manipulate them.

(Was that too small of a point in the OP to be worth addressing? I'm new here XD)