5th May 2013


When a group of people talk to each other a lot, they develop terms that they can use in place of larger concepts. This makes it easier to talk to people inside the group, but then it's harder to talk about the same ideas with people outside the group. If we were smart enough to keep up fully independent vocabularies where we would always use the right words for the people we were talking to, this wouldn't be an issue. But instead we get in the habit of saying weird words, and then when we want to talk to people who don't know those words we either struggle to find words they know or waste a lot of time introducing words. Especially when the group jargon term offers only a minor advantage over the non-jargon phrasing, I think this is a bad tradeoff if you also want to speak to people outside the group.

Recently I've been working on using as little jargon as possible. Pushing myself to speak conventionally, even when among people who would understand weird terms a little faster, can be frustrating, but I think I'm also getting better at it.

 

I also posted this on my blog

29 comments

Coining new jargon words (neologisms) is an alternative to formulating unusually precise meanings of commonly-heard words when one needs to convey a specific meaning.

A Marxist may use the term "surplus value" to specifically mean the difference between a worker's productivity and wage. If they say "surplus value" to someone who does not recognize this specific meaning, that person may think the Marxist means "surplus" in the sense of "unnecessary excess". They may think the Marxist means that the worker's productivity is wasted, and respond accordingly. This may baffle the Marxist, who will point out that "surplus value" (in their sense) doesn't have much to do with "overproduction" (another word that has a specific meaning in Marxist economics).

Using neologisms has the advantage that it conveys readily, to someone unfamiliar with the neologism, that they are unfamiliar with it and need to ask for clarification. Using existing words with unusually precise meanings runs the risk of letting someone go past a misunderstood word without realizing that they are doing so.

I agree that the distinction between neologisms and overloading existing words is an important one (and your examples are good!) - but I think the ordinary understanding of "jargon" covers both.

If someone announces "I'm going to stop using jargon!", and goes on to say things like "steel man", "shut up and multiply", "dark arts", then most people will agree he failed. The list of LessWrong Jargon contains plenty of non-neologisms like that.

Neologisms are a bit more obvious, but even the distinction between somewhat rare words (like "neologism") and specialized jargon (like "overloading") is pretty fuzzy.

But no-one's going to assume that “steel man” refers to a man made of steel, or “dark arts” to arts of a dark colour, so they do qualify as neologisms in fubarobfusco's sense. (OTOH, I do seem to recall someone on LW or OB who had assumed that “shut up and multiply” was an exhortation to have lots of children, and went WTF.)

A lot of the LW sense of "dark arts" could be found in the mainstream expression "dirty tricks", which is slightly more general but not much: "cognitive dirty tricks" would be pretty clear. A significant part of both terms' meaning is that using the techniques so named is unethical or unfair on account of being manipulative of others.

(OTOH, I do seem to recall someone on LW or OB who had assumed that “shut up and multiply” was an exhortation to have lots of children, and went WTF.)

I don't recall this incident, but if a newcomer came across an evolutionary psychology discussion and saw that expression, that would be the obvious interpretation!

The list of LessWrong Jargon contains plenty of non-neologisms like that.

ADBOC?

I don't think jkaufman meant we should use familiar-sounding words with unfamiliar overly precise meanings, but rather that we shouldn't get in the habit of using unfamiliar overly precise concepts even when we don't really need to (“unfamiliar” here meaning ‘unfamiliar to most audiences’, not ‘unfamiliar to the speaker’, of course).

Part of being an effective communicator is optimizing what you say for your audience. You shouldn't take pride in not trying to do this. Train your brain to make optimal use of jargon given your audience, not to minimize your use of jargon.

New college professors often have trouble teaching "down" to the level of their students, but the solution for them is not to lower the complexity of their conversations with everyone, but rather to train their brains to respond differently when talking to students as opposed to colleagues.

This seems nonresponsive to jkaufman's stated reason for trying to minimize jargon instead of using it optimally, namely this:

If we were smart enough to keep up fully independent vocabularies where we would always use the right words for the people we were talking to, this wouldn't be an issue. But instead we get in the habit of saying weird words, and then when we want to talk to people who don't know those words we either struggle to find words they know or waste a lot of time introducing words.

I agree with you that it's useful to optimize communication strategies for your audience. However, I don't think that always results in using shared jargon. Deliberately avoiding jargon can presumably provide new perspectives, or clarify issues and definitions in much the way that a rationalist taboo would.

But good jargon reduces the time it takes to communicate ideas and so allows for more time to gain new perspectives.

Unless the jargon perpetuates a false dichotomy, or otherwise obscures relevant content. In politics, those who think in terms of a black-and-white distinction between liberal and conservative may have a hard time understanding positions that fall in the middle (or defy the spectrum altogether). Or, on LessWrong, people often employ social-status-based explanations. We all have the jargon for that, so it's easy to think about and communicate, but focusing on status-motivations obscures people's other motivations.

(I was going to explain this in terms of dimensionality reduction, but then I thought better of using potentially-obscure machine learning jargon. =) )

Maybe this should be in the Open Thread?

Nonetheless, I feel that if you can't explain it without using jargon, that gives some evidence for you not understanding it in the first place (whatever it is).

I feel that if you can't explain it without using jargon, that gives some evidence for you not understanding it in the first place (whatever it is).

Eh. Maybe if you're using it in a guessing-the-teacher's-password kind of way, but sometimes you need jargon because you need to say something very precise (e.g. in mathematics).

This is very related to something my friend pointed out a couple weeks ago. Jargon doesn't just make us less able to communicate with people from outside groups - it makes us less willing to communicate with them.

As truth-seeking rationalists, we should be interested in communicating with people who make good arguments, consider points carefully, etc. But I think we often judge someone's rationality based on jargon instead of the content of their message. If someone uses a lot of LessWrong jargon, it gives a prior that they are rational, which may bias us in favor of their arguments. If someone doesn't use any LW jargon (or worse, uses jargon from some other unrelated community), then it might give a prior that they're irrational, or won't have acquired the background concepts necessary for rational discussion. Then we'll be biased against their arguments. This contributes to LW becoming a filter bubble.

I think this is a very important bias to combat. Shared jargon reflects a shared conceptual system, and our conceptual systems constrain the sort of ideas that we can come up with. One of the best ways to get new ideas is to try understanding a different worldview, with a different collection of concepts and jargon. That worldview might be full of incorrect ideas, but it still broadens the range of ideas you can think about.

So, thanks for this post. =) I hope you will discuss the results of your attempt to speak without jargon.

That's not what prior means. You mean evidence.

Hmm, you're probably right. I guess I was thinking that quick heuristics (vocabulary choice, spelling ability, etc.) form a prior when you are evaluating the actual quality of the argument based on its contents, but evidence might be a better word.

Where is the line drawn between evidence and prior? If I'm evaluating a person's argument, and I know that he's made bad arguments in the past, is that knowledge prior or evidence?

Where that goes depends on whether you're evaluating "He's right" or "This argument is right".
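
One way to make the prior/evidence split in this exchange concrete is the standard Bayes'-rule decomposition; the hypothesis and evidence below are just stand-ins, not anything specific to this thread:

$$
P(\text{argument correct} \mid \text{uses LW jargon}) \;\propto\; \underbrace{P(\text{uses LW jargon} \mid \text{argument correct})}_{\text{the jargon enters as evidence, via the likelihood}} \;\times\; \underbrace{P(\text{argument correct})}_{\text{prior, before reading the comment}}
$$

On this reading, knowledge of someone's past bad arguments sits in the prior if the hypothesis is "this argument is right", but acts as evidence if the hypothesis is "he is right", which is the distinction the comment above is drawing.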

Examples? I'm not sure I understand what sort of jargon you particularly want to cut. The kind of jargon I use in my day-to-day work (mathematics) is more or less indispensable.

In a lesswrong context this would be avoiding saying things like "that's inconsistent with my model of you", "I'll need to update on that", or "that charity is nearly donkey-sanctuary in its fuzzies-to-utilons ratio".

I've found "machine problem-solving" goes over with laypeople better than "machine intelligence."

The former suggests narrow intelligence to me, whereas the latter is more neutral (whereas “artificial intelligence” suggests general intelligence to me).

I cringe a little every time I see someone here write, "Suppose Omega told you X," when, "Suppose X," works just as well.

I see your point, but the phrasing you object to tends to reduce how often responders fight the hypothetical. At least, in theory.

Jargon and terms of art have their uses and abuses. Clearly, it's very handy to have short references to complex concepts to communicate more information faster.

Unfortunately, it's also a tool of control and status, used to exclude and pretend.

For me, I tend to hate jargon. Only so many concepts fit in my head at once. That's one of the reasons for my preference for Jaynes. What is the probability of X? That works much better for me than the endless sea of special purpose names and concepts in conventional statistics.

Slightly side-tracked:

I had several objections to this, and then did some standard debiasing and came up with an obvious-in-retrospect solution within less than five minutes, here to anchor your judgment for my dark-artsy pleasures! Unfortunately, the original idea I had relies on technologies and their widespread use that are barely being hinted at by obscure high-tech lab projects like Steve Mann's EyeTap and Google's Glass, so here's the toned-down version that at least fixes some of the issues for internet discussion boards. [1]

The obvious solution I mention is to write a (browser) script that maintains a database of jargon terms or keywords or unconventional definitions or abstract concepts, along with links to places that explain them, and an easy and convenient way to add new jargon to it. This script would unobtrusively (based on my scripting experience, this "unobtrusive" part and the "easy to add new jargon" are probably the two tallest orders and most difficult parts of such a project) suggest linking / referencing (or perhaps also allow for one-click substitution / inserting an explanation of the concept) whenever it detects the keywords.

The base concept would be that it work like the automatic spell-checking dictionaries (e.g. the one integrated in most versions of Firefox), but instead of suggesting corrections to common words, it would suggest links and references for specialized jargon and obscure terms.
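
For concreteness, here is a minimal TypeScript sketch of the core of that idea: a small in-memory glossary standing in for the database, and a function that flags glossary terms found in a draft and suggests references. The terms, example.com links, and function names are illustrative placeholders, not part of any existing tool.

```typescript
// Hypothetical sketch of the jargon-suggestion script described above.
// Glossary contents and link targets are placeholders, not a real database.

type GlossaryEntry = {
  term: string; // jargon term or keyword to watch for
  link: string; // where a reader can find an explanation
};

// A tiny in-memory "database"; a real script would persist this
// (e.g. in browser storage) and offer a quick way to add entries.
const glossary: GlossaryEntry[] = [
  { term: "steel man", link: "https://example.com/steel-man" },
  { term: "shut up and multiply", link: "https://example.com/shut-up-and-multiply" },
  { term: "utilons", link: "https://example.com/utilons" },
];

// Scan a block of text and return, for each glossary term found,
// a suggested reference the writer could link or footnote.
function suggestReferences(text: string, entries: GlossaryEntry[]): GlossaryEntry[] {
  const lowered = text.toLowerCase();
  return entries.filter((entry) => lowered.includes(entry.term.toLowerCase()));
}

// Example: check a draft comment before posting.
const draft = "That charity is nearly donkey-sanctuary in its fuzzies to utilons ratio.";
for (const suggestion of suggestReferences(draft, glossary)) {
  console.log(`"${suggestion.term}" may be jargon to some readers; consider linking ${suggestion.link}`);
}
```

The hard parts mentioned above (unobtrusive suggestions and easy glossary-adding) are exactly what this sketch leaves out; a real browser version would also need to walk the page's text nodes rather than a single string.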

1: (The original idea I had involved automatic personal databases and inter-device communication that compared those databases and offered automatic substitutions or transmitted link references when there were mismatches between two users' data, so that you'd just keep on using jargon, and terms that you understand differently from your audience would be automatically (or by suggestion) adjusted for possible misunderstandings or have explanatory notes / links to references attached to them, even during normal in-person conversation. I take my living-in-the-future ideals very seriously.)

How about discussing the jargon piece by piece? Some words could perhaps be replaced by already existing words with the same meaning but larger audience. Other words could remain if we feel they add enough value.

This seems like a great way to keep from accidentally alienating new members to a group.