The following once happened: I posted a link to some article on an IRC channel. A friend of mine read the article in question and brought up several criticisms. I felt that her criticisms were mostly correct though not very serious, so I indicated agreement with them.

Later on the same link was posted again. My friend commented something along the lines of "that was already posted before, we discussed this with Kaj and we found that the article was complete rubbish". I was surprised - I had thought that I had only agreed to some minor criticisms that didn't affect the main point of the article. But my friend had clearly thought that the criticisms were decisive and had made the article impossible to salvage.

--

Every argument actually has two parts, even if people often only state the first part. There's the argument itself, and an implied claim of why the argument would matter if it were true. Call this implied part the relevance claim.

Suppose that I say "Martians are green". Someone else says, "I have seen a blue Martian", and means "I have seen a blue Martian (argument), therefore your claim of all Martians being green is false (relevance claim)". But I might interpret this as them saying, "I have seen a blue Martian (argument), therefore your claim of most Martians being green is less likely (relevance claim)". I then indicate agreement. Now I will be left with the impression that the other person made a true-but-not-very-powerful claim that left my argument mostly intact, whereas the other person is left with the impression that they made a very powerful claim that I agreed with, and therefore I admitted that I was wrong.

We could also say that the relevance claim is a claim of how much the probability of the original statement would be affected if the argument in question were true. So, for example, "I have seen a blue Martian (argument), therefore the probability of 'Martians are green' is less than .01 (relevance claim)", or equivalently, "I have seen a blue Martian" and "P(Martians are green | I have seen a blue Martian) < .01".
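As a toy sketch of this (with numbers invented purely for illustration), the two readings of "Martians are green" can be compared by running the same piece of evidence through Bayes' rule. The priors and likelihoods below are assumptions, not anything from the post:

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: P(H | E) from P(H), P(E | H), P(E | not H)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1 - prior)
    return numerator / evidence

# Reading 1: "ALL Martians are green". A single blue Martian is
# impossible under this hypothesis, so the posterior collapses to 0.
p_all = posterior(prior=0.5, likelihood_if_true=0.0, likelihood_if_false=0.2)

# Reading 2: "MOST Martians are green". A blue Martian is merely
# unlikely under this hypothesis, so the posterior only drops somewhat.
p_most = posterior(prior=0.5, likelihood_if_true=0.05, likelihood_if_false=0.2)

print(p_all)   # 0.0 -- the claim is refuted outright
print(p_most)  # 0.2 -- the claim is weakened but survives
```

The same evidence sentence carries a very different relevance claim depending on which hypothesis the listener thinks is under discussion, which is exactly the mismatch in the opening story.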

If someone says something that I feel is entirely irrelevant to the whole topic, inferential silence may follow.

Therefore, if someone makes an argument that I agree with, but I suspect that we might disagree about its relevance, I now try to explicitly comment on what my view of the relevance is. Example.

Notice that people who are treating arguments as soldiers are more likely to do this automatically, without needing to explicitly remind themselves of it. In fact, for every argument that their opponent makes that they're forced to concede, they're likely to immediately say "but that doesn't matter because X!". Because we like to think that we're not treating arguments as soldiers, we also try to avoid automatically objecting "but that doesn't matter because X" whenever our favored position gets weakened. This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.

(Cross-posted from my blog.)

Congratulations on rediscovering relevance implicatures.

asr:

I basically agree, but I think the point is stronger if framed differently:

Some defects in an argument are decisive, and others are minor. In casual arguments, people who nitpick are often unclear, both to themselves and to others, about whether their objections concern minor correctable details or seriously undermine the claim in question.

My impression is that mathematicians, philosophers, and scientists are conscious of this distinction and routinely say things like "the paper is a little sloppy in stating the conclusions that were proved, but this can be fixed easily" or "there's a gap in the argument here and I think it's a really serious problem." Outside professional communities, I don't see people make distinctions between fatal and superficial flaws in somebody else's argument.

In summary: I think your post is a good one but with minor correctable flaws.

I often find this distinction frustrating, in that people will sometimes jump to attacking what they believe is my relevance claim before making any kind of judgement on the truth or falsehood of my argument. And then they're wrong about the relevance claim. (if there even is one. Sometimes I'm just nitpicking.)

I have a sneaking, unverified suspicion that a lot of these cases of mutual misunderstanding are..."unconsciously deliberate" is the phrase that comes to mind, although that doesn't seem quite right. They're ways for both parties to walk away convinced that they're right, without either being required to legitimately engage (and thereby risk turning out to be wrong). For bonus points, both parties can honestly accuse the other of intellectual defection if they bring it up again later!

[anonymous]:

I often find this distinction frustrating, in that people will sometimes jump to attacking what they believe is my relevance claim before making any kind of judgement on the truth or falsehood of my argument. And then they're wrong about the relevance claim. (if there even is one. Sometimes I'm just nitpicking.)

That sounds like sensible behaviour. If your argument doesn't really matter, they're probably better off not bothering to judge its truth or falsehood, and instead spending their time and attention on something else. (Especially in a group setting where multiple people are making counterarguments, it's better to engage with the real objections than to get bogged down dealing with pedants and derailments.)

I'm pretty sure every statement has more than two parts. I don't have an example handy, but I've been in arguments which just went around and around until someone figured out what premise of an initial statement was being disagreed with.

Good. This is an important distinction. I think that what you say could be rephrased in the following terms. An argument consists of a) a number of premises P1, ..., Pn and b) an implicit or explicit claim to the effect that if P1, ..., Pn are true, then the probability that the conclusion C is true is raised to X (or, alternatively, raised relative to the prior probability Y of C by some amount - e.g. to Y + Z, where Z is some value).

In the case of deductive arguments, X = 1 (since if the premises of a deductive argument are true, then the conclusion is certainly true). When it comes to inductive arguments, however, X < 1.
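A toy illustration of the X = 1 case (my own made-up example, not anything from the comment): if the premises logically entail the conclusion, then among all possible worlds in which the premises hold, the conclusion holds too, so conditioning on the premises forces the conclusion's probability to 1:

```python
from itertools import product

# Toy deductive argument over two boolean atoms:
#   P1: a implies b      P2: a      Conclusion C: b  (modus ponens)
# Enumerate all truth assignments; among worlds where both premises
# hold, compute the fraction that also make the conclusion true.
worlds = list(product([False, True], repeat=2))  # each world is (a, b)

premise_worlds = [(a, b) for a, b in worlds if ((not a) or b) and a]
x = sum(b for a, b in premise_worlds) / len(premise_worlds)
print(x)  # 1.0 -- the premises being true forces the conclusion
```

An inductive argument would leave some premise-worlds where the conclusion fails, giving a fraction strictly below 1.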

I think that generally, we are better at assessing a) - whether the premises are true - than b) - what the probability of the conclusion is given that the premises are true. An important reason for this is, I think, that in order to understand how the truth of the premises affects the probability of the conclusion, we need a comprehensive understanding of the whole question, whereas we normally do not need that in order to judge whether the premises are true. In other words, b) is normally a hard "holistic" judgment, a) normally an easier "atomistic" judgment.

For instance, say that someone says that "this article uses statistical technique A whereas statistical technique B is standard in the field. That clearly makes the conclusions of the article untrustworthy". In this case, it is easy to check whether the premises are true - whether the article uses technique A and the standard in the field is technique B. However, in order to assess whether that indeed makes the conclusions untrustworthy, you need to have a good grasp of the relative reliabilities of techniques A and B, especially with regards to the specific methodology that the authors have used, and the conclusions they have arrived at.

In general, I think one should be quite explicit about the relevance of arguments. The story at the beginning of the post illustrates that nicely. Failure to be explicit about the relevance of arguments is a major problem within academia, too - nit-picky arguments are often given far too much weight, whereas more complicated but more significant arguments are unjustly ignored.

Thanks, I like your rephrasing.

In the case of deductive arguments, then X=1 (since if the premises are true in a deductive argument, then the conclusion is certainly true).

Bringing up the case of deductive arguments made me realize that the Tortoise's argument to Achilles seems like a case of relevance claims being used... creatively.

Hehe, creatively indeed.

I really liked your post on inferential silence, too. I'd be interested in reading more from you on argumentation, and in particular how we can use feedback to improve argumentation. It's a really important and somewhat neglected topic.

Thanks!

(Thought I'd replied to this earlier but apparently I hadn't.)

To me, "argument" means "A logical (or, at least, putatively logical) progression from premises to conclusion". "I have seen a blue Martian" is not an argument. If it is implied that you are wrong because of this, then "You are wrong because I have seen a blue Martian" would be the argument, while "I have seen a blue Martian" is a premise. So, in my nomenclature system, the issue is that many arguments/conclusions are left implicit. People often aren't even consciously aware that they are leaving the most important part of their argument unsaid. I find that often, much of responding to people consists of badgering them to make their arguments explicit, to little success.

Summary: Agreeing with people who insufficiently ironman an argument will be treated as agreeing that the argument is complete rubbish.

Summary: Agreeing with people who insufficiently ironman an argument...

straw man < iron man < steel man?

And I expect the reason is that people who insufficiently ironman an argument are either more interested in the argument's technical correctness, or more interested in discrediting the claim.

This is a good thing, but it also means that we're probably less likely than average to comment about an argument's relevance even in cases where we should comment on it.

That's my experience with myself.

We could also say that the relevance claim is a claim of how much the probability of the original statement would be affected if the argument in question were true.

Usually there is more than one statement in play, or more than one viable interpretation of a statement, as you point out with "Martians are green". So I prefer to think along the lines of "how much of the actionable value of the original statement(s) is affected by the new argument." This has the advantage of highlighting which elements of a cluster of statements are important, and which less so. It also highlights the possibility that participants in the conversation may see the relative importance differently.

IAWYC, but...

I would posit that the original conversation's discussion was too shallow. There is an opportunity cost to analysing or delving into every conversation to an extreme depth, rooting out the exact definitions or evidence being questioned to the point of resolving it. With shorter conversations of more implied meaning and less explicit meaning, there is a tendency for both sides to walk away feeling triumphant. There is also a tendency for any negative point 'scored' against an argument to somehow invalidate the entire point.

I'd argue that overcoming these problems - making the conversation productive and heading off future confusion or disagreement - requires a deeper and more thoughtful discourse. But agreeing to that is quite possibly a huge part of being a LWer. A lot of these sorts of conversations are with people who find it too tedious to dig into discussions that often. I have this as well with friends whom I consider smart and capable, but who demonstrate this ability only infrequently, preferring quick, shallow, "aha, I win" style discussions. I'd argue for the utility of the deeper purpose; they'd argue the time cost and the priority of the conversation.

Perhaps an "I agree with your minor nitpicks, but maintain the overall validity of the argument" disclaimer-type statement, attached to the agreement together with one's view of the relevance claim?