I'm reading Thinking, Fast and Slow. In appendix B I came across the following comment. Emphasis mine:

Studies of language comprehension indicate that people quickly recode much of what they hear into an abstract representation that no longer distinguishes whether the idea was expressed in an active or in a passive form and no longer discriminates what was actually said from what was implied, presupposed, or implicated (Clark and Clark 1977).

My first thought on seeing this is: holy crap, this explains why people insist on seeing relevance claims in my statements that I didn't put there. If the brain doesn't distinguish statement from implicature, and my conversational partner believes that A implies B when I don't, then of course I'm going to be continually running into situations where people model me as saying and believing B when I actually only said A. At a minimum this will happen any time I discuss any question of seemingly-morally-relevant fact with someone who hasn't trained themselves to make the is-ought distinction. Which is most people.

The next thought my brain jumped to: This process might explain the failure to make the is-ought distinction in the first place. That seems like much more of a leap, though. I looked up the Clark and Clark cite. Unfortunately it's a fairly long book that I'm not entirely sure I want to wade through. Has anyone else read it? Can someone offer more details about exactly what findings Kahneman is referencing?

14 comments

If you make the type of statements that normally are meant to imply things, but you make them without intending to imply anything, you're not communicating your intent well.

And that's your responsibility. It's easy to say "everyone should just look at what I am literally saying", but nobody's going to do that, and they'd be correct in not doing that.

I'm not sure I agree. Expecting people to judge stated claims and ignore implicature all the time is unreasonable, sure. But expecting them to judge stated claims over implicature when the stated claim is about empirical facts strikes me as plenty reasonable.

...or that was my opinion until now, anyway. This bit about the brain not actually distinguishing the two has me questioning it. I still don't think that it's okay to conflate them, but if the tendency to do so is hardwired, then it doesn't represent willful stupidity or intellectual dishonesty.

It is, however, still a problem, and I don't think it's one that can be blamed on the speaker; as Gunnar points out elsethread, it's hard to explicitly rule out implicatures that you yourself did not think of. It's also hard to have a discussion when you have to preface statements with disclaimers.

I should add that I am talking about relatively neutral statements here. If I may steal an example from yvain, if you say "The ultra-rich, who control the majority of our planet's wealth, spend their time at cocktail parties and salons while millions of decent hard-working people starve," you pretty much lose the right to complain. For contrast, if you say "90% of the planet's wealth is held by the upper 1%," and your discussion partner asks you why you support the monster Stalin, I think you're on solid ground asking them WTF.

...or again, so I thought. If the brain really doesn't distinguish between the neutral version of that statement and the listener's belief that people making it must be Communists, then the comparison is inevitable and I am boned.

This bit about the brain not actually distinguishing the two has me questioning it.

It is clearly an overstatement. People are very well able to distinguish them -- we are doing so right here. Perhaps what people are actually doing (I have not seen the Clark & Clark source to know what concrete observations they are discussing) is treating the implications as having been intended by the speaker just as much as the explicit assertions. Well, duh, as the saying goes.

Implicatures aren't some weird thing that the poor confused mehums do that the oppressed slans are forced to put up with. You don't say things when they are clear without being said, because it's a waste of time. It's a compression algorithm. As with any compression algorithm, the more you compress things, the more vulnerable the message is to errors, and you have a trade-off between the two.

This, btw, is my interpretation of the Ask/Guess cultural division. These are different compression algorithms, that leave out different stuff. Mixing compression algorithms is generally a bad idea: too much stuff gets left out if both are applied.

People are very well able to distinguish them -- we are doing so right here.

Are we? We're discussing the distinction, sure, but is each of us distinguishing the other's statements about implicature from the other's implications about implicature? Did I say everything you think I said? Did you say everything I think you said?

If I read this thread, then attempt to write down a list of significant statements you made from memory, and then compare that list to your actual text; will it contain things you did not say? Will it also contain things that I thought followed from what you said, but that you neither said nor meant?

My understanding of the original quote is that it will. I found that surprising, enlightening, and scary.

We're discussing the distinction, sure, but is each of us distinguishing the other's statements about implicature from the other's implications about implicature?

No, because we do not need to. We are responding to what we perceive the others' meanings to be, regardless of how explicitly they were expressed. Only if we are uncertain of an implication, or if one person perceives an implication that another did not intend, do we need to raise the issue.

If I read this thread, then attempt to write down a list of significant statements you made from memory, and then compare that list to your actual text; will it contain things you did not say?

Yes. This will still be the case if you do not do this from memory, but write out a paraphrase with the original text in front of you.

Will it also contain things that I thought followed from what you said, but that you neither said nor meant?

It may well do that as well. Communication is fallible. That is why it has to be a two-way process: fire and forget doesn't work. Hence also such devices as the ideological Turing test: formulating someone else's views on a subject in a way that they agree is accurate.

ETA: The university library has the Clark & Clark book. Kahneman and Tversky don't give a page reference, but chapters 2 and 3 of C&C discuss implicatures. What I gather from it is that yes, people make them. In experiments, their memories of a text are influenced in various ways by the background knowledge that they bring to bear. Incongruous details are more often forgotten, while congruous but absent details are added in retelling. The experiments cited in Clark & Clark measure things like how long certain linguistic comprehension tasks take and what errors are made. These are compared with the predictions of various models of how the brain is processing things.

Sounds like Bayesian updating to me, with the evidence received (the text) not being so strong as to screen off the priors. But unless you are specifically attending to it (as you might well be doing, e.g. when examining a witness at a trial) all you are aware of is the result.
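That updating story can be put in a toy sketch (the function and all numbers below are hypothetical illustrations of mine, not anything from Clark & Clark): a weak memory trace of a detail gets pulled toward the prior set by background knowledge, so a congruous detail is "remembered" even from a faint trace, while an incongruous one with the same trace is lost.

```python
# Toy Bayesian sketch of recall distorted by priors. All numbers are
# hypothetical. The "evidence" is a faint memory trace of a detail;
# because it is weak, the posterior is dominated by the prior that the
# listener's background knowledge assigns to the detail.

def posterior(prior, p_trace_given_present, p_trace_given_absent):
    """P(detail was in the text | a weak memory trace of it), by Bayes."""
    num = prior * p_trace_given_present
    den = num + (1 - prior) * p_trace_given_absent
    return num / den

# A schema-congruous detail (high prior) with only a faint trace:
congruous = posterior(0.8, 0.6, 0.4)    # ~0.86: confidently "recalled"
# An incongruous detail (low prior), identical faint trace:
incongruous = posterior(0.2, 0.6, 0.4)  # ~0.27: likely forgotten
print(congruous, incongruous)
```

Same evidence, opposite verdicts: the text is not strong enough to screen off the priors, which matches the pattern of congruous details being added and incongruous ones dropped in retelling.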

I found that surprising, enlightening, and scary.

Welcome to consciousness of abstraction! Your brain does not distinguish true beliefs from false beliefs!

ETA: The university library has the Clark & Clark book.

I don't think I have any more to say in this subthread, but I wanted to thank you for looking this up. Thanks.

For contrast, if you say "90% of the planet's wealth is held by the upper 1%," and your discussion partner asks you why you support the monster Stalin, I think you're on solid ground asking them WTF.

Maybe. It could have been your discussion partner's experience that everyone who brings up the 90% thing has, in fact, been a communist. If that's been their experience, then based on the knowledge that they have, that can be a reasonable question to ask. Compare with the claim "Marx wrote that [whatever]"; even though this might be a neutral factual claim in principle, in practice anyone who brings that up in a discussion is much more likely to be a Marxist than someone who doesn't.

It's really easy to assume that people should take your comments at face value, but that tends not to work. It's slightly harder to actually make explicit, as part of your comment, that you are not expecting the usual connotations to follow...but a lot more effective.

"holy crap, this explains why people insist on seeing relevance claims in my statements that I didn't put there. If the brain doesn't distinguish statement from implicature, and my conversational partner believes that A implies B when I don't, then of course I'm going to be continually running into situations where people model me as saying and believing B when I actually only said A."

In particular, I've noticed that depression causes recollections to be modded in the negative direction. People will remember slight rewordings of sentences that makes their implications sound worse.

It also implies a fix: make implications explicit. The difficult part is finding the implications that could be made but that you don't explicitly think of.

this explains why people insist on seeing relevance claims in my statements that I didn't put there.

Seems so, yes.

This process might explain the failure to make the is-ought distinction in the first place. That seems like much more of a leap, though.

Yes, quite a leap. The is-ought debate persists even when people take pains to understand precisely what others are saying.


I guess what you mean is something along the lines of: if we are used to "X is natural" being used as an argument for "X is good", then we assume it is meant so every time we hear it, even when that is not the case? But for that to happen, the fallacy must first be committed often enough for us to become used to it. If nobody ever used "X is natural" in the sense of "X is good", why would we jump to that implication?

My point is that implicature is usually social. If you invent a really unusual implicature, people will not assume you intend it; they will assume it only once they have heard it so often that they have become used to it. An example is that capitalism-is-bad carries the implicature of socialism-is-good. The only reason it carries it is sheer frequency, for it is most often socialists who criticize capitalism.

In other words, the whole thing may not be much more than a frequency based prediction: people probably mean what they most often mean.
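The frequency-based-prediction idea can be sketched in a few lines (the scenario and all counts are hypothetical, made up for illustration): the implicature a hearer attaches to a claim is just the most common position among the past speakers of that claim they have encountered.

```python
# Toy sketch of "people probably mean what they most often mean".
# All counts are hypothetical. The hearer predicts the speaker's
# position from the base rate among past speakers of the same claim.

from collections import Counter

# Hypothetical tally of past speakers of the "capitalism is bad" claim
# and the broader position each turned out to hold:
past_speakers = Counter({"socialist": 47, "other": 3})

def predicted_meaning(counts):
    """Most frequent past meaning and its relative frequency."""
    total = sum(counts.values())
    meaning, n = counts.most_common(1)[0]
    return meaning, n / total

print(predicted_meaning(past_speakers))  # ('socialist', 0.94)
```

On this picture the "unfair" implicature is just an accurate base-rate prediction, which is why it only attaches to claims that some group actually makes often.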

An example is that capitalism-is-bad carries the implicature of socialism-is-good. The only reason it carries it is simply the frequency of it, for it is most often socialists who criticize capitalism.

I don't think that's the only reason. There's also the desire to fit every political position into one of a few familiar boxes.

Studies of language comprehension indicate that people quickly recode much of what they hear into an abstract representation that...

contains few traces of the actual spoken language (at least if you are a competent speaker).