I'm reading Thinking, Fast and Slow. In appendix B I came across the following comment. Emphasis mine:
Studies of language comprehension indicate that people quickly recode much of what they hear into an abstract representation that no longer distinguishes whether the idea was expressed in an active or in a passive form and no longer discriminates what was actually said from what was implied, presupposed, or implicated (Clark and Clark 1977).
My first thought on seeing this is: holy crap, this explains why people insist on seeing relevance claims in my statements that I didn't put there. If the brain doesn't distinguish statement from implicature, and my conversational partner believes that A implies B when I don't, then of course I'm going to be continually running into situations where people model me as saying and believing B when I actually only said A. At a minimum this will happen any time I discuss any question of seemingly-morally-relevant fact with someone who hasn't trained themselves to make the is-ought distinction. Which is most people.
The next thought my brain jumped to: This process might explain the failure to make the is-ought distinction in the first place. That seems like much more of a leap, though. I looked up the Clark and Clark cite. Unfortunately it's a fairly long book that I'm not entirely sure I want to wade through. Has anyone else read it? Can someone offer more details about exactly what findings Kahneman is referencing?
No, because we do not need to. We are responding to what we perceive the other's meaning to be, regardless of how explicitly it was expressed. Only if we are uncertain of an implication, or if one person perceives an implication that another did not intend, do we need to raise the issue.
Yes. This will still be the case if you do not do this from memory, but write out a paraphrase with the original text in front of you.
It may well do that as well. Communication is fallible. That is why it has to be a two-way process: fire and forget doesn't work. Hence also such devices as the ideological Turing test: formulating someone else's views on a subject in a way that they agree is accurate.
ETA: The university library has the Clark & Clark book. Kahneman and Tversky don't give a page reference, but chapters 2 and 3 of C&C discuss implicatures. What I gather from them is that yes, people make implicatures. In experiments, their memories of a text are influenced in various ways by the background knowledge that they bring to bear. Incongruous details are more often forgotten, while congruous but absent details are added in retelling. The experiments cited in Clark & Clark measure things like how long certain linguistic comprehension tasks take and what errors are made. These are compared with the predictions of various models of how the brain is processing things.
Sounds like Bayesian updating to me, with the evidence received (the text) not being so strong as to screen off the priors. But unless you are specifically attending to it (as you might well be doing, e.g. when examining a witness at a trial) all you are aware of is the result.
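To make the "weak evidence doesn't screen off the prior" point concrete, here is a minimal sketch in Python of Bayes' rule in odds form. The numbers are purely illustrative (my own, not from Clark & Clark): a reader with a strong prior that A implies B, updated first on weakly contrary evidence from the literal text, then on strongly contrary, explicit evidence.

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    Returns the posterior probability."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical strong prior that A implies B:
prior = 0.9

# Weakly contrary evidence (likelihood ratio close to 1) barely
# moves the posterior -- the prior dominates what is "remembered":
weak = update(prior, 0.8)     # ~0.878

# Strongly contrary, explicit evidence moves it much further:
strong = update(prior, 0.05)  # ~0.310

print(round(weak, 3), round(strong, 3))
```

The point of the sketch: when the text only weakly constrains the interpretation, the posterior stays near the prior, which matches the observation that readers reconstruct congruous details they brought with them rather than what was literally said.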
Welcome to consciousness of abstraction! Your brain does not distinguish true beliefs from false beliefs!
I don't think I have any more to say in this subthread, but I wanted to thank you for looking this up. Thanks.