It's too damn hard to be a tapper.
In 1990, Elizabeth Newton earned a Ph.D. in psychology at Stanford by studying a simple game in which she assigned people to one of two roles: "tappers" or "listeners." Tappers received a list of twenty-five well-known songs, such as "Happy Birthday to You" and "The Star-Spangled Banner." Each tapper was asked to pick a song and tap out the rhythm to a listener (by knocking on a table). The listener's job was to guess the song, based on the rhythm being tapped.
The tappers got their message across 1 time in 40, but they thought they were getting it across 1 time in 2. Why?
Because it's hard to be a tapper. The tapper is humming the song in her head while tapping; she is seeing the whole picture, and it's so clear! Putting yourself in the listener's shoes - pretending to be someone who doesn't know what you know, but does know lots of things that you don't - is extremely difficult.
In my experience, simply being aware of this is a powerful piece of information for improving the dialogue. For the tapper, it means exploring how to be clear and explicit in what is said, and taking the other person's knowledge into account. For the listener - well, it means shutting up, listening, and asking exploratory questions.
This experiment is cited in the excellent book Made to Stick.
"Made to stick" was on my list — I will have to bump it upwards a few notches. Thank you for posting this.
Nice article Kaj -- this is a phenomenon I've come up against myself several times, so it's really nice to see a carefully worked analysis of this situation. In a probabilistic sense, perhaps intuitive differences are priors that arise from evidence that a person no longer recalls directly; although the person may have rationally based their belief on evidence, they are unable to convince another person since they do not have the original evidence at hand. I'm particularly thinking of cases where the "evidence" comprises many small experiences over a prolonged period, making it particularly difficult to replay to another person. A carpenter's intuition about the strength of a piece arises from their experiences working with wood, but no single piece of evidence could easily be recalled to transfer that intuition to someone else.
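To make that picture concrete, here is a minimal Bayesian sketch (all likelihood numbers invented for illustration) of an agent that keeps only its running posterior and discards each observation, much like the carpenter who retains the intuition but not the individual experiences:

```python
# A minimal sketch (all numbers made up): an agent updates a belief on a
# stream of observations, but keeps only the running posterior -- the
# observations themselves are discarded, like the carpenter's experiences.

def update(prior, lik_if_true, lik_if_false):
    """One Bayesian update: returns P(hypothesis | observation)."""
    joint_true = prior * lik_if_true
    return joint_true / (joint_true + (1 - prior) * lik_if_false)

belief = 0.5  # start undecided
observations = [(0.8, 0.4), (0.7, 0.5), (0.9, 0.3)]  # hypothetical likelihoods

for lik_t, lik_f in observations:
    belief = update(belief, lik_t, lik_f)
    # The observation is now gone; only `belief` survives. Years later the
    # agent can report the number, but not the evidence that produced it.

print(f"final belief: {belief:.2f}")  # the 'intuition', minus its history
```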
intuitive differences are priors that arise from evidence that a person no longer recalls directly
Incidentally, this is how people get embedded in theism. In my own case, I was presented with some "proofs" of religion when I was 14; they actually weren't terrible, but in retrospect they had logical holes. But once they get embedded in your mind, they are very hard to get out. You have to pull yourself out by your own hair, so to speak. Or have an emotionally significant event of large magnitude happen to you.
That doesn't mean one or both conclusions aren't wrong. If someone has experiences that are not representative of problems in general, they'll have flawed intuitions.
Also, the problem at hand may simply be atypical, so someone with well-rounded experiences will be wrong.
A while back I read that a great many political and religious debates of our time arise out of these two competing axioms:
There's nothing more important than children and family.
There's nothing more important than personal autonomy and choice.
These competing intuitions are responsible for arguments about abortion, gay rights, birth control, feminism, religion, and so many other things. It stands to reason that competing axioms are why no one ever wins these arguments.
Well, empiricism should work. What are your relative track records when you disagree? In predictions in general? If nothing else, there's Aumann's agreement theorem, though it doesn't hold under the kinds of irrationality that could lead to you both being confirmed in your opinions by the same evidence. For that, there's only practice in being VERY explicit about thinking about conservation of expected evidence, in order to reduce your susceptibility to confirmation bias.
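For reference, the conservation of expected evidence identity mentioned above is just the law of total probability applied to one's future posterior:

    P(H) = P(H|E) P(E) + P(H|~E) P(~E)

Your current credence already equals your expected credence after seeing the evidence, so if you anticipate that a debate will confirm your view whichever way the evidence falls, something has gone wrong.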
I think she's been correct more often, though an accurate estimate is made difficult by the fact that, after three hours of extensive debate, we tend to discover we actually never disagreed much in the first place and were simply expressing ourselves unclearly. Still, I have adjusted my confidence in my own intuition downwards in light of her possibly having been correct more often.
Incidentally, trying to estimate our respective competences in this leads to an interesting circularity. Much of my intuition is grounded in what I know of mathematics and computer science, while she appealed to examples from medicine and biology. I'm tempted to think that math and cs, which in principle study all the possible ways in which any phenomena could be modeled, would be more useful than the estimates of doctors and biologists whose conceptual toolkits are limited to what has traditionally worked in their narrower domain... but that would require me to first assume that the very general models studied in math and cs can be easily adapted to specific complex domains - and that ease of adaptability was the very thing the disagreement was all about! So in order to judge whose expertise is more applicable for resolving the question, we'd first need to resolve the question. (This is what I was talking about when I said we ran into the conflict of intuitions on at least five separate occasions.) Of course, I still don't know that much cs and math, so I might also be mistaken about how much credence those fields lend to my intuition.
Two comments:
a) In the cs domain, suppose that the phenomenon that you were trying to model was the output of a cryptographic-quality pseudo-random generator for which you did not know the seed. Would you expect to be able to model its output accurately?
b) My gut reaction to your original post was that I'd expect them to partition roughly between cases where there is lots of experimental data compared to the parameter space of the system in question vs. where the parameter space is much larger than the reasonably accessible volume of experimental data. Of course, one doesn't really know the parameter space till one has a successful model :-( ...
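As an illustration of point (a), a quick sketch of why a seedless modeler is stuck at chance; the hash-of-counter construction below is purely for demonstration, not any particular standard generator:

```python
# Sketch of point (a): without the seed, no model of a cryptographic-quality
# bit stream should beat coin-flipping. (Hash-of-counter construction is
# purely illustrative.)
import hashlib
import secrets

def bit_stream(seed: bytes, n: int):
    """Yield n bits derived from SHA-256(seed || counter)."""
    for i in range(n):
        yield hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[0] & 1

hidden_seed = secrets.token_bytes(16)  # the would-be modeler never sees this
bits = list(bit_stream(hidden_seed, 10_000))

# The best seed-free "model" is a constant guess at the majority bit:
majority = max(set(bits), key=bits.count)
accuracy = sum(b == majority for b in bits) / len(bits)
print(f"best constant guess: {accuracy:.1%} accurate")  # hovers around 50%
```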
suppose that the phenomenon that you were trying to model was the output of a cryptographic-quality pseudo-random generator for which you did not know the seed. Would you expect to be able to model its output accurately?
Uhh, no, I wouldn't. But that hardly describes most naturally occurring phenomena.
Ok. In a sense, all of the difference between your intuition and your friend's intuition can be viewed as a question of how to construe "most". There are lots of systems in both categories. There is also a bias in which ones we research: unless a problem is extraordinarily important, if attempts to build models for a phenomenon keep failing, and we have any reason to suspect e.g. chaotic behavior, we fall back to e.g. settling for statistical information.
Also, there is a question of how much precision one is looking for: the orbits of the planets look like clockwork even on moderately long timescales - but there do turn out to be chaotic dynamics (iirc, the fastest divergence is in one of the orbital elements of Mars), and this injects chaotic dynamics into everything else, if you want to predict far enough into the future.
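As a toy stand-in for this kind of divergence (using the logistic map rather than any actual orbital dynamics), consider two trajectories started 10^-12 apart in the chaotic regime:

```python
# Two logistic-map trajectories (r = 4, the chaotic regime) that start
# 1e-12 apart. A stand-in for orbital dynamics, not a model of them.
r = 4.0
x, y = 0.3, 0.3 + 1e-12

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
# The separation grows roughly exponentially until it saturates at order 1;
# beyond that horizon the model's state tells you nothing about the system's.
```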
The important thing to realize is that neither intuition rests on any particular piece of evidence. Instead, each one is a general outlook that has been formed over many years and countless pieces of evidence, most of which have already been forgotten.
This seems an overly rosy view of intuition that assumes it's largely forged of admissible evidence and competent updating. Is there much reason to believe that we tend to form worthwhile intuitions from sporadic bits of evidence?
I tend to be pretty wary of beliefs that have no easily accessed or communicated justification. What of epistemic hygiene? This looks like a time to confess your ignorance and move on.
Is there much reason to believe that we tend to form worthwhile intuitions from sporadic bits of evidence?
Not necessarily. But even if the intuitions were originally formed from only a couple of pieces of evidence, and plenty of confirmation bias afterwards, it still isn't going to be easy and effortless to update them to the correct level later on. (Notice also that I remarked that theoretically, the data might be exactly the same for both people, only received in a different order. That doesn't exactly mesh well with an assumption of competent updating.)
Original reasons for adopting a belief may be tangled and forgotten, but if your differing beliefs lead to differing concrete predictions, you should be able to test them without diving into the original justifications.
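One concrete way to run such a test is to score both people's probabilistic predictions on the same events, for instance with a Brier score; a sketch with invented numbers:

```python
# Sketch: compare two people's track records on shared predictions using the
# Brier score (lower is better). Probabilities and outcomes are made up.
def brier(predictions, outcomes):
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0]          # what actually happened
mine     = [0.9, 0.4, 0.6, 0.8, 0.3]
hers     = [0.7, 0.2, 0.8, 0.9, 0.1]

print(f"my Brier score:  {brier(mine, outcomes):.3f}")
print(f"her Brier score: {brier(hers, outcomes):.3f}")
# Whoever scores lower over enough shared predictions earns extra weight for
# their intuition -- no need to excavate the original justifications.
```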
This particular issue of how much people care about mathematical simplicity of models seems to be affecting things a lot, and people who disagree on that simply talk past each other.
For a concrete example, just look at Robin's recent Malthusian posts, which I (simple-model-skeptic) find utterly ridiculous due to their reliance on a model with known false assumptions, conflicts with a lot of empirical data, and great uncertainty about the future, while Robin (simple-model-lover) basically says that because we don't have any better model that's equally simple, this one must be true and can be extrapolated as much as we feel like.
I feel somewhat better now, with Kaj clarifying it. I can imagine there are probably many other such cases where people completely disagree about their worldview.
For a concrete example, just look at Robin's recent Malthusian posts, which I (simple-model-skeptic) find utterly ridiculous due to their reliance on a model with known false assumptions, conflicts with a lot of empirical data, and great uncertainty about the future, while Robin (simple-model-lover) basically says that because we don't have any better model that's equally simple, this one must be true and can be extrapolated as much as we feel like.
I tend to agree. I've got the distinct impression that the dubious assumptions you've mentioned are motivated by orthodoxy more than accuracy.
I find it useful (or at least interesting) to explore what things may be like if we follow certain assumptions to their conclusion. Robin's 'burning the cosmic commons' analysis is an example I gleaned insight from. However when I am using such models to make actual predictions about the future I take far more care in my assumption selection.
Yes, you can also use new evidence to compare the intuitions, instead of going back to the old evidence. I actually meant to say this in the post, but it didn't come across very clearly. Not that it'd make much of a difference - you're still looking at general trends instead of anything tightly and narrowly defined, so you still need a small mountain of cases to test the predictions on.
If you can't recall the evidence that led to your intuition, how can you verify whether your past self updated correctly? Or even whether your friend's past self updated correctly!
It seems like you should put more weight on the intuition of the person you believe to be more rational in the present and past.
When you encounter a roadblock, you don't need to give up. You can simply emulate each other's intuitions and proceed with a provisional argument (assuming your worldview is true...).
The mathematicians have pretty much answered this question. Point to Friend, and Kaj needs to read up on chaos theory. Chaos theory describes the types of systems that can't be well modeled if the model system deviates even very slightly from the system of interest.
Ah, but which proportion of all existing systems are chaotic? And of those that are, how chaotic are they? To what degree can you still extract predictable properties from chaotic systems? The Wikipedia page on chaos theory says that
Chaotic behavior has been observed in the laboratory in a variety of systems including electrical circuits, lasers, oscillating chemical reactions, fluid dynamics, and mechanical and magneto-mechanical devices. Observations of chaotic behavior in nature include the dynamics of satellites in the solar system, the time evolution of the magnetic field of celestial bodies, population growth in ecology, the dynamics of the action potentials in neurons, and molecular vibrations. Everyday examples of chaotic systems include weather and climate.
but we still have useful models of all of those systems, even though they're imperfect. Whether the models are useful enough depends on what we try to do with them, and on how accurate the results need to be. Chaos theory is evidence in favor of my friend's intuition, to be certain, but it doesn't seem to resolve the question by itself. A comprehensive review of different systems and phenomena and the limits of how well they can be modeled could go a long way in that direction, though. Anybody know of one?
All of this is coming from a book on chaos theory I read ~5 years ago, so take it for what it's worth. As I recall:
Microscopic wind currents cause worldwide changes to the weather in about a month.
On a frictionless billiard table, microscopic differences in initial conditions cause significant changes in the system after about a minute of collisions, since each collision magnifies the difference between the model's results and the system's.
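That recollection can be turned into back-of-the-envelope arithmetic: if each collision multiplies the positional error by some factor k (the value below is invented, as the book's actual figure isn't given here), the predictability horizon grows only logarithmically in measurement precision:

```python
import math

# Back-of-the-envelope version of the billiard claim. k is a hypothetical
# error-amplification factor per collision.
initial_error = 1e-12   # metres, say: an absurdly precise measurement
k = 10.0                # error multiplier per collision (made up)

collisions_until_useless = math.log(1 / initial_error) / math.log(k)
print(f"~{collisions_until_useless:.0f} collisions until error is order 1")
# Doubling your measurement precision buys only ~log(2)/log(k) extra
# collisions of predictability -- the signature of chaotic dynamics.
```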
Plus, your claim was a claim about all systems. Hers was a claim about some systems. In general, you had the harder case to make.
Anyway, the Wikipedia article was a good place to start, but probably not deep enough if these questions interest you greatly.
For what it's worth, I strongly side with your Friend's intuitions, and I'm greatly annoyed by people using obviously faulty models and responding to criticism with "so what other model do you propose instead?", as if not having a pretty model wasn't an option.
Kaj, you should really put a link to Aumann agreement and common priors in here. Robin and Tyler's paper, perhaps.
This seems like the conflict between two deep-seated heuristics, hence it would be difficult at best to argue for the right one.
Instead, I suggest a synthetic approach. Stop treating the two intuitions as a false dichotomy, and consider the continuum between them (or even beyond them).
Note that your intuition takes the form of a universal statement ("all phenomena can be modeled well, with sufficient effort") while your friend's takes the form of an existential statement ("some phenomena cannot be modeled well"). This makes your friend's view more plausible, a priori.
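In symbols, writing M(x) for "phenomenon x can be modeled well" (a label introduced here purely for illustration): the first intuition is (∀x) M(x), and the second is (∃x) ¬M(x), which is exactly ¬(∀x) M(x). A universal claim is refuted by a single counterexample, while the existential claim needs only a single witness, hence the asymmetry in prior plausibility.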
It's not a question of having different evidence: theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.
Disappointing, but true. If humans were perfect Bayesians the order of presentation wouldn't matter, but instead our biases kick in and skew the evidence as it arrives.
Edit: Ah, I see you already mentioned confirmation bias versus competent updating.
theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.
I happen to be studying conflicts in a completely different domain, in which I claim the solution is to ensure that the events produce identical results no matter the order in which they are applied. I briefly wondered whether my result could be useful in other domains, and I thought of lesswrong: perhaps we should advocate update strategies which don't depend on the order in which the evidence is encountered.
And then your post came up! Nice timing.
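For what it's worth, ideal Bayesian updating already has this property; a quick sketch (with invented likelihoods) checking that every ordering of the evidence yields the same posterior:

```python
from itertools import permutations

# Sketch (likelihoods made up): exact Bayesian updating is order-independent,
# because posterior odds = prior odds times a product of likelihood ratios,
# and products commute.
def posterior(prior, evidence):
    p = prior
    for lik_h, lik_not_h in evidence:
        p = p * lik_h / (p * lik_h + (1 - p) * lik_not_h)
    return p

evidence = [(0.8, 0.3), (0.4, 0.6), (0.9, 0.5)]
posteriors = {round(posterior(0.5, order), 12) for order in permutations(evidence)}
print(posteriors)  # one element: every ordering yields the same posterior
```

This suggests the order-dependence discussed in the post is a fact about human updating (confirmation bias reweighting later evidence), not about the Bayesian ideal.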
This seems to open up the idea of 'logic games' a la confirmation bias. E.g., the likelihood of:
- the US invading China
- China testing nukes in the Pacific
- the US invading because of China testing nukes in the Pacific
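For what it's worth, the ordering such puzzles fish for is constrained by the conjunction rule,

    P(A and B) ≤ min(P(A), P(B))

so the third option can never be more likely than either of the first two, however plausible the combined story sounds.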
In fact, there's a lot of stuff from this site that could be compiled into a "Puzzles for People with High IQs" book, except if somebody from here made the book it would be more useful, and less arbitrary in what it considers to be the 'right' answers.
Two days back, I had a rather frustrating disagreement with a friend. The debate rapidly hit a point where it seemed to be going nowhere, and we spent a while going around in circles before agreeing to change the topic. Yesterday, as I was riding the subway, things clicked. I suddenly realized not only what the disagreement had actually been about, but also what several previous disagreements we'd had were about. In all cases, our opinions and arguments had been grounded in opposite intuitions:
You may notice that these intuitions are not mutually exclusive in the strict sense. They could both be right, one of them covering certain classes of things and the other the remaining ones. And neither one is obviously and blatantly false - both have evidence supporting them. So the disagreement is not about which one is right, as such. Rather, it's a question of which one is more right - which one has broader applicability.
As soon as I realized this, I also realized two other things. One, whenever we would run into this difference in the future, we'd need to recognize it and stop that line of debate, for it wouldn't be resolved before the root disagreement had been solved. Two, actually resolving that core disagreement would take so much time and energy that it probably wouldn't be worth the effort.
The important thing to realize is that neither intuition rests on any particular piece of evidence. Instead, each one is a general outlook that has been formed over many years and countless pieces of evidence, most of which have already been forgotten. Before my realization, neither of us had even consciously known they existed. They are abstract patterns our minds have extracted from what must be hundreds of different cases we've encountered, very high-level hypotheses that have been repeatedly tested and found to be accurate.
It would be impossible to find out which was the more applicable one by means of regular debate. Each of us would have to gather all the evidence that led to the formulation of the intuition in the first place. Pulling a number out of my hat, I'd guess that a comprehensive overview of that evidence (for one intuition) would run at least a hundred pages long. Furthermore, it wouldn't be sufficient for each of us to simply read the other side's overview, once it had been gathered. By this point, we would be interpreting the evidence in light of our already existing intuition. I wouldn't be surprised if simply reading through the summary would lead to both sides only being more certain of their own intuition being right. We would have to take the time to discuss each individual item in detail.
And if a real attempt to sort out the difference is hard, resolving it in the middle of a debate about something else is impossible. Both sides in the debate will have an opinion they think is obvious and be puzzled as to why the other side can consistently fail to get something so obvious. At the same time, neither can access the evidence that leads them to consider their opinion so obvious, and both will grow increasingly frustrated at both the other side's bone-headedness and their own failure to properly communicate something that shouldn't even need explaining.
In many cases, trying to resolve an intuitive difference simply isn't worth the effort. Learn to recognize your intuitive differences, and you'll know when to break off debates once they hit that difference. Putting those intuitions in words still helps understanding, though. When I told my friend the things I've just written here, she agreed, and we were able to have a constructive dialogue about those differences. (While doing so, and returning to the previous day's topic, we were able to identify at least five separate points of disagreement that were all rooted in the same intuitive difference.) Each one was also able to explain, on a rough level, some of the background that supported their intuition. In the end, we still didn't agree, but at least we understood each other's positions a little better.
But what if the intuitive difference is about something really important? My friend and I resolved to just wait things out and see whose hypothesis would turn out more accurate, but sometimes the difference might affect big decisions about the actions you want to take. (Robin's and Eliezer's disagreement on the nature of the Singularity comes to mind.) What if the disagreement really needs to be solved?
I'm not sure how well it can be done, but one could try. First off, both need to realize that in all likelihood, both intuitions have a large grain of truth to them. Like with me and my friend, the question is often one of the breadth of applicability, not of strict truth or falsehood. Once the basic positions have been formulated, both should ask whether, not why. Assign some certainty value to the likelihood of your intuition being the more correct one, and then consider the fact that your "opponent" has spent many years analyzing evidence to reach their position and might very well be right. Adjust your certainty downwards to account for this realization. Then take a few weeks considering both the things that may have led you to formulate this intuition, as well as the things that might have led your opponent to theirs. Spend time gathering evidence for both sides of the view, and be sure to give each piece of evidence a balanced treatment: half of the time, first consider a case from the PoV of your opponent's hypothesis and then from your own; the other half of the time, do it the other way around. Commit all such considerations to writing and present them to your opponent at regular intervals, taking the time to discuss them through. This is no time for motivated skepticism - both of you need to have a genuine crisis of faith in order for things to get anywhere.
Not every disagreement is an intuitive difference. Any disagreement that rests on particular pieces of evidence and can be easily resolved with the correct empirical evidence isn't one. If it feels like one of the intuitions is strictly false instead of having a large grain of truth to it, it's still an intuitive difference, but not the kind I have been covering here. An intuitive difference is also related to, but different from, an inferential distance. In order to resolve it, a lot of information needs to be absorbed - but by both partners, not just one of them. It's not a question of having different evidence: theoretically, you might both even have exactly the same evidence, but gathered in a different order. The question is one of differing interpretations, not raw data as such.