An interesting new entry on Gregory Cochran and Henry Harpending's well-known blog, West Hunter. For me, the LessWrong articles on inferential distances complemented it nicely. Link to source.

There is a spectrum of problem-solving, ranging from, at one extreme, simplicity and clear chains of logical reasoning (sometimes long chains) to, at the other, building a picture by sifting through a vast mass of evidence of varying quality. I will give some examples. Just the other day, when I was conferring, conversing and otherwise hobnobbing with my fellow physicists, I mentioned high-altitude lightning: sprites, elves, and blue jets. I said that you could think of a thundercloud as a vertical dipole, with an electric field that decreases as the cube of altitude, while the breakdown voltage varies with air pressure, which declines exponentially with altitude. At which point the prof I was talking to said, “and so the curves must cross!” That’s how physicists think, and it can be very effective. The amount of information required to solve the problem is not very large. I call this a ‘thin’ problem.
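To make the crossing concrete, here is a minimal numeric sketch of the argument. Every parameter value below (the cloud dipole moment, the sea-level breakdown field, the atmospheric scale height) is an illustrative assumption, not a figure from the post:

```python
# A 1/z^3 dipole field versus an exponentially decaying breakdown field:
# scan altitudes and report where the first curve overtakes the second.
# All parameter values are illustrative assumptions.
import numpy as np

EPS0 = 8.854e-12     # vacuum permittivity, F/m
P = 1.0e6            # assumed cloud dipole moment, C*m (~1000 C*km)
E_BREAK_SL = 3.0e6   # approximate breakdown field at sea level, V/m
H = 7.4e3            # approximate atmospheric pressure scale height, m

z = np.linspace(20e3, 100e3, 2000)          # altitudes from 20 to 100 km, in m
e_dipole = P / (2 * np.pi * EPS0 * z**3)    # on-axis dipole field, falls as 1/z^3
e_break = E_BREAK_SL * np.exp(-z / H)       # breakdown field, falls with pressure

above = np.nonzero(e_dipole >= e_break)[0]  # indices where breakdown is exceeded
if above.size:
    print(f"curves cross near {z[above[0]] / 1e3:.0f} km")  # about 86 km here
else:
    print("no crossing below 100 km for these parameters")
```

With these numbers the ambient field first exceeds the local breakdown field in the mesosphere, roughly where sprites are observed; with a smaller assumed dipole moment the curves need not cross below 100 km at all, which is the point a commenter raises below.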

At the other extreme, consider Darwin gathering and pondering a vast amount of natural-history information, eventually coming up with natural selection as the explanation. Some of the information in the literature wasn’t correct, and much key information that would have greatly aided his quest, such as basic genetics, was still unknown. That didn’t stop him, any more than not knowing the cause of continental drift stopped Wegener.

In another example at the messy end of the spectrum, Joe Rochefort, running Hypo in the spring of 1942, needed to figure out Japanese plans. He had an ever-growing mass of Japanese radio intercepts, some of which were partially decrypted – say, one word in five, with luck. He had data from radio direction-finding; his people were beginning to be able to recognize particular Japanese radio operators by their ‘fist’. He’d studied in Japan and knew the Japanese well. He had plenty of Navy experience – he knew what was possible. I would call this a classic ‘thick’ problem, one in which an analyst needs to deal with an enormous amount of data of varying quality. Being smart is necessary but not sufficient: you also need to know lots of stuff.

At this point he was utterly saturated with information about the Japanese Navy.  He’d been  living and breathing JN-25 for months. The Japanese were aimed somewhere,  that somewhere designated by an untranslated codegroup – ‘AF’.  Rochefort thought it meant Midway, based on many clues, plausibility, etc.  OP-20-G, back in Washington,  thought otherwise. They thought the main attack might be against Alaska, or Port Moresby, or even the West Coast.

Nimitz believed Rochefort – who was correct. Because of that, we managed to prevail at Midway, losing one carrier and one destroyer while the Japanese lost four carriers and a heavy cruiser*. As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

The usual explanation of Joe Rochefort’s fall argues that John Redman (head of OP-20-G, the Navy’s main signals-intelligence and cryptanalysis group) won the bureaucratic struggle largely through his geographical proximity to Navy headquarters, along with the influence of his brother, Rear Admiral Joseph Redman. That and being a shameless liar.

Personally, I wonder if part of the problem is the great difficulty of explaining the analysis of a thick problem to someone without a similar depth of knowledge.  At best, they believe you because you’ve  been right in the past.  Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming your answer – as when Rochefort took Jasper Holmes’s suggestion and had Midway broadcast an uncoded complaint about the failure of their distillation system – soon followed by a Japanese report that ‘AF’ was short of water.

Most problems in the social sciences are ‘thick’, and unfortunately, almost all of the researchers are as well. There are a lot more Redmans than Rocheforts.

12 comments:

Or, sometimes, once you have developed the answer, there is a ‘thin’ way of confirming your answer

This seems like a promising tactic in general - not just for convincing people.


Physicists of LW, would you care to weigh in on this? How well does Cochran's article describe your thought processes?

"And therefore the curves must cross" seems more like mathematical thinking to me. (Which has its uses in physics, obviously.) Certainly the curves must cross at some point, but it is not obvious (well, not to me) that they will do so within the range for which third-power decay and exponential decay are good approximations. (Obviously the electric field is nowhere infinite, so the x-inverse-cube relation cannot be intended as an exact description everywhere - it must break down near the zero.) To see that you'd have to put in the boundary conditions: The starting values and the constants of decay.

That said, I'm nitpicking a special case and it may well be that the professor knew the boundary conditions and could immediately see that the curves would cross somewhere in the relevant range. In general, yes, this sort of there-must-exist insight is often useful on the grand overview level; it tells you what sort of solutions you should look for. I've seen it used to immediately spot an error in a theory paper, by pointing out that a particular formula gave an obviously wrong answer for a limiting case. It is perhaps more useful to theorists than experimentalists.

Not a physicist but a mathematician, but the anecdote about lightning in the first quoted paragraph sounds like everyday conversation to me.

Suppose we call a Type A Error attempting to attack a thick problem with thin methods, and a Type B Error attempting to attack a thin problem with thick methods. Not all problems are inherently thick or thin, but the discipline of economics might plausibly be a massive A-error, while continental philosophy might be a massive B-error.

What are some non-obvious heuristics for reducing errors of these sorts? (I can think of some obvious ones, but my guess is that they're so thoroughly internalized that the nonobvious heuristics will end up correcting for them as often as not.)

Suppose we call a Type A Error attempting to attack a thick problem with thin methods

As illustrated here.

And, for that matter, here; indeed, it's one of XKCD's recurring themes.

[This comment is no longer endorsed by its author]

Off topic here, but this stood out:

As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.

Similarly:

Joe Coder found a way to automate his data-entry job, increasing output and reliability by an order of magnitude. When he told management about it, he was chastised for not following procedure. The IT department then devised "another" procedure which somehow used Joe Coder's scripts.

More generally,

Low Status disagrees with High Statuses, with proof. High Statuses punish (or evict) him.

I'm probably not cynical enough, but this I do not understand. High Statuses may be afraid, but how come they're so stupid? Surely High Statuses can make better use of Low Status?

This is very interesting material, and introduces some potentially useful vocabulary. Upvoted. I'm actually working on a computational approach to a thick problem (functional grammar induction) myself, and the distinction is very applicable. Initial approaches to NLP were all thin, and they failed miserably. Thick approaches are finally starting to gain some ground, by sheer brute force of data, but may lack a few thin insights that could make them dramatically more effective.

Upvoted for the Midway example; I'm fascinated by the Pacific campaign in general, from a decision-theoretic as well as narrative standpoint.

Most problems in engineering are much "thicker" than the math-heavy instructional approaches at my schools were suited to. Oh well.