Or, sometimes, once you have developed the answer, there is a 'thin' way of confirming it.
This seems like a promising tactic in general - not just for convincing people.
Physicists of LW, would you care to weigh in on this? How well does Cochran's article describe your thought processes?
"And therefore the curves must cross" seems more like mathematical thinking to me. (Which has its uses in physics, obviously.) Certainly the curves must cross at some point, but it is not obvious (well, not to me) that they will do so within the range for which third-power decay and exponential decay are good approximations. (Obviously the electric field is nowhere infinite, so the x-inverse-cube relation cannot be intended as an exact description everywhere - it must break down near the zero.) To see that you'd have to put in the boundary conditions: The starting values and the constants of decay.
That said, I'm nitpicking a special case and it may well be that the professor knew the boundary conditions and could immediately see that the curves would cross somewhere in the relevant range. In general, yes, this sort of there-must-exist insight is often useful on the grand overview level; it tells you what sort of solutions you should look for. I've seen it used to immediately spot an error in a theory paper, by pointing out that a particular formula gave an obviously wrong answer for a limiting case. It is perhaps more useful to theorists than experimentalists.
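To make the curve-crossing nitpick concrete, here is a minimal numerical sketch. The starting values and decay constants (A, B, k below) are made up for illustration, since the anecdote gives no actual numbers; the point is only that whether the curves cross in a given range depends on exactly these boundary conditions.

```python
import numpy as np

# Hypothetical amplitudes and decay constant -- illustrative values only,
# not taken from the anecdote.
A, B = 1.0, 50.0   # amplitudes of the two fields at x = 1
k = 2.0            # exponential decay constant

x = np.linspace(1.0, 10.0, 1000)   # range where both approximations are assumed to hold
cube = A / x**3                    # third-power (inverse-cube) decay
expo = B * np.exp(-k * x)          # exponential decay

# The exponential starts larger but falls faster; look for a sign change
# in the difference to locate a crossing within the range.
sign_change = np.where(np.diff(np.sign(expo - cube)))[0]
if sign_change.size:
    print(f"curves cross near x = {x[sign_change[0]]:.2f}")
else:
    print("no crossing in this range - the 'and therefore' would not follow here")
```

With these particular constants the crossing falls inside the range; change B or k and it can move outside it, which is the whole nitpick.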
I'm not a physicist but a mathematician, and the anecdote about lightning in the first quoted paragraph sounds like everyday conversation to me.
Suppose we call attempting to attack a thick problem with thin methods a Type A error, and attempting to attack a thin problem with thick methods a Type B error. Not all problems are inherently thick or thin, but the discipline of economics might plausibly be a massive Type A error, while continental philosophy might plausibly be a massive Type B error.
What are some non-obvious heuristics for reducing errors of these sorts? (I can think of some obvious ones, but my guess is that they're so thoroughly internalized that the nonobvious heuristics will end up correcting for them as often as not.)
Off topic here, but this stood out:
As so often happens, OP-20-G won the bureaucratic war: Rochefort embarrassed them by proving them wrong, and they kicked him out of Hawaii, assigning him to a floating drydock.
Similarly:
Joe Coder finds a way to automate his data entry job, increasing output and reliability by an order of magnitude. When he tells management about it, he is chastised for not following procedure. The IT department then devises "another" procedure which somehow uses Joe Coder's scripts.
More generally,
Low Status disagrees with High Statuses, with proof. High Statuses punish (or evict) him.
I'm probably not cynical enough, but this I do not understand. High Statuses may be afraid, but how come they're so stupid? Surely High Statuses could make better use of Low Status?
This is very interesting material, and introduces some potentially useful vocabulary. Upvoted. I'm actually working on a computational approach to a thick problem (functional grammar induction) myself, and it's very applicable. Initial approaches to NLP were all thin, and they were a miserable failure. Thick approaches are finally starting to gain some ground, by sheer brute force of data, but may lack a few thin insights that could make them dramatically more effective.
Upvoted for the Midway example; I'm fascinated by the Pacific campaign in general, from a decision-theoretic as well as narrative standpoint.
Most problems in engineering are much "thicker" than the math-heavy instructional approach at my schools was suited to. Oh well.
An interesting new entry on Gregory Cochran's and Henry Harpending's well-known blog, West Hunter. For me, the information I gained from the LessWrong articles on inferential distances complemented it nicely. Link to source.