In response to: Failure by Analogy, Surface Analogies and Deep Causes
Analogy gets a bad rap around here, and not without reason. The kinds of argument from analogy condemned in the above links fully deserve the condemnation they get. Still, I think it's too easy to read them and walk away thinking "Boo analogy!" when not all uses of analogy are bad. The human brain seems to have hardware support for thinking in analogies, and I don't think this capability is a waste of resources, even in our highly non-ancestral environment. So, assuming that the linked posts do a sufficient job detailing the abuse and misuse of analogy, I'm going to go over some legitimate uses.
The first thing analogy is really good for is description. Take the plum pudding atomic model. I still remember this falsified proposal of negative 'raisins' in positive 'dough' largely because of the analogy, and I don't think anyone ever attempted to use it to argue for the existence of tiny subnuclear particles corresponding to cinnamon.
But this is only a modest example of what analogy can do. The following is an example that I think starts to show its true power: my comment on Robin Hanson's 'Don't Be "Rationalist"'. To summarize, Robin argued that, since you can't be rationalist about everything, you should budget your rationality and only be rational about the most important things; I replied that maybe rationality is like weightlifting, where your strength is finite yet increases with use. That comment is probably the most successful thing I've ever written on the rationalist internet in terms of the attention it received, including direct praise from Eliezer and a shoutout in a Scott Alexander (yvain) post, and it's pretty much just an analogy.
Here's another example, this time from Eliezer. As part of the AI-Foom debate, he tells the story of Fermi's nuclear experiments, and in particular his precise advance knowledge of when the pile would go critical.
What do the above analogies accomplish? They provide counterexamples to universal claims. In my case, Robin's inference that rationality should be spent sparingly proceeded from the stated premise that no one is perfectly rational about anything, and weightlifting was a counterexample to the implicit claim 'a finite capacity should always be directed solely towards important goals'. If you look above my comment, anon had already said that the conclusion hadn't been proven, but without the counterexample this claim had much less impact.
In Eliezer's case, "you can never predict an unprecedented unbounded growth" is the kind of claim that sounds really convincing. "You haven't actually proved that" is a weak-sounding retort; "Fermi did it" immediately wins the point.
The final thing analogies do really well is crystallize patterns. For an example of this, let's turn to... Failure by Analogy. Yep, the anti-analogy posts are themselves written almost entirely via analogy! Alchemists who glaze lead with lemons and would-be aviators who put beaks on their machines are invoked to crystallize the pattern of 'reasoning by similarity'. The post then makes the case that neural-net worshippers are reasoning by similarity in just the same way, making the same fundamental error.
It's this capacity that makes analogies so dangerous. Crystallizing a pattern can be so mentally satisfying that you don't stop to question whether the pattern applies. The antidote to this is the question, "Why do you believe X is like Y?" Assessing the answer and judging deep similarities from superficial ones may not always be easy, but just by asking you'll catch the cases where there is no justification at all.
Analogies are pervasive in thought. I was under the impression that cognitive scientists basically agree that a large portion of our thought is analogical, and that we would be completely lost without our capacity for analogy. But perhaps I've only been exposed to a narrow subsection of cognitive science, and there are many other cognitive scientists who disagree. Dunno.
But anyway I find it useful to think of analogy in terms of hierarchical modeling. Suppose you have a bunch of categories, but you don't see any relation between them. So maybe you know the categories "dog" and "sheep" and so on, and you understand both what typical dogs and sheep look like, and how a random dog or sheep is likely to vary from its category's prototype. But then suppose you learn a new category, such as "goat". If you keep categories totally separate in your mind, then when you first see a goat, you won't relate it to anything you already know. And so you'll have to see a whole bunch of goats before you get the idea of what goats are like in general. But if you have some notion of categories being similar to one another, then when you see your first goat, you can think to yourself "oh, this looks kind of like a sheep, so I expect the category of goats to look kind of like the category of sheep". That is, after seeing one goat and observing that it has four legs, you can predict that pretty much all goats also have four legs. That's because you know that number-of-legs is a property that doesn't vary much in the category "sheep", and you expect the category "goat" to be similar to the category "sheep". (Source: go read this paper, it is glorious.)
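The goat/sheep idea can be sketched in code. This is a minimal toy, not the model from the paper: the categories, features, and the `predict_new_category` helper are all hypothetical illustrations. The point is just the mechanism: a feature that barely varies in the analogous category (legs among sheep) gets a confident prediction for the new category after one example, while a high-variance feature (weight) is held loosely.

```python
# Toy sketch of analogy as hierarchical modeling (names are hypothetical).
# Each known category maps feature name -> list of observed values.
observations = {
    "dog":   {"legs": [4, 4, 4, 4], "weight_kg": [8, 20, 35, 12]},
    "sheep": {"legs": [4, 4, 4, 4], "weight_kg": [60, 75, 80, 70]},
}

def stats(values):
    """Mean and (population) variance of a feature within a category."""
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return m, v

def category_profile(cat):
    """Per-feature (mean, variance) summary of a known category."""
    return {f: stats(vals) for f, vals in observations[cat].items()}

def predict_new_category(example, analog):
    """Predict a new category from ONE example plus an analogous category:
    take the single observation as the point estimate, and borrow the
    analog's per-feature variance as our uncertainty about it."""
    profile = category_profile(analog)
    return {f: (example[f], profile[f][1]) for f in example}

# One observed goat, with "sheep" as the analogous category.
goat = predict_new_category({"legs": 4, "weight_kg": 55}, analog="sheep")
# Sheep legs never vary, so we confidently expect ~4 legs for all goats;
# sheep weight varies a lot, so the goat-weight estimate stays loose.
print(goat["legs"])       # point estimate 4, zero borrowed variance
print(goat["weight_kg"])  # point estimate 55, large borrowed variance
```

The borrowed variance is doing the work the paragraph describes: seeing one four-legged goat licenses "pretty much all goats have four legs" only because number-of-legs is a low-variance property in the category we judged similar.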
Anyway I basically think of analogy as a way of doing hierarchical modeling. You're trying to understand some situation X, and you identify some other situation Y, and then you can draw conclusions about X based on your knowledge of Y and on the similarities between the two situations. So yes, analogy is an imprecise reasoning mechanism that occasionally makes errors. But that's because analogy is part of the general class of inductive reasoning techniques.
Perhaps a better title would have been "The Correct System-II Use of Analogy", or "The Correct Use of Analogy in Intellectual Debate." What you're saying is true about day-to-day/on-the-fly thinking, but written argument requires a higher standard.