Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"
Oh no, I wonder if I ever made that mistake.
Security Mindset
Hmm, no, I think I understand that point pretty well...
They listed out the main ways in which an AI could kill everyone (pandemic, nuclear war, chemical weapons) and decided none of those would be particularly likely to work
Definitely not it, I have a whole rant about it. (Come to think of it, that rant also covers the security-mindset thing.)
They perform an EFA to decide which traits to look for, and then they perform an EFA over different "theories of consciousness" in order to try and calculate the relative welfare ranges of different animals.
I don't think I ever published any EFAs, so I should be in the clear here.
The Fatima Sun Miracle
Oh, I'm not even religious.
Phew! I was pretty worried there for a moment, but no, looks like I know to avoid that fallacy.
This does feel like a nearby fallacy of denying specific examples, which maybe should have its own post.
Explanation
(The post describes a fallacy where you rule out a few specific members of a set using properties specific to those members, and proceed to conclude that you've ruled out that entire set, having failed to consider that it may have other members which don't share those properties. My comment takes specific examples of people falling into this fallacy that happened to be mentioned in the post, rules out that those specific examples apply to me, and proceeds to conclude that I'm invulnerable to this whole fallacy, thus committing this fallacy.
(Unless your comment was intended to communicate "I think your joke sucks", which, valid.))
What exactly do you propose that a Bayesian should do, upon receiving the observation that a bounded search for examples within a space did not find any such example?
(I agree that it is better if you can instead construct a tight logical argument, but usually that is not an option.)
I also don't find the examples very compelling:
I will admit my bias: I hope the visions of Fatima were untrue, and therefore I must also hope the Miracle of the Sun was a fake. But I’ll also admit this: at times when doing this research, I was genuinely scared and confused. If at this point you’re also scared and confused, then I’ve done my job as a writer and successfully presented the key insight of Rationalism: “It ain’t a true crisis of faith unless it could go either way”.
[...]
I don’t think we have devastated the miracle believers. We have, at best, mildly irritated them. If we are lucky, we have posited a very tenuous, skeletal draft of a materialist explanation of Fatima that does not immediately collapse upon the slightest exposure to the data. It will be for the next century’s worth of scholars to flesh it out more fully.
Overall, I'm pleasantly surprised by how bad these examples are. I would have expected much stronger examples, since on priors I expected that many people would in fact follow EFAs off a cliff, rather than treating them as evidence of moderate but not overwhelming strength. To put it another way, I expected that your FA on examples of bad EFAs would find more and/or stronger hits than it actually did, and in my attempt to better approximate Bayesianism I am noticing this observation and updating on it.
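As a rough sketch of the size of that update (my own illustrative numbers, not anything from the post): let $H$ be "the space really contains no further members" and $E$ be "a bounded search turned up none". If a search of this quality would have found an existing member with probability $q$, then

$$\frac{P(E \mid H)}{P(E \mid \neg H)} = \frac{1}{1-q},$$

i.e. a likelihood ratio of about 2 for $q = 0.5$ and about 10 for $q = 0.9$. Real evidence, but a factor of 2 or 10 won't take you from even odds to below 1%; for that you'd need a likelihood ratio of roughly 100.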
It depends on the properties of the bounded search itself.
I.e., if you are a properly calibrated domain expert who can make 200 statements on a topic, each with an assigned probability of 0.5%, and be wrong about one of them on average, then, when you arrive at a probability of 0.5% as the result of your search for examples, we can expect that your search space was adequate and not oversimplified, such that your result is not meaningless.
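(For concreteness, the arithmetic behind that calibration claim: the expected number of misses among the 200 statements is $200 \times 0.005 = 1$, so being wrong about one of them on average is exactly what calibration at the 0.5% level predicts.)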
If you operate in a confusing, novel, adversarial domain, especially when the domain is "the future", then when you find yourself assigning a probability of 0.5% for any reason that is not literally a theorem or a physical law, your default move should be to say "wait, this probability is ridiculous".
As an aside, the formalisms that deal with this properly are not Bayesian; they are formalisms for nonrealizable settings. See Diffractor and Vanessa's work, e.g. this: https://arxiv.org/abs/2504.06820v2
Also, my experience with actual superforecasters, as opposed to people who forecast in EA spaces, has been that this failure mode is quite common, and problematic, even outside of existential risk - for example, in forecasts during COVID, especially early on.
I think the phrase 'Proof by lack of imagination' is sometimes used to describe this (or a close cousin).
Interesting! I have a post cooking somewhere in the guf of my brain which is something like "How Much Imagination Should We Use?". A "Proof by Lack of Imagination" is dual to a "Proof by Excess of Imagination".
Example: suppose someone says "I can imagine an atomic copy of ourselves which isn't conscious, therefore consciousness is non-physical." and I say "No, I can't imagine that."
On the one hand, a good model of the world should prevent me from imagining things which aren't possible. If I have a solid idea of what acidic and alkaline solutions are, I should have great difficulty imagining a solution which is both at once! But on the other hand, it is a good thing if my model can imagine things which are possible, like a power-plant which harvests energy from a miniature sun.
I've been thinking about things from a pedagogical angle a lot lately, which is why this post is phrased in the specific way it is. I would like to come to a better conclusion of how to move disagreements forward. If I say to someone "It seems quite easy to imagine an AI smart enough to take over the world using a medium-bandwidth internet connection" and someone says "No I can't imagine that at all", then it seems like we just reach an impasse immediately! How do we move forward?
Example: suppose someone says "I can imagine an atomic copy of ourselves which isn't conscious, therefore consciousness is non-physical." and I say "No, I can't imagine that."
Or the followup by Logan Strohl, even more directly on this
If I have a solid idea of what acidic and alkaline solutions are, I should have great difficulty imagining a solution which is both at once!
Or at least one that is both at once for more than a very short period of time, as the relevant ions will quickly react to make "neutral" water until one runs out...
Uhhh either I forgot the word and it means nothing, or it's (roughly) the place where souls live before they get born in Judaism.
gguf (Georgi Gerganov's Unified Format) from ggbl (Georgi Gerganov Brain Learning), which is a file format and associated software for running and storing brain images. you can use it via, eg, lemur.cpp (Lemur stands for the common term LEmuR, Large Emulation Runner, which runs brain images). J Bostock is referring to the file that stores the upload of their brain. /j
One key question is where this argument fails - because as noted, superforecasters are often very good, and most of the time, listing failure modes or listing what you need is effective.
I think the answer is adversarial domains. That is, when there is an explicit pressure to find other alternatives. The obvious place this happens is when you're actually facing a motivated opponent - like the scenario of AI trying to kill people, or cybersecurity intrusions. That's because by construction, the blocked examples don't contain much probability mass, since the opponent is actually blocked, and picks other routes. When there's an argument, the selection of arguments and the goal of the arguer is often motivated beforehand, and will pick other "routes" in the argument - and really good arguers will take advantage of this, as noted. And this is somewhat different from the Fatima Sun Miracle, where the selection pressure for proofs of God was to find examples of something they couldn't explain, and then use that, rather than selection on the arguments themselves.
In contrast, what Rethink did for theories of consciousness seems to be different - there's no a priori reason to think that most of the probability mass lies outside of what we think about, since how consciousness works is not understood, but the domain is not adversarial. And moving away from the point of the post, the conclusion should be that we know we're wrong, because we haven't dissolved the question, but we can try our theories since they seem likely to be at least near the correct explanation, even if we haven't found it yet. And using heuristics, "just read the behavioural observations on different animals and go off of vibes" rather than theories, when you don't have correct theories, is a reasonable move, but also a completely different discussion!
A bunch of superforecasters were asked what their probability of an AI killing everyone was. They listed out the main ways in which an AI could kill everyone (pandemic, nuclear war, chemical weapons) and decided none of those would be particularly likely to work, for everyone.
As someone who participated in that XPT tournament, that doesn't match what I encountered. Most superforecasters didn't list those methods when they focused on AI killing people. Instead, they tried to imagine how AI could differ enough from normal technology that it could attempt to start a nuclear war, and mostly came up with zero ways in which AI could be powerful enough that they should analyze specific ways in which it might kill people.
I think Proof by Failure of Imagination describes that process better than does EFA.
I believe that in Thinking, Fast and Slow, Kahneman refers to this fallacy as "What You See Is All There Is" (WYSIATI). And it used to be common for people to talk about "Unknown Unknowns" (things you don't know, that you also don't know you don't know).
Non-Exhaustive Free Association or Attempted Exhaustive Free Association seems like a more accurate term?
Edit: oh oops, @Mateusz Bagiński beat me to it. Convergence!
I think that an attempted EFA is a strong argument and I think people should usually take it seriously.
I can see 2 reasons why you should remain mostly unconvinced by an EFA:
When neither of these reasons applies, I think skepticism against an EFA is unwarranted.
I lean towards all epistemic environments being adversarial unless proven otherwise based on strong outside-view evidence (e.g. your colleagues at a trading firm, who you regularly see trading successfully using strategies they freely discuss with you). Maybe I'm being too paranoid, but I think that the guf in the back of your mind is filled with memetic tigers, and sometimes those sneak out and pounce into your brain. Occasionally, they turn out to be excellent at hunting down your friends and colleagues as well.
An adversarial epistemic environment functions similarly to a normal adversarial environment, but in reverse. Instead of any crack in your code (therefore, a crack in the argument that your code is secure) being exploitable, the argument comes into your head already pre-exploited for maximum memetic power. And using an EFA is one way to create a false argument that's highly persuasive.
I also think that, in the case where the EFA turns out to be correct, it's not too hard to come up with supporting evidence. Either a (good) reference-class argument (though beware any choice of reference class!) or some argument as to why your search really is exhaustive.
Good post!
Why did you call it "exhaustive free association"? I would lean towards something more like "arguing from (falsely complete) exhaustion".
Re it being almost good reasoning, a main thing making it good reasoning rather than bad reasoning is having a good model of the domain so that you actually have good reasons to think that your hypothesis space is exhaustive.
An argument in favor of it is, "free association" is inherently a fuzzy human thing, where the process is just thinking for a bit and seeing what you come up with and at some point declaring victory; there is nothing in it that could possibly guarantee correctness. Arguably, anyone who encounters the term should be conscious of this, and therefore notice that it's an inappropriate step in a logical argument that purports to establish high certainty. Perhaps even notice that the term itself is paradoxical: in a logical context, "exhaustion" must be a rigorous process, but "free association" is inherently unrigorous.
I'm not sure if I buy the argument. The author of "The Design of Everyday Things" warns against being too clever with names and assuming that normal people will get the reference you intend. But... I dunno.
If you don't have a systematic way of iterating your ideas, your method of generating ideas is just free-association. So making an argument from "exhaustive free-association" means you're arguing that your free association is exhaustive. Which it never is.
You named it in such a way as to imply that the free-association was exhaustive this time though. You absolutely did that.
I would call this "false exhaustiveness" or "illusory exhaustiveness", else the name contains no criticism, and even mildly implies true exhaustiveness via free association (an implication invalidated by free association being usually incomplete and thus not locally valid, but I predict this matters for describing fallacies to folks not used to thinking truly exhaustively.) Also, the exhaustiveness need not be via free association. My comment is a bid for you to edit the post, because I think we should standardize on a slightly clearer name.
Claude suggested a number of existing names, but they're ones others already mentioned and don't specifically call out the illusion produced by listing many items.
It seems likely to my intuition that False Exhaustiveness is a subset of Argument from Ignorance.
edit: added missing paren, second paragraph break
This does seem to be the form of argument used when it is demanded that someone prove a negative. I'm surprised no one has brought that up elsewhere in the comments.
It is incumbent on someone making a positive statement to provide evidence. When someone doesn't provide enough evidence for a positive claim, critics will point that out, but "there is insufficient evidence" can only be repeated so many times.
People continue to press "Well, prove why [positive claim] isn't the case!" and the critic's only response is to iterate through all the positive arguments made for [positive claim] and show they are insufficient.
Considered as a logical fallacy, it's a form of hasty generalization.
But it's interesting to call out specifically the free-associative step; the attempt to brainstorm possibilities, nominally with the intent of checking them. That's a step where we can catch ourselves and say, "Hey, I'm doing that thing. I should be careful that I actually know what territory I'm mapping before I try to exhaustively map it out."
Going to the AI Doom example:
A bunch of superforecasters were asked what their probability of an AI killing everyone was. They listed out the main ways in which an AI could kill everyone (pandemic, nuclear war, chemical weapons) and decided none of those would be particularly likely to work, for everyone. They ended up giving some ridiculously low figure, I think it was less than one percent. Their exhaustive free association did not successfully find options like "An AI takes control of the entire supply chain, and kills us by heating the atmosphere to 150 C as a by-product of massive industrial activity."
The point at which reasoning has gone astray is in what sort of possibilities are being listed-out. The set {pandemic, nuclear war, chemical weapons} seems to be drawn from the set of known natural and man-made disasters: things that have already killed a lot of people in the past.
But the set that should be listed-out is actually all conditions that humans depend on to live. This includes a moderate-temperature atmosphere, land to grow food on, and so forth. Negating any one of those kills off the humans, regardless of whether it looks like one of those disasters we've survived before.
The forecaster is asking, "For each type of disaster, could runaway AI create a disaster of that type sufficient to kill all humans?" And they think of a bunch of types of disaster, decide that each wouldn't be bad enough, and end up with a low P(doom).
But the question should really be, "For each material condition that humans depend on to live, could runaway AI alter that condition enough to make human life nonviable?" So instead of listing out types of known disaster, we list out everything humans depend on — like land, clean water, breathable atmosphere, and so on. Then for each one, we ask, "Is there some way an unaligned optimizer running on our planet could take this thing away?"
Can a runaway AI take all the farmland away? Well, how could a particular piece of land be taken away? Buy it. Build a datacenter on it. Now that land can't be used for farming because there's a big hot datacenter on it. Do people buy up land and build datacenters on it today? Sure, and sometimes to the annoyance of the neighbors. What do you need to do that? Money. Can AI agents earn money from trade with humans, or with other AI agents? They sure can. Could AI agents be sufficiently economically dominant that they can literally outbid humanity for ownership of all the land? Hmm... that's not as easy as "could it cause a nuclear war big enough to kill everyone."
So the advice here could be summed up as something like: When you notice that you're brainstorming an exhaustive list of cases, first check that they're cases of the right nature.
I don't think this is a fallacy. If it were, one of the most powerful and common informal inference forms (IBE a.k.a. Inference to the Best Explanation / abduction) would be inadmissible. That would be absurd. Let me elaborate.
IBE works by listing all the potential explanations that come to mind, subjectively judging how good they are (with explanatory virtues like simplicity, fit, internal coherence, external coherence, unification, etc) and then inferring that the best explanation is probably correct. This involves the assumption that the probability is small that the true explanation is not among those which were considered. Sometimes this assumption seems unreasonable, in which case IBE shouldn't be applied. That's mostly the case if all considered explanations seem bad.
However, in many cases the "grain of truth" assumption (the true explanation is within the set of considered explanations) seems plausible. For example, I observe the door isn't locked. By far the best (least contrived) explanation I can think of seems to be that I forgot to lock it. But of course there is a near infinitude of explanations I didn't think of, so who is to say there isn't an unknown explanation which is even better than the one about my forgetfulness? Well, it just seems unlikely that there is such an explanation.
And IBE isn't just applicable to common everyday explanations. For example, the most common philosophical justification that the external world exists is an IBE. The best explanation for my experience of a table in front of me seems to be that there is a table in front of me. (Which interacts with light, which hits my eyes, which I probably also have, etc.)
Of course, in other cases, applications of IBE might be more controversial. However, in practice, if Alice makes an argument based on IBE, and Bob disagrees with its conclusion, this is commonly because Bob thinks Alice made a mistake when judging which of the explanations she considered is the best. In which case Bob can present reasons which suggest that, actually, explanation x is better than explanation y, contrary to what Alice assumed. Alice might be convinced by these reasons, or not, in which case she can provide the reasons why she still believes that y is better than x, and so on.
In short, in many or even most cases where someone disagrees with a particular application of IBE, their issue is not with IBE itself, but what the best explanation is. Which suggests the "grain of truth" assumption is often reasonable.
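To make the "grain of truth" structure explicit (a rough decomposition in my own notation, nothing more rigorous than that): the best considered candidate can only be true if the true explanation is among those considered, so

$$P(\text{best candidate is true}) = P(\text{grain of truth}) \times P(\text{best candidate is true} \mid \text{grain of truth}),$$

and the strength of any particular IBE is bounded by that first factor, which is exactly the assumption discussed above.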
Most examples of bad reasoning that are common amongst smart people are almost good reasoning. Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen
Well, that's clearly almost always impossible (there are almost infinitely many possible explanations for almost anything), so we can't make an exhaustive list. Moreover, "should" implies "can", so, by contraposition, if we can't list them, it's not the case that we should list them.
, or at least manage to grapple with most of the probability mass.
But that's backwards. IBE is a method which assigns probability to the best explanation based on how good it is (in terms of explanatory virtues) and based on being better than the other considered explanations. So IBE is a specific method for coming up with probabilities. It's not just stating your prior. You can't argue about purely subjective priors (that would be like arguing about taste) but you can make arguments about what makes some particular explanation good, or bad, or better than others. And if you happen to think that the "grain of truth" assumption is not plausible for a particular argument, you can also state that. (Though the fact that this is rather rarely done in practice suggests it's in general not such a bad assumption to make.)
Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time.
This description skips over the fallacy part of the fallacy. On its own, the sentence in quotes sounds like a potentially productive contribution to a discussion.
Often related to maneuver warfare as well, i.e. making you expend more energy than your opponent does by putting the onus of refutation on you.
Some of this might be conflation between within-model predictions and overall predictions that account for model uncertainty and unknown unknowns. Within-model predictions are in any case very useful as exercises for developing/understanding models, and as anchors for overall predictions. So it's good actually (rather than a problem) when within-model predictions are being made (based on whatever legible considerations come to mind), including when they are prepared as part of the context before making an overall prediction, even for claims/predictions that are poorly understood and not properly captured by such models.
The issue is that when you run out of models and need to incorporate unknown unknowns, the last step that transitions from your collection of within-model prediction anchors to an overall prediction isn't going to be legible (otherwise it would just be following another model, and you'd still need to take that last step eventually). It's an error to give too much weight to within-model anchors (rather than some illegible prior) when the claim/prediction is overall poorly understood, but also sometimes the illegible overall assessment just happens to remain close to those anchors. And even base rates (reference classes) are just another model; that model shouldn't claim to be the illegible prior at the end of this process, not when the claim/prediction remains poorly understood (and especially not when its understanding explicitly disagrees with the assumptions for base rate models).
So when you happen to disagree about the overall prediction, or about the extent to which the claim/prediction is well-understood, a prediction that happens to remain close to the legible anchors would look like it's committing the error described in the post, but it's not necessarily always (or often) the case. The only way to resolve such disagreements would be by figuring out how the last step was taken, but anything illegible takes a book to properly communicate. There's not going to be a good argument, for any issue that's genuinely poorly understood. The trick is usually to find related but different claims that can be understood better.
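Schematically (my own notation, and only a schematic, since by assumption the last step isn't legible): an overall prediction looks something like

$$\hat{p} = \sum_i w_i\, p_i + w_u\, p_u, \qquad \sum_i w_i + w_u = 1,$$

where the $p_i$ are within-model anchors and $p_u$ is the illegible prior covering unknown unknowns. The error described above is setting $w_u$ too low, or quietly treating some legible model (e.g. a base rate) as if it were $p_u$.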
I personally just model errors like that as "projection". The error here is "I can't think of any more possibilities, therefore, more possibilities do not exist". It's very common for people to assume that other things are bounded by the same limitations as they are. The concept of "unknown unknowns" is related here as well.
More generally, when people talk about life and reality, they talk about themselves, even if they do not realize it. They assume their map is the territory. For instance, if a person says "Life is suffering", that may be true for them, and every counter-argument they hear may even evaluate to false in their model of reality, but that still doesn't mean it's true for everyone.
Another comment mentioned "Proof by failure of imagination" and I like that name, since the fallacy is an error which occurs in a person. When we say "logical error" we don't mean that there's an error in logic itself, but in its use. If something is implicit for long enough, we risk forgetting it (I think this happened to morality: now certain things are considered good in an absolute sense, rather than in a context).
If somebody uses a "proof by contradition", then a single example is enough (∃), but this argument is in the other direction, so one needs to show that something is true for all examples (∀) and not just some (∃). The only reason I can think of that somebody would make this error, is that they consider the examples they thought of to be "the best". If you can refute the best argument for why something would happen, it's easier to assume that it won't (I guess this is what steelmanning is?). This method works fine for smaller problem spaces, but quickly grows useless because of the inherent asymmetry between attacking and defending
I've noticed an antipattern. It's definitely on the dark Pareto frontier of "bad argument" and "I see it all the time amongst smart people". I'm confident it's the worst common argument I see amongst rationalists and EAs. I don't normally crosspost to the EA forum, but I'm doing it now. I call it Exhaustive Free Association.
Exhaustive Free Association is a step in a chain of reasoning where the logic goes "It's not A, it's not B, it's not C, it's not D, and I can't think of any more things it could be!"[1] Once you spot it, you notice it all the damn time.
Since I've most commonly encountered this amongst rat/EA types, I'm going to have to talk about people in our community as examples of this.
Here are a few examples. These are mostly for illustrative purposes, and my case does not rely on me having found every single example of this error!
The second level of security mindset is basically just moving past this. It's the main thing here. Ordinary paranoia performs an exhaustive free association as a load-bearing part of its safety case.
A bunch of superforecasters were asked what their probability of an AI killing everyone was. They listed out the main ways in which an AI could kill everyone (pandemic, nuclear war, chemical weapons) and decided none of those would be particularly likely to work, for everyone. They ended up giving some ridiculously low figure, I think it was less than one percent. Their exhaustive free association did not successfully find options like "An AI takes control of the entire supply chain, and kills us by heating the atmosphere to 150 C as a by-product of massive industrial activity."
Clearly, they did something wrong. And these people are smart! I'll talk later about why this error is so pernicious.
Yeah I'm back on these guys. But this error is all over the place here. They perform an EFA to decide which traits to look for, and then they perform an EFA over different "theories of consciousness" in order to try and calculate the relative welfare ranges of different animals.
The numbers they get out are essentially meaningless, to the point where I think it's worse to look at those numbers than just read the behavioural observations on different animals and go off of vibes.
See an argument here. The author raises and knocks down an extremely long list of possible non-god explanations for a miracle, including hallucinating children, and demons.
I'm going to treat the actual fact-of-the-matter as having been resolved by Scott Alexander here. Turns out, there's a weird visual effect you get when you look at the sun, sometimes, which people have reported in loads of different scenarios.
Most examples of bad reasoning that are common amongst smart people are almost good reasoning. Listing out all the ways something could happen is good, if and only if you actually list out all the ways something could happen, or at least manage to grapple with most of the probability mass.
So the most dangerous thing about this argument is how powerful it is. Internally and externally. Those superforecasters managed to fool themselves with an argument resting on an exhaustive free-association. Ethan Muse has been described as "supernaturally persuasive".
If you can't see the anti-pattern, I can see why this would be persuasive! Someone deploying an exhaustive free association looks, at first glance, to be using a serious, knock-down argument.
And in fact, for normal, reference-class-based things, this usually works! Suppose you're planning a party. You've got a pretty good shot at listing out all the things you need, since you've been to plenty of parties before. So an exhaustive free-association can look, at first glance, like an expert with loads of experience. This requires you to follow the normal rules of reference-class forecasting.[2]
Secondly, it's both locally valid and very powerful, if you can perform an exhaustive search. You have to do a real exhaustive search, like "We proved that this maths problem reduces to exactly 523 cases and showed the conjecture holds for all of them". But in this case, the hard part is the first step, where you reduce your conjecture from infinitely many cases to 523, which is a reduction by a factor of infinity.
Thirdly, there are lots of people who haven't quite shaken themselves out of student mode. You'll come across lots of arguments, at school and at university, which closely resemble an exhaustive free association. And most of them are right!
Lastly, an exhaustive free association is intimidating! It is a powerful formation of soldiers. I'm fully aware that, if I had a public debate with someone like Ethan Muse, the audience would judge him more persuasive. That man has spent an enormous amount of time thinking of arguments and counter-arguments. Making the point that "Actually there probably exists a totally naturalistic explanation which I cannot currently think of" sounds ridiculous!
An Exhaustive Free Association can also be laid as a trap: suppose I mull over a few possible causes, arguments, or objections. I bring them up, but my interlocutor has already prepared the counter-argument for each. In a debate, I am humbled and ridiculed. (Of course, if I am honestly reading something, I duly update as follows: "Hmm, the first three ideas I had were shot down instantly, what chance is there that my later arguments come up correct?") Of course, the only reason we're discussing this particular case is because my interlocutor's free association failed to turn anything up!
Sure, stay in scout mindset as much as you can. But if you notice someone massing forces into formations (especially one like this) perhaps you should worry more about their mindset than your own.
Noticing an exhaustive free association is only part of the battle. It's not even a big part. You have to have the courage to point it out, and then you have to decide whether or not it's worth your time. Do you spend hours picking over the evidence to find the true point where their argument fails, or do you give up?
Hopefully, you now have a third option. "This argument appears to depend on an exhaustive free association," you mutter to yourself, or perhaps to some close friends. You will come back to deal with it later, if you have the time.
Of course, crying "Exhaustive Free Association!" is a rather general counter-argument. I would be remiss to post this without giving some hint at a defence to this defence. One method is reference classes. If you wish to dive into that mud-pit, so be it. Another method is simply to show that your listing is exhaustive, through one means or another.
But honestly? The best defence is to make your argument better. If you're relying on something which looks like an exhaustive free association, your first worry should be that it actually is! Invert the argument to a set of assumptions. Go up a level to a general principle. Go down a level to specific cases.
Apparently I'm writing in verse now. Well, in for a penny, in for a pound I suppose!
It's not A, it's not B, it's not C, it's not D,
And I can't think of any more things it could be!
Cried the old furry fox-frog who lived in the swamp,
As he heard far-off trees falling down with a "thwomp".
For there's been no great storm which might break all these boughs,
And I've not heard a great herd of elephant-cows,
Which sometimes come this way with trumpety-moos,
And knock down a few trees with their big grey-black hooves,
And I've felt not a rumble or tremor at all,
And the river's been running quite low since last fall!
So the noises that keep me up, when I'm in bed,
No they just can't be real, they must be in my head!
So the old furry fox-frog picked up his old cane,
And he walked to the house of the heron-crow-crane
And they both puzzled over the noise in his brain,
And of course, both were flattened, when the steamrollers came.
That is, your new item must have only one obvious reference class, or at least one obvious best, most specific, reference class. "Car" is better than "Mode of transport" for predicting the properties of a Honda; "Herbivore" vs "Mode of transport" might produce different, conflicting predictions about the properties of a horse.