Even if it is "just a synonym", it does not imply that we should shift terminology. Terminology is not just about definition (denotation), it is also about implication (connotation).
As others have pointed out, "mechanistic" and "reductionist" have unwanted connotations, while "gears-level" has only the connotations the community gives it... along with the intuitive implication that it's a model that is specific enough that you could build it, that you would need to know what gears exist and how they connect. (In contrast, it's much easier to say that a model is mechanistic or reductionist, without it actually being, well, gears-level!)
Between the lack of pre-existing negative connotations and the intuition pump, there seems to me to be more than enough value to use the term in preference over the other words, even if it were an exact synonym!
Also, I feel like the mental picture of gears turning is far more telling than the picture of a "mechanism".
I see an implicit premise I disagree with about the value of improving communication within the rationalist community vs. between rationalists and outsiders; it seems like I think the latter is relatively more important than you do.
I for one don't plan on using "mechanistic" where I currently talk about "gears-like", simply because I know what intuition the latter is pointing at but I'm much less sure about the former. Maybe down the road they'll turn out to be equivalent. But I'll need to see that, and why, before it'll make sense for me to switch. Sort of like needing to see and grok a math proof that two things are equivalent before I feel comfortable using that fact.
Not that I determine how Less Wrong does or doesn't use this terminology. I'm just being honest about my intentions here.
A minor aside: To me, "gears-level" doesn't actually make sense. I think I used to use that phrasing, but it now strikes me as an incoherent metaphor. Level of what? Level of detail of the model? You can add a ton of detail to a model without affecting how gears-like it is. I think it's self-referential in roughly the style of "This quoted sentence talks about itself." I think it's intuitively pointing at how gears-like a model is, and on the scale of "not very many gears at all" to "absolutely transparently made of gears", it's on a level where we can talk about how the gears interact.
That said, there is a context in which I'd use a similar phrase and I think it makes perfect sense. "Can we discuss this model at the gears level?" That feels to me like we're talking about a very gears-like model already but we aren't yet examining the gears.
I interpret the opening question being about whether the property of being visibly made of gears is the same as "mechanistic". I think that's quite plausible, given that "mechanistic" means "like a mechanism", which is a metaphor pointing at quite literally a clockwork machine made of literal physical gears. The same intuition seems to have inspired both of them.
But as I said, I await the proof.
Same. I think there's a tendency to see superficially similar names and say "ah, these are the same concept, we should use commonly used phrases to refer to them", which sometimes misses the nuances that the new concept was actually aiming at.
There's a tag for gears-level, and in the original post it looks like everyone in the comments was confused even then about what "gears-level" meant; in particular, a lot of non-overlapping definitions were given. The author, Valentine, also expresses confusion.
The definition given, however, is:
1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?
I'm not convinced that's how people have been using it in more recent posts, though. I think the one upside is that "gears-level" is probably easier to teach than "reductionist", but contingent on someone knowing the word "reductionism" it is clearly simpler to just use that word. In the history of the tag, there was also previously "See also: Reductionism" with a link.
In the original post, I think Valentine was trying to get at something complex/not fully encapsulated by an existing word or short phrase, but it's not clear to me that it was well communicated to others. I would be down for tabooing "gears-level" as a (general) term on LessWrong. I can't think of an instance after the original where someone used the term "gears-level" to not mean something more specific, like "mechanistic" or "reductionist".
That said, given that I don't think I really understand what was meant by "gears-level" in the original, I would ideally like to hear from someone who thinks they do before we settle on suitable replacements; someone like Valentine or brook in particular. If there were no objections, maybe clean up the tag by removing it and/or linking to other related terms.
Both of these would be clearer if replaced by "causal". That is what they are both talking about: causes and effects.
I have noticed that a lot of people are reluctant to talk about causation, on LessWrong and elsewhere, ever since Hume (who was confused on the matter). Even in statistics, where causal analysis is nowadays a large field, time was when you couldn't talk about causation in statistical papers, and had to disguise causal analysis as the "missing data problem". Neither Causal Decision Theory nor Evidential Decision Theory works as a naturalised decision theory, yet the former is criticised more harshly for failing on Newcomb's Problem than the latter is for its own failures.
I dunno, I just used "gears" about a totally acausal (but still logical) relationship yesterday.
I don't think this works. There are many cases where the gears-level model is causal and the policy level is not, but it's not the same distinction, and there are cases where they come apart.
E.g., suppose someone claims to have proven P ≠ NP. You can have a policy-level take on this, say "Scott Aaronson thinks it's correct, therefore I believe it", or a gears-level model, e.g., "I've read the proof and it seems solid". But neither of them is causal. It doesn't even make sense to talk about causality for mathematical facts.
I think the best pointer for gears-level as it is used nowadays is John Wentworth's post Gears vs Behavior. And in this summary comment, he explicitly says that the definition is the opposite of a black box, and that gears-level vs black box is a binary distinction.
Gears-level models are the opposite of black-box models.
[...]
One important corollary to this (from a related comment): gears/no gears is a binary distinction, not a sliding scale.
As for the original question, I feel that "mechanistic" can be applied to models that are just one neat equation with no moving parts, such that you don't know how to alter the equation when the underlying causal process changes.
If "mechanistic" indeed means the opposite of black-box, then in principle we could use it to replace "gears-level model".
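For concreteness, here is a minimal toy sketch of that distinction (my own illustration, not anything from Wentworth's post), assuming a hypothetical gearbox whose output speed we want to predict:

```python
# Two models of the same hypothetical gearbox. Both predict output speed from
# input speed, but only one of them tells you what to change when the
# mechanism inside changes.

def black_box_model(input_rpm):
    """One neat fitted equation: output = 0.25 * input.
    It predicts fine, but if a gear inside were swapped, we would have
    no idea how to alter the 0.25."""
    return 0.25 * input_rpm

def gears_model(input_rpm, teeth=(10, 20, 40)):
    """Explicit internal structure: a train of meshing gears, described by
    their tooth counts. The overall ratio is derived rather than fitted,
    so changing one tooth count tells us exactly how the prediction changes."""
    rpm = input_rpm
    for driver, driven in zip(teeth, teeth[1:]):
        rpm *= driver / driven  # each mesh scales speed by the tooth ratio
    return rpm

print(black_box_model(100))            # 25.0
print(gears_model(100))                # 25.0 -- same behaviour on this input...
print(gears_model(100, (10, 20, 80)))  # 12.5 -- ...but the gears model handles "what if?"
```

The first version is the "one neat equation" case above; the second is what I understand "no black boxes" to be gesturing at.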
Huh. That's a neat distinction. It doesn't feel quite right, and in particular I notice that in practice there absolutely super duper very much is a sliding scale of gears-ness. But the "no black box" thing does tie together some things nicely. I like it.
A simple counterpoint: There's a lot of black box in what a "gear" is when you talk about gears in a box. Are we talking about physical gears operating with quantum mechanics to create physical form? A software program in which the gears are basically data structures? A hypothetical universe in which things actually magically operate according to classical mechanics, and things like mass just inherently exist without a quantum infrastructure? And yet, we can and do black-box that level in order to have a completely gears-like model of the gears-in-a-box.
My guess is you have to fuse this black box thing with relevance. And as John Vervaeke points out, relevance is functionally incomputable, at least for humans.
Isn't mechanistic specifically about physical properties? Could you say that an explanation of a social phenomenon is "mechanistic", even though it makes zero references to physical reality?
Normally one might be tempted to generalise that when you know something you also know the surrounding "topic". However, there are cases where this is lacking. There are such things as zero-knowledge proofs. Also, any reductio ad absurdum (assume not-p; derive q and not-q from not-p; therefore p) is going to be very silent about small alterations to the claims.
Also, dismissing perpetual motion machines because you believe energy is conserved will make no particular claim about what the issue is with this particular scheme. This can be rigorous and robust, which might be a lot of what people often shoot for with "mechanistic", but it is general and fails to be particular, and thus not gears-level (it kind of concretely doesn't care whether the machine in question even has gears or not).
White box or transparent box model, as opposed to a black box model.
"Mechanistic" and "reductionist" have somewhat poor branding, and this assertion is based on personal experience rather than rigorous data. Many people I know will associate "mechanistic" and "reductionist" with negative notions, such as "life is inherently meaningless" or "living beings are just machines", etcetera.
Wording matters and I can explain the same idea using different wording and get drastically different responses from my interlocutor.
I agree that "gears-level" is confusing to someone unfamiliar with the concept. Naming is hard. A better name could be "precise causal model".
But mechanistic world models do suggest that meaning in a traditional (mystical? I can’t really define it, as I find the concept itself incoherent) sense does not (and cannot) exist; so I think the “negative” connotations are pretty fair, it’s just that they aren’t that negative or important in the first place. (“Everything adds up to normalcy.”) Rebranding is still a sound marketing move, of course.
In some sense, yeah, "life is inherently meaningless" and "living beings are just machines." However, I am still struggling to wrap my head around the objectivity of aesthetics, meaning and morality. Information is now widely considered physical (refer to papers by R. Landauer and D. Deutsch). Maybe someday we will once and for all incorporate aesthetics, meaning and morality under physicalism. If minds are physical, and aesthetics, purposes, and morality are real aspects of minds, then wouldn't that imply they are objective notions? And thus not "meaningless"?
This is a gnarly rabbit hole, and I am not qualified to talk about this topic. I recently read Parfit's "Reasons and Persons" to gain a deeper grasp of these topics and it's a stunning and precious book, but I need to do more work to understand all this. I may have to read his magnum opus "On What Matters" to wrap my head around this. We don't have a proper understanding of minds at this point in time. Developing robust theories about rationality, morality, aesthetics, desires, etc., necessitates actually understanding minds.
As you've pointed out, marketing matters. In my view, this is part of the reason why epistemic and instrumental rationalities are distinct aspects of rationality as defined in the sequences. If your goal is to explain an idea to your interlocutor and you can convey the same truth using different wording, with one wording leading to mutual understanding and the other leading to obstinacy, then the instrumentally rational thing to do would be to use the former wording. Here we have a situation where two things are epistemically equivalent but not instrumentally so.
A dimension I like is how well a model bears "long" chains of inference. (Metaphorically long, not necessarily many steps.) Can I tell you the model, and then ask you what the model says about X, and you don't immediately see it, but then I tell you an argument that the model makes, and you can then see for yourself that the model says that? Then that's a gears-level model.
Gears-level models make surprising predictions from apparently unsurprising elements. E.g. a model that says "there's some gears in the box, connected in series by meshing teeth" sounds sort of anodyne, but using inference, you can get a precise non-obvious prediction out of the model: turning the left gear Z-wise makes the right gear turn counter-Z-wise, and vice versa.
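As a toy sketch of that inference (my own illustration, assuming a hypothetical chain of n gears meshing in series): the model's only ingredient is "adjacent gears counter-rotate", yet chaining it yields a precise prediction that depends on nothing but the parity of the chain.

```python
# The "unsurprising" element is a single local rule: each mesh reverses the
# direction of rotation. Chaining it gives a non-obvious global prediction.

def last_gear_direction(first_direction, n_gears):
    """Direction of the last gear in a chain of n gears meshing in series,
    given the direction the first gear is turned ('Z' or 'counter-Z')."""
    flips = n_gears - 1  # one reversal per mesh
    if flips % 2 == 0:
        return first_direction  # an even number of reversals cancels out
    return "counter-Z" if first_direction == "Z" else "Z"

print(last_gear_direction("Z", 2))  # counter-Z: two meshed gears counter-rotate
print(last_gear_direction("Z", 5))  # Z: the ends of an odd-length chain co-rotate
```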
more transparent to outsiders
There is the danger of it being more transparency-illuding instead. (Yeah, I just invented that term, but what did I mean by it?)
If so, can we try to shift rationalist terminology towards the latter, which seems more transparent to outsiders?