Epistemologist specialized in the difficulties of alignment and how to solve AI X-Risks. Currently at Conjecture.
Blogging at For Methods.
Oh, that's a great response!
I definitely agree with you that there is something like a set of primitives or instructions (as you said, another metaphor) that is used everywhere by humans. We're not made to do advanced maths, create life-like 2D animation, or cure diseases. So we're clearly retargeting processes that were meant for much more prosaic tasks.
The point reminds me of this great quote from Physics Avoidance, a book I'm taking a lot of inspiration from for my model of methodology: (p.32)
An unavoidable consequence of our restricted reasoning capacities is that we are forever condemned to wobble between seasons of brash inferential extension and epochs of qualified retrenchment later on. These represent intellectual cycles from which we can never escape: we remain wedded to a comparatively inflexible set of computational tools evolved for the sake of our primitive ancestors, rather than for experts in metallurgy. We can lift ourselves by our bootstraps through clever forms of strategic reassignment within our reasonings, but no absolutist guarantees on referential application can be obtained through these adaptive policies.
This is clearly the weakest part of my model of methodology/epistemology. I feel there is something there, and that somehow the mix of computational-constraints thinking from Theoretical CS and language-design thinking from Programming Language Theory might make sense of it, but it's the most mechanistic and hidden part of methodology, and I don't feel I have enough phenomenological regularities to go in that direction.
Digging more into the Faraday question raises another subtlety: how do you differentiate the sort of "direct" reuse/adaptation of a cognitive primitive to a new task from an analogy/metaphor to a previous use in the culture?
Your hypotheses focus more on the latter, considering where Faraday could have seen or heard geometric notions in contexts that would have inspired his lines of force. My intuition is that this might instead be a case of the former: Faraday was particularly graphic in his note-taking and scientific practice, so it would be quite natural for him to convergently rediscover graphic/visual means of explanation.
Exploratory Experiments, my favoured treatment of Faraday's work on electromagnetism (though focused on electromagnetic induction rather than the lines of force themselves), emphasizes this point. (p.235, 241)
Both the denial of the fundamental character of attraction and repulsion, as well as the displacement of the poles of a bar magnet away from its ends, broke with traditional conceptions. It is important to highlight that these ideas were formed in the context not only of intense experimentation but also of successive attempts to find the most general graphical presentation of the experimental results—attempts that involved a highly versatile use of various visual perspectives on one and the same experimental subject.
[...]
In this development, Faraday’s engagement with graphical representations is again highly remarkable. His laboratory record contains no drawings of the experimental setups themselves, only the occasional sketch of the shape of the wire segment. Of much greater importance are his sketches of the experimental results. As before, these alternate easily between side views and views from above. The side views are less abstract. But even in these drawings Faraday had to add an imaginary post in the center of each described rotation, so as to distinguish front from back and thereby specify the direction of rotation. Again, his sketches served as working media in which he developed stepwise abstractions. They played a constitutive role in the evolution of his view.
(As a side note, Faraday's work in electromagnetism is probably one of the most intensely studied episodes in the history of science. First because of its key importance for the development of electromagnetism, field theory, and most of modern physics. But also because Faraday provides near-perfect historical material: he religiously kept a detailed experimental journal, fully published, and had no interest in covering up his tracks and reasoning (as opposed to, say, Ampère).)
So in addition to Exploratory Experiments mentioned above, I know of the following few books studying Faraday's work:
I'm unsure if that's what you meant, but your comment has made me realize that I didn't neatly separate the emergence of a new mechanism (pseudo or not) from the perpetuation of an existing one. The whole post weaves back and forth between the two.
For the emergence of a new mechanism, this raises a really interesting question: where does it come from? The examples I mentioned, and more that come to mind, clearly point to a focus on some data, some phenomenological compression, as a starting point (Galileo's, Kepler's, and others' observations and laws for Newton, say).
But then it also feels like the metaphor being used is never (at least I can't conjure up an instance) created completely out of nothing. People pull it out of existing technology (maybe clockwork for Newton? definitely some examples in the quote from The Idea of the Brain at the beginning of the post), out of existing science (say Bourdieu's use of the concept of field in sociology, borrowed from physics), out of stories (how historical linguistics and Indo-European linguistics were bootstrapped with an analogy to Babel), or out of elements of their daily life and culture (as an example, one of my friends has a strong economics background, and so they always tend towards economic explanations; I have a strong theoretical computer science background, and so I always tend towards computational explanations...).
On the other hand, I know of at least one example where the intensity of the pattern gave life to a whole new concept, or at least one hardly tied to existing scientific or technological knowledge at the time: Faraday's discovery of lines of force, which prefigures the concept of field in physics.
To go deeper into this (which I haven't done), I would maybe look at the following books:
One point evoked by other comments, which I only realized after leaving France and living in the UK, is that engineering still carries massive prestige in France. ENS is not technically an engineering school, but it benefits from this prestige by being lumped in with them, and by being accessed mainly through the national competitive exams at the end of prépa.
As always with these kinds of cultural phenomena, I didn't really notice them until I left France for the UK. There is a sense in France (stronger when I was a student, but still there) that the most prestigious jobs are engineering ones. Going to engineering school is considered one of the top options (along with medicine), and it is taken as a given that any good student with a knack for maths, physics, or science will go to prépa and then engineering school.[1] It's almost free (and in practice is free if your parents don't make more than a certain amount), and it is all but guaranteed to lead to a good future.
This means that the vast majority of mathematical talent studies the equivalent of an undergraduate degree in maths, compressed into the span of two years. In addition to giving the standard French engineer much more mathematical training, it shows potential mathematicians, by default, a lot of what they could do. And if they decide to go to ENS (or Polytechnique, which is the best engineering school but still quite research-oriented if you want it to be), this is actually one of the most prestigious options they could take.
Similarly, the prestige of engineering (and science to some extent) impacts what people decide to do after their degrees. I remember that in my good prepa and my good engineering school, the cool ones were those going to build planes and bridges. The ones who went into consulting and finance were pitied and mocked as the failures, not the impressive successes to emulate. Yet what my UK friends tell me is that this is the exact opposite of what happens even in great universities in the UK.
This has become less true, as more private schools open and the whole elitist system is worn away by software engineering startups (which generally don't ask you for an engineering degree, as opposed to the older big French companies).
I did not particularly intend to do a book review per se, and I don't claim to be an expert on the topic. So I'm completely fine with tagging this in some way as "non-expert" if you wish.
I'm not planning to change how I write my posts based on this feedback, as I have no interest in following some arbitrary standard of epistemic expertise for a fun little blog post that will be read by 10 people max.
I remember reading this post, and really disliking it.
Then today, as I was reflecting on things, I recalled that this existed, and went back to read it. And this time, my reaction was instead "yep, that's pointing to the mental move that I've lost and that I'm now trying to relearn".
Which is interesting. Because that means that from a year or two ago up till now, I was the kind of person who would benefit from this post; yet I couldn't get the juice out of it. I think a big reason is that while the description of the play/fun mental move is good and clear, the description of the opposite mental move, the one short-circuiting play/fun, felt very caricatured and fake.
My conjecture (though beware the mind fallacy) is that it's because you emphasize "naive deference" to others, which looks obviously wrong to me, and obviously not what most people I know who suffer from this tend to do (though it might be representative of the people you actually met).
Instead, the mental move that I know intimately is what I call "instrumentalization" (or, to be more memey, the "tyranny of whys"). It's a move that doesn't require another person or a social context (though it often includes internalized social judgements from others, aka the superego); it only requires caring deeply about a goal (the goal doesn't actually matter that much) and being invested in it, somewhat neurotically.
Then, the move is that whenever a new, curious, fun, unexpected idea pops up, it almost instantly hits a filter: is this useful for reaching the goal?
Obviously this filter removes almost all ideas, but even the ones it lets through don't survive unharmed: they get trimmed, twisted, and simplified to fit the goal, to actually sound like they're going to help with it. And then, in my personal case, all ideas start feeling like shoulds, like weight and responsibility and obligations.
Anyway, I do like this post now, and I am trying to relearn how to use the "play" mental move without instrumentalizing everything away.
Yes, and this leads to another essential point: any new idea starts at a fundamental infrastructure disadvantage. The old idea has not only been etched into the psyche and the ontology of its users; it has probably (especially in the case of a technical idea) grown a significant epistemic infrastructure around it: tools that embed its assumptions, tricks to simplify computations, tacit knowledge of how to tweak it to make it work.
The new idea has nothing of the sort, so even if it has eventual advantages, it must first survive in a context where it probably yields inferior results. That survival generally comes about through some form of propaganda, a separate community, or a new generation wanting to overturn received wisdom...