Yeah, there's something weird going on here that I want to have better handles on. I sometimes call the thing Bengio does being "motivated by reasons." Also the skill of noticing that "words have referents" or something?
Like, the LARPing quality of many jobs seems adjacent to the failure mode Orwell is pointing to in Politics and the English Language, e.g., sometimes people will misspell metaphors—"tow the line" instead of "toe the line"—and it becomes very clear that the words have "died" in some meaningful sense, like the speaker is not actually "loading them." And sometimes people will say things to me like "capitalism ruined my twenties" and I have a similarly eerie feeling about it, like it's a gestalt slapped together—some floating complex of words which themselves associate but aren't tethered to anything else—and that ultimately it's not really trying to point to anything.
Being "motivated by reasons" feels like "loading concepts," and "taking them seriously." It feels related to me to noticing that the world is "motivated by reasons," as in, that there is order there to understand and that an understanding of it means that you get to actually do things in it. Like you're anchored in reality, or something, when the default thing is to float above it. But I wish I had better words.
LARPing jobs is a bit eerie to me, too, in a similar way. It's like people are towing the line instead of toeing it. Like they're modeling what they're "supposed" to be doing, or something, rather than doing it for reasons. And if you ask them why they're doing whatever they're doing they can often back out one or two answers, but ultimately fail to integrate it into a broader model of the world and seem a bit bewildered that you're even asking them, sort of like your grandpa. Floating complexes—some internal consistency, but ultimately untethered?
Anyways, fwiw I think the advice probably varies by person. For me it was helpful to lean into my own taste and throw societal expectations to the wind :p Like, I think that when you're playing the game "I should be comprehensible to basically anyone I talk to at every step of the way" then it's much easier to fall into these grooves of what you're "supposed" to be doing, and lapse into letting other people think for you. Probably not everyone has to do something so extreme, but for me, going hardcore on my own curiosity and developing a better sense of what it is, exactly, that I'm so curious about and why, for my own god damn self and not for anyone else's, has gone a long way to "anchor me in reality," so to speak.
Not sure I understand what you're saying with the "tow the line" thing.
A lot of what you wrote seems like it's gesturing towards an idea that I might call "buzzwords" or "marketing speak" or "the affect game". I think of it as being when someone tries to control the valence of an idea by associating it with other ideas without invoking any actual model of how they're connected. Like trying to sell a car by having a hot model stand next to it.
If that's what you meant, then I agree this is kind of eerie and sort of anti-reality and I'm generally against it. (Let's attach negative valence to it!)
But that wouldn't be my first hypothesis (or my second) to explain what's going on when someone writes "tow the line" instead of "toe the line". My first hypothesis would be that they think of "toe the line" as an idiomatic phrase, in the sense that their mental dictionary has an explicit entry for the phrase-as-a-whole so they don't need to construct the meaning out of the component words. It's therefore easy to make a mistake regarding which words are in the phrase because they aren't load-bearing. (This is also my guess about where phrases like "sooner than later" or "I could care less" come from, though I've never seriously researched them.)
This explanation sort of matches your description about how the words have died and aren't being "loaded", in the sense that the entire phrase has a meaning that's not dependent on the individual words. But if you're saying that putting idiomatic phrases into your mental dictionary is terrible, I disagree. I think idiomatic phrases are just compound words with spaces in them. I feel that understanding how the individual words contribute to the combined meaning is kind of like knowing the Latin roots of a word: preferable to not knowing, but hardly an essential skill for most people, and even if you know you probably don't want to reconstruct the meaning from that knowledge every time you use it.
(My second hypothesis would be a simple spelling mistake. Misspelling homophones is not exactly unheard-of, although I don't think toe/tow is one of the common ones.)
I'm not sure if you were trying to get at the first thing or the second thing or some other thing I haven't thought of.
Not sure I understand what you're saying with the "toe the line" thing.
The initial metaphor was ‘toe the line’ meaning to obey the rules, often reluctantly. Imagine a do-not-cross line drawn on the ground and a person coming so close to the line that their toe touched it, but not in fact crossing the line. To substitute “tow the line”, which has a completely different literal meaning, means that the person has failed to comprehend the metaphor, and has simply adopted the view that this random phrase has this specific meaning.
I don’t think aysja adopts the view that it’s terrible to put idiomatic phrases whole into your dictionary. But a person who replaces a meaningful specific metaphor with a similar but meaningless one is in some sense making less meaningful communication. (Note that this also holds if the person has correctly retained the phrase as ‘toe the line’ but has failed to comprehend the metaphor.)
aysja calls this failing to notice that words have referents, and I think that gets at the nature of the problem. These words are meant to point at a specific image, and in some people's minds they point at a null instead. It's not a big deal in this specific example, but a) some people seem to have an awful lot of null pointers and b) sometimes the words pointing at a null are actually important. For example, think of a scientist who can parrot that results should be 'statistically significant' but literally doesn't understand the difference between doing one experiment and reporting the significance of the results, and doing 20 experiments and only reporting the one 'significant' result.
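(To make that gap concrete with rough arithmetic, assuming 20 independent tests each run at the conventional $\alpha = 0.05$ threshold:

$$P(\text{at least one "significant" result by chance}) = 1 - (1 - 0.05)^{20} \approx 0.64,$$

so even if every single hypothesis is false, more often than not at least one result will clear the threshold.)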
Since two people have reacted saying that I missed your point (but not what point I missed), I'm rereading your comment and making another try at understanding it. I'm not making much progress on that, but your description of what "toe the line" means keeps bothering me. You said:
The initial metaphor was ‘toe the line’ meaning to obey the rules, often reluctantly. Imagine a do-not-cross line drawn on the ground and a person coming so close to the line that their toe touched it, but not in fact crossing the line.
If you're trying to get someone not to cross a line, telling them that they should get as close as possible without crossing it seems pretty weird to me. Exhortations to follow the rules do not typically include an implication that you should get as close as possible to breaking them.
When I first inferred the phrase's meaning from context (the usual way people learn most terms) and made an idle guess at its original metaphor, I guessed it had to do with soldiers lining up in formation, showing that they're part of the superorganism and displaying that superorganism's coordination to potential foes.
So I checked Wikipedia...
The most likely origin of the term goes back to the wooden decked ships of the Royal Navy during the late 17th or early 18th century. Barefooted seamen had to stand at attention for inspection and had to line up on deck along the seams of the wooden planks, hence to "toe the line".
The page lists several other theories as to the origin of the phrase, and one of them (House of Commons) does actually involve some type of do-not-cross line--although that theory seems to have strong evidence against it and is presented as a common myth rather than a serious contender for the true origin.
You're complaining about people who degrade our communication by not grasping the underlying metaphor, but your own guess at the underlying metaphor is probably wrong, and even historians who make a serious effort to figure this out can't be sure they've got it right.
Do you still think your communication was better than the people who thought the line was being towed, and if so then what's your evidence for that?
To recap:
No one is debating the question of whether learning etymology of words is important and I'm not sure how you got hung up on that idea. And toe/tow the line is just an example of the problem of people failing to load the intended image/concept, while LARPing (and believing?) that they are in fact communicating in the same way as people who do.
Does that help?
When I asked for clarification (your number 3), I said: here are some things aysja might mean; if they mean thing A then I agree it's bad, but I don't agree that "tow the line" is an example of the same phenomenon; if they mean thing B then I agree "tow the line" is an example, but I don't think it's bad; is aysja saying A or B or something else?
You replied by focusing heavily on "tow the line" and how it demonstrates a lack of understanding and that's bad, but not saying anything that appeared to argue with or contradict my explanation of this as an example of thing B, so I interpreted you as basically accepting my explanation that "towing the line" is an example of thing B and then trying to change my mind about whether thing B is bad.
Your summary of the conversation doesn't even include the fact that I enumerated two different hypotheses, so I'm guessing that the point at which we desynced was that those two hypotheses did not make the jump from my brain to yours?
Do you still think your communication was better than the people who thought the line was being towed, and if so then what's your evidence for that?
We are way off topic, but I am actually going to say yes. If someone understands that English uses standing-on-the-right-side-of-a-line as a standard image for obeying rules, then they are also going to understand variants of the same idea. For example, "crossing a line" means breaking rules/norms to a degree that will not be tolerated, as does "stepping out of line". A person who doesn't grok that these are all referring to the same basic metaphor of do-not-cross-line=rule is either not going to understand the other expressions or is going to have to rote-learn them all separately. (And even after rote-learning, they will get confused by less common variants, like "setting foot over the line".) And a person who uses tow not toe the line has obviously not grokked the basic metaphor.
I thought I just established that "toeing the line" is not referring to the same basic metaphor as "crossing a line".
My understanding of the etymology of "toe the line" is that it comes from the military--all the recruits in a group lining up, with their toes touching (but never over!) a line. Hence "I need you all to toe the line on this" means "do exactly this, with military precision".
Yes. (Which is very different from "stay out of this one forbidden zone, while otherwise doing whatever you want.")
If you're using "null pointer" to describe the situation where a person knows what a phrase means but not the etymology that caused it to take on that meaning, then I think you should consider nearly everyone to have "null pointers" for nearly every word that they know. That's the ordinary default way that people understand words.
You probably don't know why words like "know" or "word" have the meaning that they have. You'd probably have a marginally more nuanced understanding of their meaning if you did. This does not make a practical difference for ordinary communication, and I would not advise most people to try to learn the etymologies for all words.
And sometimes people will say things to me like "capitalism ruined my twenties" and I have a similarly eerie feeling about it, like it's a gestalt slapped together
Ugh, that one annoys me so much. Capitalism is a word so loaded it has basically lost all meaning.
Like, people will say things like "slavery is inextricably linked to capitalism" and I'm thinking, hey genius, slavery existed in tribal civilizations that didn't even have the concept of money, what do you think capitalism even is?
(Same thing for patriarchy.)
while the median person in ML basically doesn’t.
Can I query you for the observations which produced this belief? (Not particularly skeptical, but would appreciate knowing why you think this.)
Not OP, but relevant -- I spent the last ~6 months going to meetings with [biggest name at a top-20 ML university]'s group. He seems to me like a clearly very smart guy (and very generous in allowing me to join), but I thought it was quite striking that almost all his interests were questions of the form "I wonder if we can get a model to do x", or "if we modify the training in way y, what will happen?" A few times I proposed projects along the lines of "maybe if we try z, we can figure out why b happens" and he was never very interested -- a near-exact quote of his in response was "even if we figured that out successfully, I don't see anything new we could get [the model] to do".
At one point I explicitly asked him about his lack of interest in a more general theory of what neural nets are good at and why-- his response was roughly that he's thought about it and the problem is too hard, comparing it to P=NP.
To be clear, I think he's an exceptionally good ML researcher, but his vision of the field looks to me more like a naturalist studying behavior than a biologist studying anatomy, which is very different from what I expected (and from the standard my shoulder-John is holding people to).
EDITED--removed identity of Professor.
This is mostly a gestalt sense from years of interacting with people in the space, so unrolling the full belief-production process into something legible would be a lot of work. But I can try a few sub-queries and give some initial answers.
Zeroth query: let’s try to query my intuition and articulate a little more clearly the kind of models which I think the median ML researcher doesn’t have. I think the core thing here is gears. Like, here’s a simple (not necessarily correct/incorrect) mental model of training of some random net:
We’re doing high dimensional optimization via gradient descent. The high dimensionality will typically make globally-suboptimal local minima rare, but high condition numbers quite common, so the main failure mode of the training process (other than fundamental limitations of the data or architecture) will be very slow convergence to minima along the bottom of long, thin “valleys” in the loss landscape.
That mental model immediately exposes a lot of gears. If that's my mental model, and my training process is failing somehow, then I can go test that hypothesis via e.g. estimating the local condition number of the Hessian (this can be done in linear time, unlike calculation of the full Hessian), or by trying a type of optimizer suited to poor condition numbers (maybe conjugate gradient), or by looking for a "back-and-forth" pattern in the update steps; the model predicts that all those measurements will have highly correlated results. And if I do such measurements in a few contexts and find that the condition number is generally reasonable, or that it's uncorrelated with how well training is going, then that would in turn update a bunch of related things, like e.g. which aspects of the NTK model are likely to hold, or how directions in weight-space which embed human-intelligible concepts are likely to correspond to loss basin geometry. So we've got a mental model which involves lots of different measurements and related phenomena being tightly coupled epistemically. It makes a bunch of predictions about different things going on inside the black box of the training process and the network itself. That's gearsiness.
(In analogy to the “dark room” example from the OP: for the person who “models the room as containing walls”, there’s tight coupling between a whole bunch of predictions involving running into something along a particular line where they expect a wall to be. If they reach toward a spot where they expect a wall, and feel nothing, then that’s a big update; maybe the wall ended! That, in turn, updates a bunch of other predictions about where the person will/won’t run into things. The model projects a bunch of internal structure into the literal black box of the room. That’s gearsiness. Contrast to the person who doesn’t model the room as containing walls: they don’t make a bunch of tightly coupled predictions, so they don’t update a bunch of related things when they hit a surprise.)
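To make "go test that hypothesis" concrete, here's a minimal sketch of the kind of cheap curvature probe that mental model suggests, using Hessian-vector products. (The model, data, and iteration count below are illustrative placeholders, not something pulled from an actual workflow.)

```python
# A minimal sketch (illustrative placeholders throughout, not an actual
# workflow): estimate the largest-magnitude Hessian eigenvalue via
# Hessian-vector products, as one cheap probe of local curvature.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 1))  # placeholder net
x, y = torch.randn(256, 32), torch.randn(256, 1)                       # placeholder batch
loss = nn.MSELoss()(model(x), y)

params = [p for p in model.parameters() if p.requires_grad]
# First-order gradients, with create_graph=True so we can differentiate again.
grads = torch.autograd.grad(loss, params, create_graph=True)

def hvp(vec):
    """Hessian-vector product H @ vec via double backprop."""
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params, retain_graph=True)

# Power iteration on the Hessian to estimate its top (largest-magnitude) eigenvalue.
v = [torch.randn_like(p) for p in params]
for _ in range(20):
    hv = hvp(v)
    norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
    v = [h / norm for h in hv]
lam_max = sum((h * u).sum() for h, u in zip(hvp(v), v)).item()
print(f"estimated top Hessian eigenvalue: {lam_max:.4f}")
# Pairing this with an estimate of the smallest relevant eigenvalue (e.g. via a
# shifted power iteration or Lanczos) gives a rough local condition number.
```

Each Hessian-vector product costs about one extra backward pass, which is the sense in which this kind of probe is "linear time" rather than requiring the full Hessian.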
Now contrast the “high condition numbers” mental model to another (not necessarily correct/incorrect) mental model:
We’re doing optimization via gradient descent, so the main failure mode of the training process (other than fundamental limitations of the data or architecture) will be getting stuck in local minima which are not global minima (or close to them in performance).
This mental model exposes fewer gears. It allows basically one way to test the hypothesis: randomize to a new start location many times (or otherwise jump to a random location many times, as in e.g. simulated annealing), and see if training goes better. Based on this mental model in isolation, I don’t have a bunch of qualitatively different tests to run which I expect to yield highly correlated results. I don’t have a bunch of related things which update based on how the tests turn out. I don’t have predictions about what’s going on inside the magic box - there’s nothing analogous to e.g. “check the condition number of the Hessian”. So not much in terms of gears. (This “local minima” model could still be a component of a larger model with more gears in it, but few of those gears are in this model itself.)
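For contrast, the one test this "local minima" model points at fits in a few lines (again, everything below is an illustrative placeholder):

```python
# Minimal sketch of the one test the "local minima" model suggests: retrain
# from several random initializations (same placeholder data each time) and
# see whether the final losses look like distinct basins.
import torch
import torch.nn as nn

torch.manual_seed(0)
x, y = torch.randn(256, 32), torch.randn(256, 1)  # fixed placeholder data

final_losses = []
for seed in range(5):
    torch.manual_seed(seed)  # vary only the initialization
    model = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = nn.MSELoss()(model(x), y)
        loss.backward()
        opt.step()
    final_losses.append(loss.item())

# Widely spread final losses would be weak evidence for distinct local minima;
# tightly clustered ones, evidence against. Either way, this is the only
# measurement the model itself points at.
print(final_losses)
```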
So that’s the sort of thing I’m gesturing at. Again, note that it’s not about whether the model is true or false. It’s also not about how mathematically principled/justified the model is, though that does tend to correlate with gearsiness in practice.
Ok, on to the main question. First query: what are the general types of observations which served as input to my belief? Also maybe some concrete examples...
Second query: any patterns which occasionally come up and can be especially revealing when they do?
These are all very much the kinds of patterns which come up in conversation and papers/blog posts.
Ok, that’s all the answer I have time for now. Not really a full answer to the question, but hopefully it gave some sense of where the intuition comes from.
Same, but I'm more skeptical. At ICML there were many papers that seemed well motivated and had deep models, probably well over 5%. So the skill of having deep models is not limited to visionaries like Bengio. Also I'd guess that a lot of why the field is so empirical is less that nobody is able to form models, and more that people have models but rationally put more trust in empirical research methods than in their inside-view models. When I talked to the average ICML presenter they generally had some reason they expected their research to work, even if it was kind of fake.
Sometimes the less well-justified method even wins. TRPO is very principled if you want to "not update too far" from a known good policy, as it enforces a KL divergence constraint (handled in practice via a Taylor expansion). PPO is less principled but works better. It's not clear to me that in ML capabilities one should try to be more like Bengio in having better models, rather than just getting really fast at running experiments and iterating.
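For reference, roughly the standard formulations of the two objectives (with $r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_\text{old}}(a_t \mid s_t)$ and $A_t$ an advantage estimate; nothing here is specific to this thread):

$$\text{TRPO:}\quad \max_\theta \; \mathbb{E}_t\big[ r_t(\theta)\, A_t \big] \quad \text{s.t.} \quad \mathbb{E}_t\big[ \mathrm{KL}\big( \pi_{\theta_\text{old}}(\cdot \mid s_t) \,\big\|\, \pi_\theta(\cdot \mid s_t) \big) \big] \le \delta$$

$$\text{PPO (clipped):}\quad \max_\theta \; \mathbb{E}_t\Big[ \min\big( r_t(\theta)\, A_t,\; \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\, A_t \big) \Big]$$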
At ICML there were many papers that seemed well motivated and had deep models, probably well over 5%. So the skill of having deep models is not limited to visionaries like Bengio.
To be clear, I would also expect "well over 5%". 10-20% feels about right. When I said in the OP that the median researcher lacks deep models, I really did mean the median; I was not trying to claim 90%+.
Re: the TRPO vs PPO example, I don't think this is getting at the thing the OP is intended to be about. It's not about how "well-justified" a technique is mathematically. It's about models of what's going wrong - in this case, something to do with large update steps messing things up. Like, imagine someone who sees their training run mysteriously failing and starts babbling random things like "well, maybe it's getting stuck in local minima", "maybe the network needs to be bigger", "maybe I should adjust some hyperparameters", and they try all these random things but they don't have any way to go figure out what's causing the problem, they just fiddle with whatever knobs are salient and available. That person probably never figures out TRPO or PPO, because they don't figure out that too-large update steps are causing problems.
Placeholder response: this is mostly a gestalt sense from years of interacting with people in the space, so unrolling the full belief-production process into something legible would be a lot of work. I've started to write enough to give the general flavor, but it will probably be a few days before even that is ready. I will post another response-comment when it is.
Curated. When I was younger I believed that the people at the top of their fields "knew what they were doing", but I struggled later with finding that I didn't implicitly believe this in my actions, and also with finding that the people at the top were often not able to give good explanations of why they're doing what they're doing. I'd come around to deciding that they'd discovered effective "strategies", but that this was a different skill from building "accurate models". I hadn't fully stopped being confused, and I'm still not all the way there.
I wish I'd read this post when I was 14, that would've saved me a lot of time and confusion. The examples of your grandfather, and of machine learning practitioners, really helped, and I think a more fleshed out version of this post would examine more fields and industries. I also really liked reading the advice — I'm not sure the advice actually applies in all situations, but it sure is helpful to even see an approach to how to act given this model of the world.
I also found it helpful to read Aysja's comment — to read another thoughtful perspective on the same topic.
Sometimes all you've got is experience and heuristics that you don't actually know how to put into words.
The biggest cost of this giant civilizational LARP is that people aren’t given much space to actually go build models, learn to the point that they know what they’re doing, etc.
I think there's another, bigger reason why this happens. Workmanship and incremental progress are predictable. In many fields (academic research comes to mind), in an attempt to optimize productivity and capture in a bottle the lightning of genius, we've built edifices of complex metrics that we then try to maximise in tight feedback loops. But if you are being judged by those metrics and need to deliver results on a precise schedule, it's usually a lot more reliable to just do repeatable things that will produce a steady trickle of small results than to sit down studying models of things, hoping this will lead you to a greater intuition down the line. After long enough, yeah, people start feeling like that is what real expertise looks like. This is absolutely a problem and IMO worsens both the stagnation in some fields of research and the replication crisis.
This definitely describes my experience, and gave me a bit of help in correcting course, so thank you.
Also, I recall an Aella tweet where she claimed that some mental/emotional problems might be normal reactions to having low status and/or not doing much interesting in life. Partly since, in her own experience, those problems were mostly(?) alleviated when she started "doing more awesome stuff".
Thinking about status reminded me of the advice "if you are the smartest person in the room, you are in the wrong room". Like, on one hand, yes, if you move to a place with people smarter than you, you will have a lot of opportunity to learn. On the other hand, maybe you were high-status in your old room, and become low-status in the new room. And maybe the low status will make you feel so depressed that you will be unable to use the new opportunity to learn (while in the old room, maybe you would have googled something).
This of course depends on other things, such as how friendly the people in the new room are, and whether your smartness was appreciated in the old room.
(Hypothetically, the winning combination would be to surround yourself with smarter people you can learn from, and simultaneously have such a giant ego that you do not feel low-status, and simultaneously also somehow the giant ego should not get in your way of actually learning from them.)
Can confirm, I'm able to get into rooms where I'm easily the dumbest person in them. Luckily I know how to feel less bad, and it's to spend more time/energy learning/creating stuff to "show" the rest of the group. (Now the bottleneck is "merely" my time/energy/sleep/etc., like always!).
...
I... think your comment, combined with all this context, just fixed my life a little bit. Thank you.
In "How Life Imitates Chess", Garry Kasparov wrote:
Every person has to find the right balance between confidence and correction, but my rule of thumb is, lose as often as you can take it. Playing in the open section and going 0-9 every time is going to crush your spirit long before you get good enough to make a decent score. Unless you have a superhuman ego, or totally lack one, a constant stream of negativity will leave you too depressed and antagonized to make the necessary changes.
(Hypothetically, the winning combination would be to surround yourself with smarter people you can learn from, and simultaneously have such a giant ego that you do not feel low-status, and simultaneously also somehow the giant ego should not get in your way of actually learning from them.)
A much more probable winning combination is having minimal ego yet being tough as nails.
There are examples of 'giant egos' able to effectively learn from a roomful of smarter folks, but that's literally 1 in a million.
Yup. Fundamentally, I think that human minds (and practically-implemented efficient agents in general) consist of a great many patterns/heuristics of variable levels of shallowness, the same way LLMs do, plus a deeper general-intelligence algorithm. System 1 versus System 2, essentially; autopilot versus mindfulness. Most of the time, most people are operating on these shallow heuristics, and they turn on the slower general-intelligence algorithm comparatively rarely. (Which is likely a convergent evolutionary adaptation, but I digress.)
And for some people, it's rarer than for others; and some people use it in different domains than others.
The LW-style rationality, in general, can be viewed as an attempt to get people to use that "deeper" general-purpose reasoning algorithm more frequently. To actively build a structural causal model of reality, drawing on all information streams available to them, and run queries on it, instead of acting off of reactively-learned, sporadically-updating policies.
The dark-room metaphor is pretty apt, I think.
Bengio et al’s “Orthogonal Deep Neural Nets”.
Bengio isn't an author on the linked paper. The authors don't seem to be his students, either?
I assume John was referring to Unitary Evolution Recurrent Neural Networks which is cited in the "Orthogonal Deep Neural Nets" paper.
Good post. I've been making more of an effort to deeply understand (relearn) the fundamentals of ML recently (for example, reading more papers in detail and with focus, and watching Karpathy's videos) so that I don't just stumble around and instead have actual grounding for the experiments I come up with.
I try to be vigilant to catch myself when I shy away from the discomfort of actually grappling with the hard material so that I can then actually push through and do it rather than shrug my shoulders and move on (and lie to myself that I’ll eventually get around to it).
This makes a lot of sense to me and helps me articulate things I've thought for a while. (Which, you know, is the shit I come to LessWrong for, so big thumbs up!)
One of the first times I had this realization was in one of my first professional experiences. This was the first time in my life where I was in a team that wasn't just LARPing a set of objectives, but actually trying to achieve them.
They weren't impressively competent, or especially efficient, or even especially good at their job. The objectives weren't especially ambitious: it was a long-running project in its final year, and everybody was just trying to ship the best product they could, where the difference between "best they could" and "mediocre" wasn't huge.
But everyone was taking the thing seriously. People in the team were communicating about their difficulties, and anticipating problems ahead of time. Managers considered trade-offs. Developers tried to consider the UX that end-users would face.
Thinking about it, I'm realizing that knowing what you're doing isn't the same as being super good at your job, even if the two are strongly correlated. What struck me about this team wasn't that they were the most competent people I ever worked with (that would probably be my current job), it's that they didn't feel like they were pretending to be trying to achieve their objectives (again, unlike my current job).
I think a point that I don't find sufficiently stressed is that impostor syndrome is not so much about the perceived absolute lack of knowledge/expertise/... but rather the perceived relative lack.
At least speaking for myself, the experience of not knowing something does not in itself trigger any emotional response. Whereas comparing myself to people who have an impressive amount of knowledge about something I don't is much more likely to make me feel like an impostor.
This made me think of Carmack's integral of value over time vs maximal value. Oftentimes, we are LARPing because LARPing is a lot easier and faster than, as you say, debugging until you know exactly what went wrong. When a program crashes, you'll get a ton of logging information, but it's often easier to just add retries and move on. If you're working under deadlines, you know that, in some sense, the quicker fix where you're bumping around in the dark will work and might be just as good as the slower fix of truly understanding the reason for a crash.
This is much less helpful for your own personal understanding, but often what you are delivering is a project, not an increase in understanding. When you want to stop LARPing, you have to go very slowly, and you don't really have time to not LARP in most areas of life. This has the bad effect of making my own personal understanding of many subjects much more shallow than I would like, but it lets me deliver projects, which is net better for my company than deeply understanding software internals. In terms of things that I am trying to deeply learn, there are really only a few areas at a time where I can actually take a shot at understanding, and I have to be sure that I actually want to understand those areas.
I found this post helpful for clarifying the concept of LARPing as it applies to technical subjects.
Someone without a model has a hard time building any generalizable knowledge at all. It’s the difference between someone walking around in a dark room bumping into things and roughly remembering the spots they bumped things but repeatedly bumping into the same wall in different spots because they haven’t realized there’s a wall there, vs someone walking around in a dark room bumping into things, feeling the shapes of the things, and going “hmm feels like a wall going that way, I should strategize to not run into that same wall repeatedly”
I really like this analogy.
They feel like they’re just LARPing their supposed expertise, because they are just LARPing their supposed expertise.
Oh wow. This one too.
In any given field, the relative contributions of people who do and don’t know what’s going on will depend on (1) how hard it is to build some initial general models of what’s going on, (2) the abundance of “low-hanging fruit”, and (3) the quality of feedback loops, so people can tell when someone’s random stumbling has actually found something useful
Reading this, I instantly thought of high-impact complex problems with low tolerance for failure, which, according to the Cynefin framework, are best dealt with by initial probing and sensing. By definition, such environments/problems are not easily decomposed (and modeled), and are often characterized by emergent practices derived from experimentation. In the specific case where a problem is highly impactful but does not allow for multiple failures, impact-oriented people are incentivized to work on it, but iteration is not possible.
In that situation, how can the impact-oriented people contribute -- and at the same time combat impostor syndrome -- if they cannot tangibly make themselves more experienced in the problem itself? It seems that people won't be able to correctly tell whether they know what they are doing. Would it be best to LARP instead? Or is it possible for these people to gain experience in parallel problems to combat impostor syndrome?
Nice post, thank you.
The concept of beginner's mind has proven absolutely invaluable to me in regards to all this. If anyone isn't familiar with it and is suffering from imposter syndrome while all those around you hail you as an expert or "SME", take a look at it. Each morning I remember two things:
1. I could die at any moment.
2. I should always strive to have a beginner's mind.
This has significantly improved my well-being as well as my performance. Not uncommon advice but that's for a reason.
As someone who teaches new and budding ML "experts", your comments ring true about my students. The broad temptation is to apply ML without understanding. In our program, the math is difficult, the program is about understanding ML and what it means, and yet the students are inexorably driven to apply techniques without understanding, and it's an uphill battle. This includes students all the way to the Ph.D. level.
Imposter syndrome is endemic in postgraduate degrees, especially doctoral ones. Many spend the first half wading in waters far too deep for them. Those who do not drop out manage to find the right model, and the miracle is that they find their way to a well-argued whole. Of course, they have not really reached any solid solution, only solved some minor problem based on previous work, but a good candidate will say something new. To reach that rare inflection point, though, a broad understanding, a huge amount of creativity, and intuition are needed.
One piece of evidence for ML people not understanding things is the low popularity of μ-parametrization (μP). After it was released there was every theoretical and empirical reason to use it, but most big projects (like Llama 2?) just don't.
"Median person in ML" varies wildly by population. To people in AI safety, capabilities researchers at Openai et al represent the quintessential "ML people", and most of them understand deep learning about as well as Bengio. I agree that the median person employed in an ML role knows very little, but you can do a lot of ML without running into the average "ML" person.
Some related information: people around me constantly complain that the paper review process in deep learning is random and unfair. These complaints seem to basically just not be true? I've submitted about ten first- or second-author papers at this point, with 6 acceptances, and I've agreed with and been able to predict the reviewers' accept/reject responses with close to 100% accuracy, including acceptance to some first-tier conferences.
If the review panel recommends a paper for a spotlight, there is a better than 50% chance a similarly-constituted review panel would have rejected the paper from the conference entirely:
https://blog.neurips.cc/2021/12/08/the-neurips-2021-consistency-experiment/
Strong agree with this content!
Standard response to the model above: “nobody knows what they’re doing!”. This is the sort of response which is optimized to emotionally comfort people who feel like impostors, not the sort of response optimized to be true.
Very true
IMO a lot of claims of having imposter syndrome are implicit status signaling. It's announcing that your biggest worry is the fact that you may just be a regular person. Do cashiers at McDonald's have imposter syndrome and believe they at heart aren't really McDonald's cashiers but actually should be medium-high 6-figure ML researchers at Google? Such an anecdote may provide comfort to a researcher at Google, because the ridiculousness of the premise will remind them of the primacy of the way things have settled in the world. Of course they belong in their high-status position; things are the way they are because they're meant to be.
To assert the "realness" of imposter syndrome is to assert the premise that certain people do belong, naturally, in high-status positions, and others do belong naturally below them. It is more of a static, conservative view of the world that is masturbation for those on top. There is an element of truth to it: genetically predisposed intelligence, conscientiousness, and other traits massively advantage certain people over others in fields with societally high status, but the more we reaffirm the impact of these factors, the more we become a society of status games for relative gain, rather than a society of improvement and learning for mutual gain.
Do cashiers at McDonald's have imposter syndrome
... I think so, yes. It would feel like they're just pretending like they know how to deal with customers, that they're just pretending to be professional staffers who know the ins and outs of the establishment, while in fact they just walked in from their regular lives, put on a uniform, and are not at all comfortable in that skin. An impression that they should feel like an appendage of a megacorporation, an appendage which may not be important by itself, but is still part of a greater whole; while in actuality, they're just LARPing being that appendage. An angry or confused customer confronts them about something, and it's as if they should know how to handle that off the top of their head, but no, they need to scramble and fiddle around and ask their coworkers and make a mess of it.
Or, at least, that's what I imagine I'd initially feel in that role.
IMO a lot of claims of having imposter syndrome are implicit status signaling. It's announcing that your biggest worry is the fact that you may just be a regular person.
Imposter syndrome ≠ being a regular person is your "biggest worry".
Epistemic status: model which I find sometimes useful, and which emphasizes some true things about many parts of the world which common alternative models overlook. Probably not correct in full generality.
Consider Yoshua Bengio, one of the people who won a Turing Award for deep learning research. Looking at his work, he clearly “knows what he’s doing”. He doesn’t know what the answers will be in advance, but he has some models of what the key questions are, what the key barriers are, and at least some hand-wavy pseudo-models of how things work.
For instance, Bengio et al’s “Unitary Evolution Recurrent Neural Networks”. This is the sort of thing which one naturally ends up investigating, when thinking about how to better avoid gradient explosion/death in e.g. recurrent nets, while using fewer parameters. And it’s not the sort of thing which one easily stumbles across by trying random ideas for nets without some reason to focus on gradient explosion/death (or related instability problems) in particular. The work implies a model of key questions/barriers; it isn’t just shooting in the dark.
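(To make the barrier itself concrete, here's a rough sketch of the standard picture, not a summary of that paper's actual derivation: in a recurrent net with $h_t = \phi(W h_{t-1} + \dots)$, the backpropagated gradient picks up one factor of the recurrence matrix per timestep,

$$\frac{\partial h_T}{\partial h_0} = \prod_{t=1}^{T} D_t\, W, \qquad D_t = \mathrm{diag}\big(\phi'(W h_{t-1} + \dots)\big),$$

so repeated multiplication by $W$ tends to explode or kill the gradient unless $W$'s singular values are near 1 - which is exactly what constraining $W$ to be unitary/orthogonal enforces.)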
So this is the sort of guy who can look at a proposal, and say “yeah, that might be valuable” vs “that’s not really asking the right question” vs “that would be valuable if it worked, but it will have to somehow deal with <known barrier>”.
Contrast that to the median person in ML these days, who… installed some libraries, loaded some weights, maybe fine-tuned a bit, and generally fiddled with a black box. They don’t just lack understanding of what’s going on in the black box (nobody knows that), they lack any deep model at all of why things work sometimes but not other times. When trying to evaluate a proposal, they may have some shallow patterns to match against (like “make it bigger”), but mostly they expect any project is roughly-similarly-valuable in expectation modulo its budget; their model of their own field is implicitly “throw lots of random stuff at the wall and see what sticks”. Such a person “doesn’t know what they’re doing”, in the way that Yoshua Bengio knows what he’s doing.
(Aside: note that I’m not saying that all of Yoshua’s models are correct. I’m saying that he has any mental models of depth greater than one, while the median person in ML basically doesn’t. Even a wrong general model allows one to try things systematically, update models as one goes, and think about how updates should generalize. Someone without a model has a hard time building any generalizable knowledge at all. It’s the difference between someone walking around in a dark room bumping into things and roughly remembering the spots they bumped things but repeatedly bumping into the same wall in different spots because they haven’t realized there’s a wall there, vs someone walking around in a dark room bumping into things, feeling the shapes of the things, and going “hmm feels like a wall going that way, I should strategize to not run into that same wall repeatedly” (even if they are sometimes wrong about where walls are).)
General Model
Model: “impostor syndrome” is actually correct, in most cases. People correctly realize that they basically don’t know what they’re doing (in the way that e.g. Bengio knows what he’s doing). They feel like they’re just LARPing their supposed expertise, because they are just LARPing their supposed expertise.
… and under this model it can still be true that the typical person who feels like an impostor is not actually unskilled/clueless compared to the median person in their field. It’s just that (on this model) the median person in most fields is really quite clueless, in the relevant sense. Impostor syndrome is arguably better than the most common alternative, which is to just not realize one’s own degree of cluelessness.
… it also can still be true that, in at least some fields, most progress is made by people who “don’t know what they’re doing”. For example: my grandfather was a real estate agent most of his life, and did reasonably well for himself. At one point in his later years, business was slow, we were chatting about it, and I asked “Well, what’s your competitive advantage? Why do people come to you rather than some other real estate agent?”. And he… was kinda shocked by the question. Like, he’d never thought about that, at all. He thought back, and realized that mostly he’d been involved in town events and politics and the like, and met lots of people through that, which brought in a lot of business… but as he grew older he largely withdrew from such activity. No surprise that business was slow.
Point is, if feedback loops are in place, people can and do make plenty of valuable contributions “by accident”, just stumbling on stuff that works. My grandfather stumbled on a successful business model by accident, the feedback loop of business success made it clear that it worked, but he had no idea what was going on and so didn’t understand why business was slow later on.
In any given field, the relative contributions of people who do and don’t know what’s going on will depend on (1) how hard it is to build some initial general models of what’s going on, (2) the abundance of “low-hanging fruit”, and (3) the quality of feedback loops, so people can tell when someone’s random stumbling has actually found something useful. In a field which has good feedback loops and lots of low-hanging fruit, but not good readily-available general mental models, it can happen that a giant mass of people shooting in the dark are responsible, in aggregate, for most progress. On the other hand, in the absence of good feedback loops OR the absence of low-hanging fruit, that becomes much less likely. And on an individual basis, even in a field with good feedback loops and low-hanging fruit, people who basically know what they’re doing will probably have a higher hit rate and be able to generalize their work a lot further.
“Nobody knows what they’re doing!”
Standard response to the model above: “nobody knows what they’re doing!”. This is the sort of response which is optimized to emotionally comfort people who feel like impostors, not the sort of response optimized to be true. Just because nobody has perfect models doesn’t mean that there aren’t qualitative differences in the degree to which people know what they’re doing.
The real problem of impostor syndrome
The real problem of impostor syndrome is the part where people are supposed to pretend they know what they’re doing.
Ideally, people would just be transparent that they don’t really know what they’re doing, and then explicitly allocate effort toward better understanding what they’re doing (insofar as that’s a worthwhile investment in their particular field). In other words, build inside-view general models of what works and why (beyond just “people try stuff and sometimes it sticks”), and when one is still in the early stages of building those models just say that one is still in the early stages of building those models.
Instead, the "default" in today's world is that someone obtains an Official Degree which does not involve actually learning relevant models, but then they're expected to have some models, so the incentive for most people is to "keep up appearances" - i.e. act like they know what they're doing. Keeping up appearances is unfortunately a strong strategy - generalized Gell-Mann amnesia is a thing, and only the people who do know what they're doing in this particular field will be able to tell that you don't know what you're doing (and people who do know what they're doing are often a small minority).
The biggest cost of this giant civilizational LARP is that people aren’t given much space to actually go build models, learn to the point that they know what they’re doing, etc.
So what to do about it?
From the perspective of someone who feels like an impostor, the main takeaway of this model is: view yourself as learning. Your main job is to learn. That doesn’t necessarily mean studying in a classroom or from textbooks; often it means just performing the day-to-day work of your field, but paying attention to what does and doesn’t work, and digging into the details to understand what’s going on when something unusual happens. If e.g. an experiment fails mysteriously, don’t just shrug and try something else, get a firehose of information, ask lots of questions, and debug until you know exactly what went wrong. Notice the patterns, keep an eye out for barriers which you keep running into.
And on the other side of the equation, have some big goals and plan backward from them. Notice what barriers generalize to multiple goals, and what barriers don’t. Sit down from time to time to check which of your work is actually building toward which of your goals.
Put all that together, give it a few years, and you’ll probably end up with some models of your own.