"you do need to be less agentic when wanting to learn a language, compared to when doing original research"
I think the main difference is that for a language you already have a good dataset that lets you learn words, and with them, the concepts they invoke. This might even be enough to explain the Sapient Paradox, the question of why it took humans ~100K years to get culture going: they lacked abstract concepts for clear thinking (deliberation, agency), concepts not directly invoked by concrete objects already present in the environment, and it took that long to develop them without scholars. Scholars only became productive after enough concepts facilitating their work had crystallized in language and culture, and for thousands of years now they have been able to craft new concepts not already represented in culture, much faster than culture previously crystallized them on its own. Some of these new concepts, once given names (words) to invoke them, can survive in culture without scholars, as datasets of everyday speech.
So the difficulty with original research is that you also need to craft the datasets for the new concepts; there are no existing datasets to learn them from. With deliberative reasoning (System 2), this takes a long time and focused effort (agency). Ideally you seek a reflective equilibrium of post-rigorous knowledge of the concepts: the episodes you can generate for their datasets (lemmas, simply-stated solutions to simply-stated problems you can imagine, justified by proofs or constructions) are already learned as intuition, and that intuition (System 1 reasoning) provides no further insight for generating episodes that would change the intuition (the model's behavior) significantly.
Disclaimer: this is an exploratory writing post; it has not been checked for typos or otherwise edited.
This seems like the core skill for success in almost anything, especially as things get more complex and less straightforward. E.g. you do need to be less agentic when wanting to learn a language, compared to when doing original research. This is at least in part because it is easier to evaluate whether the direction you are moving in is good.
This also seems very important when working in a team. In a field where determining a good direction is relatively easy, because it is known what is good, a few people can direct many. E.g. when constructing a building, a chain of command works well, because it is known what people at each level should do. When making a video game, some people can likewise direct others, e.g. on which art assets to produce and how to go about producing them. In research, the hardest part is often finding the right direction to move in. If you have a team working on a research project, then in my experience, trying to manage what everyone should be doing quickly becomes overwhelming as long as you don't yet have a solid direction to move in.
I think a better strategy is probably that everybody optimizes, for themselves, what is best for them to do. I am thinking here in the context of a project, but this is also true at higher levels (e.g. should I be working on this specific project, in this specific team?).
I am not satisfied with my own level of agenticness, nor with that of the people I work with. I have not yet found a good strategy for making them more agentic. The first step is getting them to agree that they should become more agentic, but after that, I do not know what to do.
All of this is in the context of research. When I worked in teams making games in the past, this was much less of an issue, though that might have been because success was not very team-dependent, and I could often carry the team to produce something good by working hard. Even in that situation, more agenticness would probably have been good.