These are two different concepts, and both are useful. Interpreting an argument charitably has benefits in its own right: it can promote healthier conversational styles and more effective reasoning. Actually understanding what your counterpart is trying to say is difficult, which is why Intellectual Turing Tests are non-trivial to pass and valuable to engage in.
Steelmanning is useful for changing your own mind, but it is at best rude to your conversation partner, and as a practice it can close off more substantial shifts in belief. The argument most likely to change my mind will tend to be the one that produces the smallest update, since it conflicts with fewer of my other beliefs about the world.
We should keep both terms, not crush the principle of charity entirely.
I have already made a post to LW that I would not have made counterfactually, and I expect to make 1-2 more this week as a result of the experiment. Part of it for me is a psychological forcing function: if I did not previously have a particular time at which to do this, well, now I do. I will be interested to see whether there is an increase or decrease April 8-22 relative to the pre-experiment trend: I'm not very confident in the direction, but I do expect higher variance relative to an average week.
Econ in general, no. The specific model of rational actors optimizing for outcomes, the intuition for why markets often succeed at delivering on desires (at least for those with money), and the practice of making multi-stage models and following an impact in one area through to others, yes. Nobody needs macro. Lots of people need to reflexively think about how the other actors in a system respond to a change, and econ is one of the more effective ways of teaching this. It's critical if you want to actually have a good understanding of multi-polar scenarios. What I'm talking about is rarely adequately understood just by studying game theory, which usually analyzes games too simple to stick in students' minds, but it matters.
In addition to understanding what statistics are on a theoretical level, an intuitive understanding that getting the right numbers yields real benefits, and that the right numbers are very precisely defined, seems important.
These are both softer skills, but I think that they're important.
Elaborating on and making more explicit some of the other models here, I propose this alternative explanation, which I don't think you've ruled out (and which I'm sympathetic to):
1. PhDs have no causal impact on research productivity.
2. PhDs, for the sort of person who does groundbreaking impressive original research, have substantial positive expected personal value. You get social legibility and status, you get higher pay, and it is a chance to do funded research for a few years while building useful connections. "PhDs are fun" is not a popular view in 2020, but I'm enjoying mine.
Now, I'd be surprised if this strong model were entirely true. The social legibility and status make it easier to spend more time on research, being in a program potentially pushes people away from less interesting but more profitable problems, etc. But your current analysis does not allow us to distinguish between this model and its inverse (the entire observed effect of PhDs on research productivity is causal, and none of it is omitted variable bias).
It may be useful to note that a regional accent, in the UK, is much more indicative of a working class background than it is in the US.
Sadly, people have been trying to prop up that rotting corpse ever since. Goldman is a decent example.
I worry that I harmed the results by mentioning that I have meditated for cognitive-benefit reasons, without a way to note that it wasn't to deal with akrasia. I wanted to answer truthfully, but the truthful answer was misleading.
I have made a good-faith post as a result of the extension!