My hypothesis: I think the incentives for "cultivating more/better researchers in a preparadigmatic field" lean towards "don't discourage even less-promising researchers, because they could luck out and suddenly be good/useful to alignment in an unexpected way".
Analogy: This is like how investors encourage startup founders because they bet on a flock of them, not necessarily because any particular founder's best bet is to found a startup.
If timelines are short enough that [our survival depends on [unexpectedly-good paradigms]], and [unexpectedly-good paradigms] come from [black-swan researchers], then the AI alignment field is probably (on some level, assuming some coordination/game theory) incentivized to black-swan-farm researchers.
Note: This isn't necessarily bad (in fact it's probably good overall); it just puts the incentives into perspective, so individual researchers don't feel so bad about "not making it" (where "making it" could be "getting a grant" or "getting into a program" or...).
The questions: Is this real or not? And what, if anything, should anyone do with this knowledge in hand?
I think it's possible that talent funnels are soul-killing for the people facilitating them: not just because they churn out >50% of the people who enter them, but also because of Goodhart's law. Evaluating intelligent humans is confusing and difficult, and the hiring/competition culture in American society (as opposed to, say, dath ilan) is so Moloch-filled that things get rough even when everyone is 100% on board with alignment.
If true, that implies a bias against expanding talent funnels: the emotional difficulty leads people to estimate the value of talent funnels as lower than it actually is. Ideally, the people facilitating talent funnels would be aware of the mental-health sacrifice they are making, and decide deliberately whether to be grist for the X-risk mines.