My hypothesis: I think the incentives for "cultivating more/better researchers in a preparadigmatic field" lean towards "don't discourage even less-promising researchers, because they could luck out and suddenly be good/useful to alignment in an unexpected way".
Analogy: This is like how investors encourage startup founders because they bet on a flock of them, not necessarily because any particular founder's best bet is to found a startup.
If timelines are short enough that [our survival depends on [unexpectedly-good paradigms]], and [unexpectedly-good paradigms] come from [black-swan researchers], then the AI alignment field is probably (on some level, assuming some coordination/game theory) incentivized to black-swan farm researchers.
Note: This isn't necessarily bad (in fact it's probably good overall); it just puts the incentives into perspective, so individual researchers needn't feel so bad about "not making it" (where "making it" could be "getting a grant" or "getting into a program" or...).
The questions: Is this real or not? What, if anything, should anyone do, with this knowledge in hand?
I can attest to the validity of the premise you're raising based on my own experience, but there are multiple factors at play. One is the scarcity of resources, which tends to limit opportunities to groups or teams that can demonstrably make effective use of those resources, whether time, funds, or mentorship. Another, less frequently discussed factor is the courage to deviate from conventional thinking. It's inherently risky to invest in emerging alignment researchers who haven't yet produced tangible results. Making such choices based on a gut feeling about their potential to deliver meaningful contributions can seem unreasonable, especially to funders. There are more layers to this, but nevertheless I hold no bad blood about the landscape. It is what it is.