Are there any organizations or research groups that are specifically working on improving the effectiveness of the alignment research community? E.g.
The classic one is Lightcone Infrastructure, the team that runs LessWrong and the Alignment Forum.
"Why should we have to recruit people? Or train them, for that matter? If they're smart/high-executive-function enough, they'll find their way here".
Note: CFAR had been a MIRI hiring pipeline for years, and they also seemed to function as a way of upskilling people in CFAR-style rationality, which CFAR thought were the load-bearing bits required to turn someone into a world-saver.
I don't think anyone is saying this outright, so I suppose I will: pushing forward the frontier on intelligence enhancement as a solution to alignment is not wise. The second-order effects of pushing that particular frontier (both the capabilities and the Overton window) are disastrous, and our intelligence outpacing our wisdom is what got us into this mess in the first place.
I absolutely agree. Since lots of things are happening in the brain, you can't amplify intelligence without tearing down lots of Chesterton-Schelling fences. Making a community wealthy or powerful will make all the people and structures and norms inside it OOD.
But at the same time, we need nuanced calculations comparing the expected costs and the expected benefits. We will need to do those calculations as we go along, so we can update based on which tech and projects turn out to be low-hanging fruit. Staying the course also doesn't seem to be a winning strategy.
You're not going to just be able to stop the train at the moment the costs outweigh the benefits. The majority of negative consequences will most likely come from grey swans that won't show up in your nuanced calculations of costs and benefits.
Especially if people like @Valentine are called upon to return from their cold sleep because the world needs them.
Double-click? I'm wondering what you mean by "cold sleep" here.
Special thanks to Justis Mills for pointing out important caveats.
Intelligence Enhancement
Megaprojects
dath ilan
Adaptability
"In times of change, learners inherit the earth, while the learned find themselves beautifully equipped to deal with a world that no longer exists"
—Eric Hoffer
Extracting much more value from the Sequences