Consider the state of funding for AI alignment.
Is the field more talent-constrained, or funding-constrained?
I think most existing researchers, if they take the AI-based extinction risk seriously, think it's talent-constrained.
I think the bar-for-useful-contribution could be so high that we loop back around to "we need to spend more money (and effort) on finding (and making) more talent". And the programs to do that may themselves be more funding-constrained than talent-constrained.
Like, the 20th century had some really good mathematicians and physicists, and the US government spared no expense in finding them, getting them what they needed, and so forth. Top basketball teams will "check up on anyone over 7 feet that’s breathing".
Consider how huge Von Neumann's expense account must've been, between all the consulting and flight tickets and car accidents. Now consider that we don't seem to have Von Neumanns anymore. There are caveats to at least that second point, but the overall problem structure still hasn't been "fixed".
Things an entity with absurdly-greater funding (e.g. the US Department of Defense, or the US federal government in a non-military-unless-otherwise-stated capacity) could probably do, with their absurdly-greater funding and (probably) coordination power:
- Indefinitely-long-timespan basic minimum income for everyone who is working solely on AI alignment.
- Coordinating, possibly by force, every AI alignment researcher and aspiring alignment researcher on Earth to move to one place that doesn't have high rents like the Bay does. Possibly up to and including creating that place and making it rent-free for those who are accepted in.
- Enforce a global large-ML-training shutdown.
- An entire school system (or at least an entire network of universities, with university-level funding) focused on Sequences-style rationality in general and AI alignment in particular.
- Genetic engineering, focused-training-from-a-young-age, or other extreme "talent development" setups.
- Deeper, higher-budget investigations into how "unteachable" things like security mindset really are, and how deeply / quickly you can teach them.
- Any of the above ideas, but with a different tradeoff on the Goodharting-vs-missed-opportunities continuum.
- All of these at once.
I think the big logistical barrier here is something like "LTFF is not the U.S. government", or more precisely "nothing as crazy as these can be done 'on-the-margin' or with anything less than full funding". However, I think some of these could be scaled down into mere megaprojects or less. Like, if the training infrastructure is bottlenecked on trainers, then we need to fund indirect "training" work just to remove the bottleneck on the bottleneck of the problem. (Also, the bottleneck is going to move, at least once you solve the current one, and also "on its own" as the entire world changes around you.)
Also... this might be the first list of ideas-in-precisely-this-category, on all of LessWrong/the EA Forum. (By which I mean "technical AI alignment research projects that you could fund, without having to think about the alignment problem itself in much detail beyond agreeing with 'doom could actually happen in my lifetime', if funding really wasn't the constraint".)
EDIT: I think this comment was overly harsh; I'm leaving it below for reference. The harsh tone came from being slightly burnt out from feeling like many people in EA were viewing me as their potential Ender Wiggin, and from internalizing that.[1]
The people who suggest schemes like the ones I'm criticizing are all great people who are genuinely trying to help, and likely are helping.
Sometimes being a child in the machine can be hard though, and while I think I was ~mature and emotionally robust enough to take the world on my shoulders, many others (including adults) aren't.
Please stop being a fucking coward speculating on the internet about how child soldiers could solve your problems for you. Ender's Game is fiction, it would not work in reality, and that isn't even considering the negative effects on the kids. You aren't smart enough for galaxy-brained plans like this to cause anything other than disaster.
In general, rationalists need to get over their fetish for innate intelligence and actually do something instead of making excuses all day. I've mingled with good alignment researchers; they aren't supergeniuses, but they did actually try.
(This whole comment applies to Rationalists generally, not just the OP.)
I should clarify that this mostly wasn't stuff the Atlas program contributed to. Most of the damage was done by my personality + heroic responsibility in rat fiction + the dark arts of rationality + the death with dignity post. Nor did Atlas staff do much to mitigate this; seeing myself as one of the best they could find was most of it, cementing the deep "no one will save you or those you love" feeling. ↩︎
In retrospect, I've come to agree more on this since we last debated: I now think genetic effects are log-normally distributed, and that you were directionally correct here. (Though I do still think there's a significant chance that what's going on is that people mostly get lucky and then tell a post-hoc story about how their innate intelligence/sheer willpower made them successful. This is important, because I do think the world in general is way more extreme than human genetics/traits alone would suggest.)
Thanks to @tailcalled for convincing me I was wrong here:
https:...
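For what it's worth, here is a minimal sketch of why the normal-vs-log-normal distinction matters for "how extreme the top is". The parameters are purely illustrative assumptions of mine (not anything from the original debate): both traits have the same median and a comparable spread, but the log-normal's far tail sits noticeably further from the typical person.

```python
# Illustrative sketch only: parameters are arbitrary assumptions chosen to
# compare tail behaviour, not estimates of real genetic effects.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A normally distributed trait (median 100, sd 15) vs. a log-normally
# distributed trait with the same median and a similar spread on the log scale.
normal_trait = rng.normal(loc=100.0, scale=15.0, size=n)
lognormal_trait = rng.lognormal(mean=np.log(100.0), sigma=0.15, size=n)

for name, x in [("normal", normal_trait), ("log-normal", lognormal_trait)]:
    median = np.median(x)
    top = np.quantile(x, 0.9999)  # roughly the "1 in 10,000" level
    print(f"{name:10s}  median={median:6.1f}  top 0.01%={top:6.1f}  ratio={top / median:.2f}")
```

Even with these tame parameters the log-normal's extreme quantiles pull further from the median than the normal's do, and the gap grows quickly as the log-scale spread increases; that is the sense in which multiplicative (log-normal) trait models produce more extreme top performers than additive (normal) ones.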