I'm not a Friendliness researcher, but I did once consider whether trying to slow down AI research might be a good idea. My current thinking is that it's probably not, but only because we're forced to live in a third-best world:
First best: Do AI research until just before we're ready to create an AGI. Either Friendliness is already solved by then, or else everyone stops and waits until Friendliness is solved.
Second best: Friendliness looks a lot harder than AGI, and we can't expect everyone to resist the temptation of fame and fortune when the possibility of creating AGI...
Following several somewhat misleading articles quoting me, I thought I’d present the top 9 myths about the AI risk thesis: