I've said this elsewhere, but I think we also need to be working on training wise AI advisers to help us navigate these situations.
Do you think there are any other updates you should make as well?
Well, does this improve automated ML research and kick off an intelligence explosion sooner?
"Funders of independent researchers we’ve interviewed think that there are plenty of talented applicants, but would prefer more research proposals focused on relatively few existing promising research directions" - Would be curious to hear why this is. Is it that if there is too great a profusion of research directions that there won't be enough effort behind each individual one?
I'd love to hear some more specific advice about how to communicate in these kinds of circumstances when it's much easier for folk not to listen.
Just going to put it out there: it's not actually clear that we should want to advance AI for maths.
I maintain my position that you're missing the stakes if you think that's important. Even limiting ourselves strictly to concentration-of-power worries, risks of totalitarianism dominate these concerns.
My take: lots of good analysis, but it makes a few crucial mistakes and has some weaknesses that throw the conclusions into significant doubt:
“The USG will be able and willing to either provide or mandate strong infosecurity for multiple projects.”
I simply don't buy that the infosec for multiple such projects will be anywhere near as strong as that of a single project, because the overall security ends up being that of the weakest link.
Additionally, the more projects there are with a particular capability, the more folk there are who can leak information either by talking or by being spies.
“The probability-weighted impacts of AI takeover or the proliferation of world-ending technologies might be high enough to dominate the probability-weighted impacts of power concentration.
Comment: We currently doubt this, but we haven’t modelled it out, and we have lower p(doom) from misalignment than many (<10%).”
Seems entirely plausible to me that either one could dominate. Would love to see more analysis around this.
“Reducing access to these services will significantly disempower the rest of the world: we’re not talking about whether people will have access to the best chatbots or not, but whether they’ll have access to extremely powerful future capabilities which enable them to shape and improve their lives on a scale that humans haven’t previously been able to.”
If you're worried about this, I don't think you quite realise the stakes. Capabilities mostly proliferate anyway. People can wait a few more years.
My take: parts of this review come off as a bit too status-oriented to me. This is ironic, because the best part of the review is towards the end, when it talks about the risk of rationality becoming a Fandom.
Echoing others: turning LessWrong into Manifold would be a mistake. Manifold already exists. However, maybe you should suggest that they add a forum independent of any particular market.