(interested in hearing how other donors frame allocation between SI and CFAR)
I still only donate to SI. It's great that, with the pivot toward research, we can now supposedly aim the money at FAI.
But I would also love to see EY's appeal to MoR readers succeed:
I don’t work for the Center for Applied Rationality and they don’t pay me, but their work is sufficiently important that the Singularity Institute (which does pay me) has allowed me to offer to work on Methods full-time until the story is finished if HPMOR readers donate a total of $1M to CFAR.
I'm donating to CFAR but not SI because CFAR would help in a wider variety of scenarios.
If AGI is developed by a single person or a very small team, it likely won't be by someone we recognize in advance as a probable candidate (think of the inventions of the airplane or the web). CFAR is oriented toward influencing a large enough pool of smart people that it is more likely than SI to reach such a developer.
Single-person AGI development seems like a low-probability scenario to me, but the more people it takes to create an AGI, the less plausible it seems that intelligence will be intelligible enough to go foom. So I expect a relatively high fraction of the scenarios in which UFAI takes over the world to involve very small development teams.
Plus, it's quite possible that we're all asking the wrong questions about existential risks, and in those scenarios CFAR seems more likely than SI to help.
http://appliedrationality.org/fundraising/