OK then, so how would one go about making an organization that is capable of funding and building this? Are there any interested donors yet?
Very much agree on this one, as do many other people I know. However, the key counterargument for running this as an EA project rather than a rationality one is that "rationality" is vague about what you're applying it to, while "EA" is at least somewhat clearer, and a community like this benefits from having clear goals. Nevertheless, it may still make sense to market it as a "rationality" project and simply have EA be part of the work it does.
So the question now becomes: how would one go about building it?
Thanks for giving some answers here to these questions; it was really helpful to have them laid out like this.
1. In hindsight, I was probably talking more about moves toward decentralization of leadership rather than decentralization of funding. I agree that greater decentralization of funding is a good thing, but it seems to me that, within the organizations funded by a given funder, decentralization of leadership is likely either useless (if leadership decisions are still being made through informal networks between orgs rather than formal ones) or likely to lead to a lack of clarity and direction.
3. I understand the dynamics that may cause the overrepresentation of women. However, that still doesn't fully explain why there is an overrepresentation of white women, even when compared to racial demographics within EA at large. It also doesn't explain why the overrepresentation of women here isn't seen as a problem on CEA's part, even if just from an optics perspective.
4. Makes sense, but I'm still concerned that, say, if CEA had an anti-Stalinism team, they'd be reluctant to ever say "Stalinism isn't a problem in EA."
5. Again, this was a question that was badly worded on my end. I was referring more specifically to organizations within AI safety, more than EA at large. I know that AMF, GiveDirectly, The Humane League, etc. fundraise outside EA.
6. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, so I would expect it to be used in this case.
7. That makes sense. That was one of my hypotheses (hence my phrase "at least upon initial examination"), and I guess in hindsight it's probably the best one.
10. Starting an AI capabilities company that does AI safety as a side project generally hasn't gone well, and yet people keep doing it. The fact that something hasn't gone well in the past doesn't seem to me like a sufficient explanation for why people don't do it, especially because Leverage largely seems to have failed for Leverage-specific reasons (i.e. too much engagement with woo). Additionally, your argument here seems to prove too much: the Manhattan Project was a large scientific project operating under an intense structure, and yet it was able to maintain good epistemics (e.g. not fixating too hard on designs that wouldn't work) under those conditions. The same goes for a lot of really intense start-ups.
11. They may not be examples of the unilateralist's curse in the original sense, but the term seems to have been expanded well past its original meaning, and they're examples of that expanded meaning.
12. It seems to me that this is work of a different type than technical alignment work, and it could likely be accomplished by hiring different people from those already working on technical alignment, so it's not directly trading off against that.
Sorry; I thought I had used the "Question" type.
What's preventing MIRI from making massive investments into human intelligence augmentation? If I recall correctly, MIRI is most constrained on research ideas, but human intelligence augmentation is a huge research idea that other grantmakers, for whatever reason, aren't funding. There are plenty of shovel-ready proposals already, e.g. https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing; why doesn't MIRI fund them?
Thank you very much! I won't be sending you a bounty, as you're not an AI ethicist of the type discussed here, but I'd be happy to send $50 to a charity of your choice. Which one do you want?
I've seen plenty of AI x-risk skeptics present their object-level argument, and I'm not interested in paying out a bounty for stuff I already have. I'm most interested in the arguments from this specific school of thought, and that's why I'm offering the terms I offer.
Man, this article hits different now that I know the psychopharmacology theory of the FTX crash...
Have any prizes been awarded yet? I haven't heard anything about prizes, but that could have just been that I didn't win one...
I'm not proposing to never take breaks. I'm proposing something more along the lines of "find the precisely calibrated number of breaks that maximizes productivity and take exactly those."