You said:
If you "withdraw from a cause area," you would expect that an organization doing good work in multiple cause areas would still be funded for its work in the cause areas that funding wasn't withdrawn from. However, what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations: if you are associated with a certain set of ideas, identities, or causes, then no matter how cost-effective your other work is, you cannot get funding from OP.
I'm wondering if you have a list of organizations where Open Phil would have funded their other work, but because they withdrew from funding part of the organization, they decided to withdraw funding entirely.
This feels very importantly different from Good Ventures choosing not to fund certain cause areas (and I think you agree, which is why you included that footnote).
what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations
is there a list of these somewhere/details on what happened?
Thanks for writing this up! I wonder how feasible it is to just do a cycle of bulking and cutting and then do one of body recomposition and compare the results. I expect that the results will be too close to tell a difference, which I guess just means that you should do whichever is easier.
I think it would be helpful for calibrating others, though obviously it's fairly personal.
Possibly too sensitive, but could you share how the photos performed on Photofeeler? In particular, what percentile attractiveness?
Sure, I think everyone agrees that marginal returns to labor diminish with the number of employees. John's claim, though, was that returns are non-positive, and that seems empirically false.
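The distinction can be made concrete with a toy model. A minimal sketch, assuming a hypothetical concave production function f(n) = 100·√n (my illustration, not anyone's actual model of firm output): each successive hire adds less than the previous one, yet every hire still adds something, which is the difference between "diminishing" and "non-positive" marginal returns.

```python
import math

def output(n: int) -> float:
    """Toy concave production function: total output with n employees."""
    return 100 * math.sqrt(n)

# Marginal return of the n-th hire: extra output from adding employee n.
marginals = [output(n) - output(n - 1) for n in range(1, 11)]

# Diminishing: each successive hire adds less than the one before it.
assert all(a > b for a, b in zip(marginals, marginals[1:]))

# But not non-positive: every hire still increases total output.
assert all(m > 0 for m in marginals)
```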
We have Wildeford's Third Law: "Most >10 year forecasts are technically also AI forecasts".
We need a law like "Most statements about the value of EA are technically also AI forecasts".
Yep that's fair, there is some subjectivity here. I was hoping that the charges from SDNY would have a specific amount that Sam was alleged to have defrauded, but they don't seem to.
Regarding the $4B missing: adding in Anthropic gets another $4B on the EA side of the ledger, and Founders Pledge another $1B. The value produced by Anthropic is questionable, and maybe negative, of course, but I think by the strict definition of "donated or built in terms of successful companies" EA comes out ahead.
(And OpenAI gets another $80B, so if you count that then I think even the most aggressive definition of how much FTX defrauded is smaller. But obviously OAI's EA credentials are dubious.)
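The ledger comparison in this thread can be tallied explicitly. A minimal sketch using only the dollar figures quoted in these comments (all rough estimates, in billions of USD; whether Anthropic and OpenAI belong on the EA side at all is exactly the disputed part):

```python
# Figures quoted in the thread, in billions of USD (rough estimates).
ftx_missing_low = 1.8      # "FTX is missing $1.8B"
ftx_missing_claimed = 4.0  # the "$4B missing" figure under discussion

open_phil = 2.8            # "has donated $2.8B"
anthropic = 4.0            # disputed: value "questionable, and maybe negative"
founders_pledge = 1.0

ea_side = open_phil + anthropic + founders_pledge
print(f"EA side of the ledger: ${ea_side:.1f}B")  # → EA side of the ledger: $7.8B

# Even under the $4B definition of the fraud, the EA side is larger:
print(ea_side > ftx_missing_claimed)              # → True

# Counting OpenAI ($80B) swamps everything, but its EA credentials are dubious.
ea_side_with_openai = ea_side + 80.0
```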
EA has defrauded much more money than we've ever donated or built in terms of successful companies
FTX is missing $1.8B. Open Phil has donated $2.8B.
Thanks!