Give me feedback! :)
I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
I'm open to this argument, but I'm not sure it's true under the Trump administration.
Not sure this is interesting to anyone, but I recently compiled Zillow's data on average Berkeley rent prices from 2021-2025, to help with rent negotiation. I did not adjust for inflation; these are the raw averages at each point in time.
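For anyone who wants to reproduce the aggregation, here is a minimal sketch, assuming a ZORI-style Zillow CSV layout (one row per region, one column per month); the filename and column labels are assumptions, not the actual files I used:

```python
# Minimal sketch, assuming a ZORI-style CSV layout (one row per region,
# one rent column per month, e.g. "2021-01-31"). Filename and exact
# column labels are assumptions.
import pandas as pd

df = pd.read_csv("zillow_rent_index.csv")  # hypothetical filename
row = df[df["RegionName"].str.contains("Berkeley")].iloc[0]

# Keep only the 2021-2025 monthly columns, then average within each year.
months = row.filter(regex=r"^202[1-5]-").astype(float)
yearly = months.groupby(months.index.str[:4]).mean()
print(yearly)  # raw nominal averages, no inflation adjustment
```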
I definitely think people should not look at my estimates and say "here is a good 95%-confidence upper bound on the number of employees in the AI safety ecosystem." I think people should look at my estimates and say "here is a good 95%-confidence lower bound on the number of employees in the AI safety ecosystem," because you can just add up the names. I.e., even if there are 10x as many employees as I estimated, I'm at least 95% confident that there are more than the estimate I obtained by just counting names (obviously excluding the 10% fudge factor).
So, conduct a sensitivity analysis on the definite integral with respect to the choice of integration bounds? I'm not sure this level of analysis is merited given the incomplete data and the unreliable methodology for estimating the number of independent researchers. Like, I'm not even confident that the underlying distribution is a power law (instead of, say, a composite of power-law and lognormal distributions, or a truncated power law), and the value of $f(1)$ seems very sensitive to the data in its vicinity, so I wouldn't want to rely on this estimate except as a very crude first pass. I would support an investigation into the number of independent researchers in the ecosystem, which I would find useful.
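For concreteness, here is roughly what that bound-sensitivity check would look like, as a sketch assuming a pure power-law form $f(x) = C x^{-\alpha}$ (the parameter values below are placeholders for illustration, not fitted values):

```python
# Rough sketch of the bound-sensitivity check discussed above, assuming a
# pure power law f(x) = C * x^(-alpha). alpha and C are placeholders for
# illustration, not fitted values.
from scipy.integrate import quad

alpha, C = 1.8, 400.0  # placeholder parameters

for b in (50, 100, 200, 500, 1000):
    # Total employees ~ integral of x * f(x) from 1 to b.
    total, _ = quad(lambda x: x * C * x**(-alpha), 1, b)
    print(f"b = {b:4d}: integral ~ {total:7.0f}")
```

For $\alpha \le 2$ the printed totals keep growing with $b$, which is the sensitivity being discussed.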
By "upper bound", I meant "upper bound on the definite integral ". I.e., for the kind of hacky thing I'm doing here, the integral is very sensitive to the choice of bounds . For example, the integral does not converge for . I think all my data here should be treated as incomplete and all my calculations crude estimates at best.
I edited the original comment to say "$b = 100$ might be a bad upper bound" for clarity.
It's also worth noting that almost all of these roles are management, ML research, or software engineering; very few operations, communications, non-ML research, etc. roles are listed, which suggests that those roles are paid significantly less.
Apparently the headcount for US corporations follows a power-law distribution, apart from mid-sized corporations, which fit a lognormal distribution better. I fit a power-law density $f(x) \propto x^{-\alpha}$ to the data (after truncating all datapoints with over 40 employees, which otherwise made the fit worse), which gave an estimate of the exponent $\alpha$. This seems to imply that there are ~400 independent AI safety researchers (though note that $f$ is a probability density function, and this estimate might be way off); Claude estimates 400-600 for comparison. Integrating this distribution over $[1, 100]$ gives ~1400 (2 s.f.) total employees working on AI safety or safety-adjacent work ($b = 100$ might be a bad upper bound, as the largest orgs have <100 employees).
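Not my actual code, but a minimal sketch of this kind of estimate: a continuous maximum-likelihood power-law fit (the standard estimator from Clauset et al. 2009, with $x_{\min} = 1$), with the fitted density scaled by the org count and integrated to get an employee total. The headcounts below are made up for illustration.

```python
# Minimal sketch (not the actual pipeline): fit a continuous power law
# f(x) ~ x^(-alpha) to org headcounts via maximum likelihood (Clauset et
# al. 2009, x_min = 1), then integrate x * (org count) * f(x) over
# [1, 100] to estimate total employees. Data below is made up.
import numpy as np
from scipy.integrate import quad

sizes = np.array([1]*150 + [2]*40 + [3]*25 + [5]*12 + [10]*6 + [25]*3 + [40]*1)
sizes = sizes[sizes <= 40]  # truncate large orgs, as in the comment

x_min = 1.0
alpha = 1.0 + len(sizes) / np.sum(np.log(sizes / x_min))  # continuous MLE

# Normalized density on [x_min, inf), scaled by the observed org count.
n_orgs = len(sizes)
density = lambda x: (alpha - 1) * x_min**(alpha - 1) * x**(-alpha)
total_employees, _ = quad(lambda x: x * n_orgs * density(x), 1, 100)

print(f"alpha ~ {alpha:.2f}, implied total employees ~ {total_employees:.0f}")
```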
Why does the AI safety community need help founding projects?