I have, over the last year, become fairly well-known in a small corner of the internet tangentially related to AI.
As a result, I've begun making what I would have previously considered astronomical amounts of money: several hundred thousand dollars per month in personal income.
This has been great, obviously, and the funds have alleviated a fair number of my personal burdens (mostly related to poverty). But aside from that I don't really care much for the money itself.
My long-term ambition has always been to contribute materially to mitigating the impending existential threat from AI. I never used to have the means to do so, mostly because of more pressing safety and sustenance concerns, but now that I do, I would like to help however possible.
Some other points about me that may be useful:
- I'm intelligent, socially capable, and exceedingly industrious.
- I have a few hundred thousand followers worldwide across a few distribution channels. My audience is primarily small- to mid-sized business owners. A subset of these people are very high leverage (i.e., their actions directly impact the beliefs, actions, or habits of tens of thousands of people).
- My current work does not take much time. I have modest resources (~$2M) and a relatively free schedule. I am also, by any measure, very young.
Given the above, I feel there's a reasonable opportunity here for me to help. It would certainly be more grassroots than a well-funded safety lab or one of the many state actors that have sprung up, but probably still sizeable enough to make a fraction of a percent of difference in the way the scales tip (assuming I dedicate my life to it).
What would you do in my shoes, assuming alignment on core virtues like maximizing AI safety?
(I basically endorse Daniel and Habryka's comments, but wanted to expand the 'it's tricky' point about donation. Obviously, I don't know everything they think, and they likely disagree with some of this.)
There are a few direct-work projects that seem robustly good (METR, Redwood, some others) based on track record, but afaict they're not funding-constrained.
Most incoming AI safety researchers are targeting working at the scaling labs, which doesn't feel especially counterfactual or robust against value drift, from my position. For this reason, I don't think prosaic AIS field-building should be a priority investment (and Open Phil is prioritizing this anyway, so marginal value per dollar is a good deal lower than it was a few years ago).
There are various governance things happening, but much of that work is pretty behind the scenes.
There are also comms efforts, but the community as a whole has only been spinning up capacity in this direction for ~a year, and hasn't really had any wild successes beyond a few well-placed op-eds (and the jury's out on whether, and in which direction, these moved the needle).
Comms is a devilishly difficult thing to do well, and many fledgling efforts I've encountered in this direction are not in the hands of folks whose strategic capacities I especially trust. I could talk at length about possible comms failure modes if anyone has questions.
I'm very excited about Palisade and Apollo, which are both, afaict, somewhat funding-constrained: they have fewer people than they should, and the people currently working there are working for less money than they could get at another org, because they believe in the theory of change over the alternatives. I think they should be better supported than they are currently, on a raw dollars level (though this may change in the future, and I don't know how much money they would need to receive for that to change).
I am not currently empowered to make a strong case for donating to MIRI using only publicly available information, but that should change by the end of this year, and the case to be made there may be quite strong. (I say this because you may click my profile, see that I work at MIRI, and wonder why it's omitted from my list; reasons for donating to MIRI exist, but they're not public, and I wouldn't feel right trying to convince anyone on that basis, especially when I expect it to become pretty obvious later.)
I don't know how much you know about AI safety and the associated ecosystem, but from my (somewhat pessimistic, non-central) perspective, many of the activities in the space are likely (or, in some instances, guaranteed) to have the opposite of their stated intended impact. Many people will be happy to take your money and tell you it's doing good, but knowing that it is doing good by your own lights (as opposed to doing evil or, worse, doing nothing*) is the hard part. There is ~no consensus view here, and no single party I would trust to make this call with my money without my personal oversight (which I would also aim to bolster through other means, in advance of making this kind of call).
*this was a joke. Don't Be Evil.