I have, over the last year, become fairly well-known in a small corner of the internet tangentially related to AI.
As a result, I've begun making what I would have previously considered astronomical amounts of money: several hundred thousand dollars per month in personal income.
This has been great, obviously, and the funds have alleviated a fair number of my personal burdens (mostly related to poverty). But aside from that I don't really care much for the money itself.
My long-term ambition has always been to contribute materially to mitigating the impending existential threat from AI. I never used to have the means to do so, mostly because of more pressing safety and sustenance concerns, but now that I do, I would like to help however I can.
Some other points about me that may be useful:
- I'm intelligent, socially capable, and exceedingly industrious.
- I have a few hundred thousand followers worldwide across a few distribution channels. My audience is primarily small-to-midsized business owners. A subset of these people are very high-leverage (i.e., their actions directly impact the beliefs, actions, or habits of tens of thousands of people).
- My current work does not take much time. I have modest resources (~$2M) and a relatively free schedule. I am also, by any measure, very young.
Given the above, I feel there's a reasonable opportunity here for me to help. It would certainly be more grassroots than a well-funded safety lab or one of the many state actors that have sprung up, but probably still sizeable enough to shift the way the scales tip by a fraction of a percent (assuming I dedicate my life to it).
What would you do in my shoes, assuming alignment on core virtues like maximizing AI safety?
My personal take is that projects where the funder is actively excited about them, understands the work, and wants frequent reports tend to get done faster... and considering the circumstances, faster seems good. So I'd recommend supporting something you find interesting and inspiring, and then keeping on top of it.
In terms of groups which have their eyes on a variety of unusual and underfunded projects, I recommend both the Foresight Institute and AE Studio.
In terms of specific individuals/projects that are doing novel and interesting things but are low on funding... (disproportionately representing ones I'm involved with, since those are the ones I know about):
Self-Other Overlap (AE Studio)
Brain-like AI safety (Stephen Byrnes, or me; my agenda is very different from Stephen's, focusing on modularity for interpretability rather than on his idea of reproducing human empathy circuits)
Deep exploration of the nature and potential of LLMs (Upward Spiral Research, particularly Janus aka repligate)
Decentralized AI Governance for mutual safety compacts (me, and ??? surely someone else is working on this)
Pre-training on rigorous ethical rulesets, plus better cleaning of pre-training data (Erik Passoja, Sean Pan, and me)