Series: How to Purchase AI Risk Reduction
Here is yet another way to purchase AI risk reduction...
Much of the work needed for Friendly AI and improved algorithmic decision theories requires researchers to invent new math. That's why the Singularity Institute's recruiting efforts have been aimed at talent in math and computer science. Specifically, we're looking for young talent in math and compsci, because young talent is (1) more open to considering radical ideas like AI risk, (2) not yet entrenched in careers and status games, and (3) better at inventing new math (due to cognitive decline with age).
So how can the Singularity Institute reach out to young math/compsci talent? Perhaps surprisingly, Harry Potter and the Methods of Rationality is one of the best tools we have for this: it is read by a remarkably large proportion of people in math and CS departments. Here are some other projects we have in the works:
- Run SPARC, a summer program on rationality for high school students with exceptional math ability. Cost: roughly $30,000. (There won't be classes on x-risk at SPARC, but it will attract young talent toward efficient altruism in general.)
- Print copies of the first few chapters of HPMoR cheaply in Taiwan, ship them here, and distribute them to leading math and compsci departments. Cost estimate in progress.
- Send copies of Global Catastrophic Risks to lists of bright young students. Cost estimate in progress.
Here are some things we could be doing if we had sufficient funding:
- Sponsor and be present at events where young math/compsci talent gathers, e.g. TopCoder High School and the International Math Olympiad. Cost estimate in progress.
- Cultivate a network of x-risk reducers with high mathematical ability, build a database of conversations for them to have with strategically important young math/compsci talent, schedule those conversations, and develop a pipeline so that interested prospects have a "next person" to talk to. Cost estimate in progress.
- Write Open Problems in Friendly AI and send it to interested parties, so that even those who don't think AI risk is important will at least think, "Ooh, look at these sexy, interesting problems I could work on!"
Intelligence seems relatively static, but AFAIK once you've passed a certain minimum threshold of intelligence, conscientiousness becomes a more important factor for actual accomplishment. (Anecdotally and intuitively, conscientiousness seems more amenable to change, but I don't know whether the psychological evidence supports that.)
Wait, there's real evidence of durable changes in conscientiousness? Point me to it. The psychology literature does not appear (after a brief search) to support the idea of lasting change. I would be happy to be wrong.