We spent an evening at last week's Rationality Minicamp brainstorming strategies for reducing existential risk from Unfriendly AI, and for estimating their marginal benefit-per-dollar. To summarize the issue briefly, there is a lot of research into artificial general intelligence (AGI) going on, but very few AI researchers take safety seriously; if someone succeeds in making an AGI, but they don't take safety seriously or they aren't careful enough, then it might become very powerful very quickly and be a threat to humanity. The best way to prevent this from happening is to promote a safety culture - that is, to convince as many artificial intelligence researchers as possible to think about safety so that if they make a breakthrough, they won't do something stupid.
We came up with a concrete (albeit greatly oversimplified) model which suggests that the marginal reduction in existential risk per dollar, when pursuing this strategy, is extremely high. The model is this: assume that if an AI is created, it's because one researcher, chosen at random from the pool of all researchers, has the key insight; and humanity survives if and only if that researcher is careful and takes safety seriously. In this model, the goal is to convince as many researchers as possible to take safety seriously. So the question is: how many researchers can we convince, per dollar? Some people are very easy to convince - some blog posts are enough. Those people are convinced already. Some people are very hard to convince - they won't take safety seriously unless someone who really cares about it will be their friend for years. In between, there are a lot of people who are currently unconvinced, but would be convinced if there were lots of good research papers about safety in machine learning and computer science journals, by lots of different authors.
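Here's a minimal sketch of that toy model (the function name and the example numbers are mine, purely for illustration):

```python
# A minimal sketch of the toy model above: an AGI gets built with some
# probability, the key insight lands on one researcher chosen uniformly at
# random, and humanity survives iff that researcher takes safety seriously.
# The function name and example numbers are illustrative, not from the post.
def p_doom(p_agi_created, frac_safety_conscious):
    """Probability of catastrophe under the one-random-researcher model."""
    return p_agi_created * (1.0 - frac_safety_conscious)

# Convincing more researchers lowers the risk linearly:
print(p_doom(0.10, 0.50))  # 0.05
print(p_doom(0.10, 0.80))  # about 0.02 -- 30 more percentage points convinced
```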
Right now, those articles don't exist; we need to write them. And it turns out that neither the Singularity Institute nor any other organization has the resources - staff, expertise, and money to hire grad students - to produce very much research or to substantially alter the research culture. We are very far from the realm of diminishing returns. Let's make this model quantitative.
Let A be the probability that an AI will be created; let R be the fraction of researchers who would be convinced to take safety seriously if there were 100 good papers about it in the right journals; and let C be the cost of one really good research paper. Then the marginal reduction in existential risk per dollar is A*R/(100*C). The total cost of a grad student-year (including recruiting, management and other expenses) is about $100k. Estimate a 10% current AI risk, and estimate that 30% of researchers currently don't take safety seriously but would be convinced. That gives a marginal existential risk reduction per dollar of 0.1*0.3/(100*100k) = 3*10^-9. Counting only the ~7 billion people alive today, and not any of the people who will be born in the future, this comes to about twenty expected lives saved per dollar.
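As a sanity check, here's the same arithmetic spelled out, using the estimates above (variable names are mine):

```python
# Back-of-envelope check of the numbers above, using the estimates as stated
# in the post: A = 0.1, R = 0.3, C = $100k per paper, 100 papers.
A = 0.10        # probability that an AGI is created
R = 0.30        # fraction of researchers convinced by 100 good papers
C = 100_000     # cost of one really good research paper, in dollars
papers = 100

risk_reduction_per_dollar = A * R / (papers * C)
print(risk_reduction_per_dollar)                        # about 3e-9

people_alive_today = 7e9
print(risk_reduction_per_dollar * people_alive_today)   # ~21 expected lives saved per dollar
```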
That's huge. Enormous. So enormous that I'm instantly suspicious of the model, actually, so let's take note of some of the things it leaves out. First, the "one researcher at random determines the fate of humanity" part glosses over the fact that research is done in groups; but it's not clear whether adding in this detail should make us adjust the estimate up or down. It ignores all the time we have between now and the creation of the first AI, during which a safety culture might arise without intervention; but it's also easier to influence the culture now, while the field is still young, rather than later. In order for promoting AI research safety to not be an extraordinarily good deal for philanthropists, there would have to be at least an additional 10^3 penalty somewhere, and I can't find one.
As a result of this calculation, I will be thinking and writing about AI safety, attempting to convince others of its importance, and, in the moderately probable event that I become very rich, donating money to the SIAI so that they can pay others to do the same.
Summary:
One big penalty that was discussed is the likelihood of another researcher having the key insight before the first researcher can leverage it into friendly AI. Throwing some crazy numbers down (aka a concrete albeit greatly simplified model): call it 1% "no one else would possibly think of this before FAI", 10% "there's a 50% chance someone else will think of this in time to beat me if they're unfriendly", and 89% "this is an idea whose time has come; we save a couple of years on the first friendly researcher, a year on the next two, and months on the rest", counting the years saved as fractions of a 50-year timeline. That gives something like 0.02 + 0.065 + 0.048 = 0.133, i.e. down by roughly a factor of 10.
(Edited) Another factor is whether being "safety conscious" about your key insight actually ends up gaining us anything; e.g., telling a collaborator you thought was safe but wasn't loses some of the gains. I haven't thought this through, but I wouldn't be viscerally upset if someone put the probability that being safety conscious actually works anywhere from 10% to 50%. (Edited) After reading Eliezer's comment, I think I was confusing two things (and maybe others are too). There's a spectrum of safety consciousness, and I don't think all of the 30% of researchers convinced by 100 papers get to the 10%-50% level of "safety consciousness from them will work". Maybe 2% get to 50%, 10% get to 10%, and 88% get to 2% or worse, call it 1%. That brings this factor down to about 3%.
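A quick sanity check of that weighted average (the 2%/10%/88% split and the 50%/10%/1% effectiveness levels are just the guesses above, nothing more):

```python
# Weighted "safety consciousness actually works" factor, using the rough
# guesses from the comment above.
shares        = [0.02, 0.10, 0.88]  # fractions of the convinced researchers
effectiveness = [0.50, 0.10, 0.01]  # chance their safety consciousness helps

factor = sum(s * e for s, e in zip(shares, effectiveness))
print(factor)  # ~0.0288, i.e. roughly 3%
```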
(Edited: this is a non-issue) There's also the possibility of very negative consequences to buying up the bright grad students (I assume we need the bright ones to get good papers produced in the right journals). I don't know whether this is actually any concern at all to those with at least some intuition into the matter - I have no such intuition. (Edited) This came from my thought: "if it were generally well known that a relatively small group of people was trying to buy up 30% of the AI research, might that cause a social backlash?" - which is just flat-out wrong: we're trying to write 100 papers to convince 30% of the community, not actually buy 30% of the research. :)
In the other direction, there's the possibility that "100 good papers" leads to "30% convinced", which then balloons upwards to 80% due to networking and no-longer-non-mainstream effects. (Edited: if this happens, it gives us a factor of about 1.5, so its total contribution is pretty small unless it's very likely)
Oh, and there's the expected time until FAI as compared to GAI... if FAI takes too much longer, we only get a benefit from the 1% piece of that model, which would make it down by an extremely unstable (1% plus or minus 1% ;)) factor of 50. (Edited) Let's put some crazy numbers down and say there's a 25% chance that FAI is so much friggin' harder than GAI, plus a 10% chance that FAI is actually just impossible, for a 35% chance that we only get the benefits from the 1% (plus or minus 1%) piece of the "delaying general AI" model. My other intuitions were coming from FAI being really hard, but not a century harder than GAI. This takes my original 0.02 + 0.065 + 0.048 = 0.133 down to 0.35*0.02 + 0.65*0.133 ≈ 0.093, which is still about a 10x penalty.
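Spelling out that combination with the same rough numbers:

```python
# Combining the rough guesses above: with probability 0.35 only the 1%
# "no one else would think of this" branch pays off (contributing ~0.02),
# otherwise we keep the full 0.133 from the earlier timing model.
p_only_small_branch = 0.35   # FAI much harder than GAI, or impossible
small_branch_value  = 0.02
full_timing_value   = 0.133

combined = (p_only_small_branch * small_branch_value
            + (1 - p_only_small_branch) * full_timing_value)
print(combined)  # ~0.093, still roughly a factor-of-10 penalty
```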
Anyone with better intuitions/experience/(gasp)knowledge want to redo those numbers or note why one of the models is terribly broken or brainstorm other yet-unmentioned factors?