Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and how effectively those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
What would you do with $100,000 if it were given to you on the condition that you donate it to a charity of your choice?
Ben Goertzel replied:
Unsurprisingly, my answer is that I would donate the $100,000 to the OpenCog project which I co-founded and with which I'm currently heavily involved. This doesn't mean that I think OpenCog should get 100% of everybody's funding; but given my own state of knowledge, I'm very clearly aware that OpenCog could make great use of $100K for research working toward beneficial AGI and a positive Singularity. If I had $100M rather than $100K to give away, I would have to do more research into which other charities were most deserving, rather than giving it all to OpenCog!
What can one learn from this?
- The SIAI is not the only option for working towards a positive Singularity.
- The SIAI should try to cooperate more closely with other AGI projects to increase its chances of having a positive impact.
I'm planning to contact various other experts who are aware of risks from AI and ask them the same question.
As in "extraordinary claims demand extraordinary evidence".
A summary of the evidence can be found on Ben's blog.
Adding some more info... Basically, the evidence can be divided into two parts: 1) evidence that the OpenCog design (or something reasonably similar) would be a successful AGI system when fully implemented and tested, and 2) evidence that the OpenCog design can be implemented and tested within a decade.
1) The OpenCog design has been described in considerable detail in various publications (formal or otherwise); see http://opencog.org/research/ for an incomplete list. A lot of other information is available in other papers co-authored by Ben Goertzel, talks/papers from the AGI Conferences (http://agi-conf.org/), and the AGI Summer School (http://agi-school.org/) amongst other places.
These resources also include explanations of why various parts of the design should work, drawing on a mix of argument types (e.g. intuitive arguments, math, empirical results). This doesn't constitute a formal proof that the design will work, but it is good evidence.
2) The OpenCog design is realistic to achieve with current software/hardware and doesn't require any major new conceptual breakthroughs. Obviously it may take years longer than intended (or even years less); it depends on funding, project efficiency, how well other people solve parts of the problem, and various other things. It's not realistic to estimate the exact number of years at this point, but it seems unlikely to take more than, say, 20 years, given adequate funding.
By the way, the two-year project mentioned in that blog post is the OpenCog Hong Kong project, which is where ferrouswheel (Joel Pitt) and I are currently working. We have several other people here as well, and various other people working on the project right now (including Nil Geisweiller, who posted earlier as nilg).