Well, if you bothered looking at our/OpenCog's roadmap you'll see it doesn't expect AGI in a "few years".
What magical software engineering tools are you after that can't be built with the current tools we have?
If nobody attempts to build these, then nothing will ever improve; people will just go, "Oh, that can't be done right now, let's just wait a while until the tools appear that make building AGI like snapping Lego together." Which is fine if you want to leave the R&D to other people... like us.
ferrouswheel:
Well, if you bothered looking at our/OpenCog's roadmap you'll see it doesn't expect AGI in a "few years".
The roadmap on opencog.org has among its milestones: "2019-2021: Full-On Human Level AGI."
What magical software engineering tools are you after that can't be built with the current tools we have?
Well, if I knew, I'd be cashing in on the idea, not discussing it here. In any case, surely you must agree that claiming the ability to develop an AGI within a decade is an extraordinary claim.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and for gauging how effectively those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various other experts who are aware of risks from AI and ask them the same question.