Or perhaps Ben is simply too busy actually developing and researching AI to spend time discussing it ad nauseam? I stopped following many mailing lists and communities like this one because I don't have time to argue in circles with people.
(But I make an exception when people start spreading untruths about OpenCog.)
Even if they don't want to discuss their insights "ad nauseam", I need some indication that they have new insights. Otherwise they won't be able to build AI. "Busy developing and researching" doesn't look very promising from the outside, considering how many other groups present themselves the same way.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and how effectively those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various experts who are aware of risks from AI and ask them the same question.