Well, we get a lot of "I can't compile it" emails, and while we are not especially excited to receive them, we usually reply and guide people through the process with minimal condescension.
There have been progressive additions to OpenCog from closed source projects, but they have never prevented the core framework from compiling and working in and of itself.
Apologies for my tone too. We occasionally get people trolling or trash-talking us without taking any time to understand the project... sometimes they just outright lie, and that's frustrating. Of course, we're not perfect as an OSS project, but we are constantly trying to improve.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and for gauging how effectively those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various experts who are aware of risks from AI and ask them the same question.