My comment relates to the state of OpenCog when I downloaded it in November 2009. It's entirely possible that things have improved a lot since then. Still, I think it was reasonable to assume not much had changed, since the code looked mostly empty at that time and I didn't sense any active development by anyone who wasn't on the Novamente/OpenCog team as an employee or close team member. There were comments in the code at the time stating that pieces were missing because they hadn't yet been released from Novamente. Hopefully those are gone now.
Sorry I didn't join you on IRC. I never noticed you had a channel.
I could have sent an email to the list. But again, it looked like I couldn't contribute to OpenCog unless I somehow got hired by OpenCog/Novamente or ingratiated myself with the current team and found a way to become part of the inner circle. I was considering whether that would be a good idea at the time, but figured that emailing the list with "Duuuhhhh... I can't compile it. WTF?" would only frustrate internal developers, draw condescending replies from people whose unreleased code made their versions work, or prompt requests for funding to help open source the unreleased code.
Hopefully things have improved in the last 1.5 years. I would love to support OpenCog. The vision you guys have looks great.
Well, we get a lot of the "I can't compile it" emails and while we are not especially excited to receive these, we usually reply and guide people through the process with minimal condescension.
There have been progressive additions to OpenCog from closed-source projects, but they've never prevented the core framework from compiling and working in and of itself.
Apologies for my tone too. We occasionally get people trolling or trash-talking us without taking any time to understand the project... sometimes they just outright lie, and that's frustrating. Of course, we're not perfect as an OSS project, but we are constantly trying to improve.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinion of highly educated experts who have read most of the available material is important for estimating the public and academic perception of risks from AI, and how effectively those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various experts who are aware of risks from AI and ask them the same question.