As I see it, OpenCog is making practical progress towards an architecture for AGI, whereas SIAI is focused on the theory of Friendly AI.
I specifically added "consultation with SIAI" in the latter part of OpenCog's roadmap to try to ensure the highest odds of OpenCog remaining friendly under self-improvement.
As far as I'm aware, there is no software development going on at SIAI; it's all theoretical and philosophical commentary on decision theory, etc. (This might have changed, but I haven't heard anything about them launching an engineering or experimental effort.)
Indeed, that is another reason for me to conclude that the SIAI should seek cooperation with projects that follow an experimental approach.
Artificial general intelligence researcher Ben Goertzel answered my question on charitable giving and gave his permission to publish it here. I think the opinions of highly educated experts who have read most of the available material are important for estimating the public and academic perception of risks from AI, and the effectiveness with which those risks are communicated by LessWrong and the SIAI.
Alexander Kruel asked:
Ben Goertzel replied:
What can one learn from this?
I'm planning to contact various experts who are aware of risks from AI and ask them the same question.