OpenCog
Hey, speaking as an AI layman: how do you rate the odds that a design based on OpenCog could foom? I haven't really dug into that codebase, but from reading the wiki my impression is that it's a bit of a heap left behind by multiple contributors, each trying to make different parts of it work for their own ends, and that even if a coherent whole could be wrought from it, the result would be too complex to feasibly understand itself. In that sense: how far out do you think OpenCog is from containing a complete operational causal model of its own codebase and operation? How much of it would have to be modified or rewritten to reach that point?
Cross-posted from my blog.
Yudkowsky writes:
My own projection goes more like this:
At least one clear difference between my projection and Yudkowsky's is that I expect AI-expert performance on the problem to improve substantially as a greater fraction of elite AI scientists begin to think about the issue in Near mode rather than Far mode.
As a friend of mine suggested recently, current elite awareness of the AGI safety challenge is roughly where elite awareness of the global warming challenge was in the early 1980s. Except I expect elite acknowledgement of the AGI safety challenge to spread more slowly than awareness of global warming or nuclear security did, because AGI is tougher to forecast in general and involves trickier philosophical nuances. (Nobody was ever tempted to say, "But as the nuclear chain reaction grows in power, it will necessarily become more moral!")
Still, there is a worryingly non-negligible chance that AGI explodes "out of nowhere." Sometimes important theorems are proved suddenly after decades of failed attempts by other mathematicians, and sometimes a computational procedure is sped up by 20 orders of magnitude with a single breakthrough.
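To put "20 orders of magnitude" in concrete terms, here is a minimal illustration (mine, not from the post) of how a single algorithmic insight can produce a jump of that scale: replacing a brute-force exponential-time procedure with a polynomial-time one. The problem size n = 100 and the O(2^n)-vs-O(n^3) cost models are assumptions chosen purely for the arithmetic.

```python
import math

# Illustrative assumptions: a brute-force procedure costing ~2^n operations
# versus a polynomial replacement costing ~n^3 operations, at n = 100.
n = 100
brute_force_ops = 2 ** n   # ~1.27e30 operations
polynomial_ops = n ** 3    # 1e6 operations

speedup = brute_force_ops / polynomial_ops
print(f"speedup: {speedup:.2e}")                          # ~1.27e+24
print(f"orders of magnitude: {math.log10(speedup):.1f}")  # ~24.1
```

A single insight of that kind yields roughly 24 orders of magnitude at this problem size, which is the sort of discontinuity that makes "out of nowhere" progress hard to rule out.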