Punoxysm comments on Will AGI surprise the world? - Less Wrong

12 Post author: lukeprog 21 June 2014 10:27PM


Comment author: [deleted] 22 June 2014 08:20:55AM 1 point [-]

In terms of practicalities, AI and AGI share two letters in common, and that's about it. OpenCog / CogPrime is at core nothing more than an interface language specification built on hypergraphs which is capable of storing inputs, outputs, and trace data for any kind of narrow AI application. It is most importantly a platform for integrating narrow AI techniques. (If you read any of the official documentation, you'll find most of it covers the specific narrow AI components they've selected, and the specific interconnect networks they are deploying. But those are secondary details to the more important contribution: the universal hypergraph language of the atomspace.)
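To make the "universal hypergraph language" idea concrete, here is a toy sketch of an atomspace-like store. This is not the real OpenCog API; `Atom`, `AtomSpace`, and the type names are illustrative stand-ins. The key property shown is that a hyperedge ("link") can connect any number of atoms, including other links, so the outputs of one narrow-AI component can be stored as data that another component reads.

```python
# Toy illustration (NOT the real OpenCog API): an "atomspace" as a store of
# typed nodes and hyperedges (links). A link's outgoing set may contain any
# atoms -- nodes or other links -- which is what makes it a hypergraph.

class Atom:
    def __init__(self, atom_type, name=None, outgoing=()):
        self.type = atom_type            # e.g. "ConceptNode", "InheritanceLink"
        self.name = name                 # set for nodes
        self.outgoing = tuple(outgoing)  # set for links: the atoms it connects

    def __repr__(self):
        if self.outgoing:
            return f"({self.type} {' '.join(map(repr, self.outgoing))})"
        return f"({self.type} {self.name!r})"

class AtomSpace:
    def __init__(self):
        self.atoms = []

    def add(self, atom_type, name=None, outgoing=()):
        atom = Atom(atom_type, name, outgoing)
        self.atoms.append(atom)
        return atom

space = AtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
# A hyperedge relating the two nodes; one component (say, an NLP parser)
# could write such links as its output, and an inference engine read them.
link = space.add("InheritanceLink", outgoing=(cat, animal))
print(link)
```

Any narrow-AI module that can read and write atoms like these can, in principle, be plugged into the same store, which is the integration role described above.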

So when you say:

I am not familiar with OpenCog, but I do not see how it can address these sorts of issues.

It doesn't really make sense. OpenCog solves these issues in the same way: through traditional text parsing and logical inference techniques. What's different is that the inputs, outputs, and the way in which these components are used are fully specified inside of the system, in a data structure that is self-modifying. Think LISP: code is data (albeit using a weird hypergraph language instead of s-expressions), data is code, and the machine has access to its own source code.
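The "code is data" point can be sketched in a few lines of plain Python (again, not OpenCog's actual machinery; the pipeline and function names here are invented for illustration). A processing pipeline is stored as ordinary data, so the running system can inspect and rewrite its own behavior:

```python
# Toy sketch (assumed names, not OpenCog): a program stored as plain data
# that the system can inspect and rewrite at runtime -- the "code is data" idea.

# A pipeline represented as data: a list of (function-name, kwargs) steps.
pipeline = [("tokenize", {}), ("parse", {"grammar": "en_v1"})]

registry = {
    "tokenize": lambda text, **kw: text.split(),
    "parse":    lambda tokens, **kw: {"tokens": tokens,
                                      "grammar": kw.get("grammar")},
}

def run(pipeline, data):
    # Interpret the data structure as a program.
    for name, kwargs in pipeline:
        data = registry[name](data, **kwargs)
    return data

# Because the pipeline is just data, the system can modify it while running,
# e.g. swap in a different grammar when the old one performs poorly:
pipeline[1] = ("parse", {"grammar": "en_v2"})
result = run(pipeline, "the cat sat")
print(result["grammar"])  # -> en_v2
```

In OpenCog the analogous representation is hypergraphs in the atomspace rather than Python lists, but the mechanism claimed is the same: the machine's processing steps live in a structure it can itself read and edit.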

That's mostly what AGI is about: the interconnects and reflection layers which allow an otherwise traditional narrow AI program to modify itself in order to adapt to circumstances outside of its programmed expertise.

Comment author: Punoxysm 22 June 2014 07:06:01PM 2 points [-]

My two cents here are just:

1) Narrow AI is still the bottleneck to Strong AI, and a feedback loop of development, especially in the area of NLP, is what's going to eventually crack the hardest problems.

2) OpenCog's hypergraphs do not seem especially useful. The power of a language cannot overcome the lack of sufficiently strong self-modification techniques: without them, the system will never self-modify into anything useful. Interconnects and reflection merely let a program mess itself up, not become more capable, and scale or better NLP modules alone aren't a solution.