xxd comments on What if AI doesn't quite go FOOM? - Less Wrong

11 Post author: Mass_Driver 20 June 2010 12:03AM


Comment author: red75 20 June 2010 09:11:07AM *  2 points [-]

And which information to conceal, right?

As for the "Wright brothers" situation, it's not so obvious. We have AI methods that work but don't scale well (theorem provers, semantic nets, expert systems; not a method, but nevertheless worth mentioning: SHRDLU), and we have methods that scale well but lack generalization power (statistical methods, neural nets, SVMs, deep belief networks, etc.), and yet we don't know how to put it all together.

It looks like we are approaching the "Wright stage," where someone will have all the equipment needed to put together a working prototype.

Comment author: Daniel_Burfoot 20 June 2010 02:33:06PM *  2 points [-]

we have well scaling methods, which lack generalization power (statistical methods, neural nets, SVMs, deep belief networks

You have it backwards. These methods do have generalization power, especially the SVM (achieving generalization is the whole point of the VC theory on which it's based), but they don't scale well.
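As a side note, the flavor of that VC result can be sketched in a few lines. The bound below is one common textbook form (the exact constants vary by presentation and are only illustrative): for a hypothesis class of VC dimension h, the gap between true and training error shrinks as the sample size n grows.

```python
import math

def vc_generalization_bound(h, n, delta=0.05):
    """Illustrative VC-style bound: with probability >= 1 - delta,
    true error exceeds training error by at most this margin.
    h = VC dimension, n = number of training samples.
    Constants follow one common textbook form; they are not tight."""
    return math.sqrt((h * (math.log(2 * n / h) + 1) + math.log(4 / delta)) / n)

# For a fixed VC dimension, the bound shrinks as the training set grows,
# which is the sense in which the SVM "generalizes":
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(vc_generalization_bound(h=10, n=n), 3))
```

The practical point about scaling is separate: for kernel SVMs, training cost grows roughly quadratically to cubically in n, which is what makes them hard to apply to very large datasets.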

Comment author: red75 20 June 2010 04:08:01PM 0 points [-]

Yes, bad wording on my side. I meant something like the capability to represent and operate on complex objects, situations, and relations. However, this doesn't invalidate my (quite trivial) point that we don't have a practical theory of AGI yet.