jacob_cannell comments on AIFoom Debate - conclusion? - Less Wrong Discussion

11 Post author: Bound_up 04 March 2016 08:33PM

Comment author: [deleted] 05 March 2016 08:53:27AM *  2 points [-]

That's a terrible argument. AlphaGo represents a general approach to AI, but its instantiation on the specific problem of Go tightly constrains the problem domain and solution space. Real life is far more combinatorial still, and an AGI requires much more expensive meta-level repeated cognition as well. You don't just solve one problem; you also look back at all past solved problems and think about how you could have solved those better. That's quadratic blowup.

TL;DR: speed of narrow AI != speed of general AI.
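The "quadratic blowup" claim above can be made concrete with a small sketch (an illustration of the arithmetic, not code from the comment; the cost model is an assumption):

```python
# Illustrative sketch: counting meta-level review work.
# Assumption: after solving each new problem, the agent revisits every
# previously solved problem to ask how it could have been solved better.

def meta_review_cost(n_problems: int) -> int:
    """Total (problem, retrospective) review steps after solving
    n_problems, if each new solution triggers a review of all prior ones."""
    # Problem k triggers k reviews of the k earlier solutions:
    # 0 + 1 + ... + (n-1) = n*(n-1)/2
    return sum(range(n_problems))

print(meta_review_cost(10))   # 45
print(meta_review_cost(100))  # 4950
```

Doubling the number of solved problems roughly quadruples the meta-level work, i.e. O(n^2) growth on top of the per-problem solving cost.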

Comment author: jacob_cannell 07 March 2016 09:04:20PM 3 points [-]

AlphaGo represents a general approach to AI, but its instantiation on the specific problem of Go tightly constrains the problem domain and solution space ...

Sure, but that wasn't my point. I was addressing key questions of training data size, sample efficiency, and learning speed. At least for Go, vision, and related domains, the sample efficiency of DL-based systems appears to be approaching that of humans. The net learning efficiency of the brain is far beyond current DL systems in terms of learning per joule, but the gap in terms of learning per dollar is smaller, and closing quickly. Machine DL systems also easily and typically run 10x or more faster than the brain, and thus learn/train 10x faster.
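The last point, that a 10x speed advantage translates directly into 10x faster training at equal sample efficiency, is just inverse scaling of wall-clock time with throughput. A back-of-envelope sketch (all numbers here are hypothetical assumptions for illustration, not measurements from the comment):

```python
# Hypothetical back-of-envelope: at a fixed number of training samples,
# wall-clock training time scales inversely with processing speed.

def training_days(total_samples: float, samples_per_sec: float) -> float:
    """Wall-clock days to consume total_samples at a given rate."""
    return total_samples / samples_per_sec / 86_400  # 86,400 s per day

human_rate = 10.0                # assumed samples/sec for a human learner
machine_rate = 10 * human_rate   # "10x or more faster than the brain"
samples = 1e8                    # assumed samples needed, equal sample efficiency

speedup = training_days(samples, human_rate) / training_days(samples, machine_rate)
print(speedup)  # 10.0
```

The speedup is independent of the assumed sample count; it only requires that both systems need a comparable number of samples, which is the sample-efficiency claim made above.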