
Gunnar_Zarncke comments on AIFoom Debate - conclusion? - Less Wrong Discussion

11 Post author: Bound_up 04 March 2016 08:33PM




Comment author: [deleted] 05 March 2016 08:53:27AM *  2 points [-]

That's a terrible argument. AlphaGo represents a general approach to AI, but its instantiation on the specific problem of Go tightly constrains the problem domain and solution space. Real life is far more combinatorial still, and an AGI requires much more expensive repeated meta-level cognition as well. You don't just solve one problem; you also look at all past solved problems and think about how you could have solved those better. That's quadratic blowup.
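The quadratic claim can be made concrete with a hypothetical sketch: if an agent revisits every previously solved problem after solving each new one, the total meta-level work grows quadratically in the number of problems (this is an illustration of the commenter's argument, not code from the discussion).

```python
# Sketch of the "quadratic blowup" claim: after solving problem k,
# the agent re-examines all k-1 previously solved problems.

def total_review_steps(n_problems: int) -> int:
    """Total review operations across n solved problems."""
    steps = 0
    for k in range(1, n_problems + 1):
        steps += k - 1  # revisit each past solution once
    return steps

# Closed form: n*(n-1)/2, i.e. O(n^2) total meta-level work.
print(total_review_steps(10))   # 45
print(total_review_steps(100))  # 4950
```

So even if each individual problem is solved quickly, the cumulative self-improvement work scales with the square of the number of problems, which is the sense in which narrow-AI speed does not transfer directly to general-AI speed.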

Tl;dr: speed of narrow AI != speed of general AI.

Comment author: Gunnar_Zarncke 05 March 2016 12:27:55PM 4 points [-]

But what if a general AI could generate specialized narrow AIs? That is something the human brain cannot do, but an AGI could. Thus speed of general AI = speed of narrow AI + time to specialize.

Comment author: V_V 07 March 2016 03:50:39PM 0 points [-]

But what if a general AI could generate specialized narrow AIs?

How is it different than a general AI solving the problems by itself?

Comment author: Gunnar_Zarncke 07 March 2016 07:51:36PM 1 point [-]

It isn't. At least not in my model of what an AI is. But Mark_Friedenbach seems to operate under a model where this is less clear, or where the consequences of an AI's capability to create this kind of specialized sub-agent are not sufficiently taken into account.