NickiH comments on Say Not "Complexity" - Less Wrong

Post author: Eliezer_Yudkowsky 29 August 2007 04:22AM


Comment author: Nathan2 31 August 2007 07:20:30PM 0 points

Forgive me for latching onto the example, but how would an AI discover how to solve a Rubik's cube? Does anyone have a good answer?
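(One conventional answer, sketched here as an aside rather than anything from the thread: treat the cube as a state-space search problem. The toy below, with all names my own invention, uses breadth-first search over a tiny permutation puzzle standing in for the cube. A real 3x3x3 cube has roughly 4.3 × 10^19 states, so practical optimal solvers use heuristic search instead, such as Korf's IDA* with pattern databases, but the structure of the problem is the same.)

```python
from collections import deque

# Toy stand-in for a Rubik's cube: states are tuples, and "moves" are
# named permutations, analogous to face turns. Two moves suffice to
# generate every permutation of the tuple.
MOVES = {
    "swap01": lambda s: (s[1], s[0]) + s[2:],   # swap first two elements
    "rotate": lambda s: s[1:] + (s[0],),        # cycle everything left
}

def solve(start, goal):
    """Breadth-first search for a shortest move sequence from start to goal.

    Returns a list of move names, or None if goal is unreachable.
    """
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, move in MOVES.items():
            nxt = move(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(solve((1, 2, 3, 0), (0, 1, 2, 3)))  # → ['rotate', 'rotate', 'rotate']
```

Notice that the program is handed the goal state explicitly, which is exactly danlowlite's point below: the search machinery says nothing about how the AI would come to know that the solved cube is the goal in the first place.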

Comment author: danlowlite 20 August 2010 02:24:29PM 5 points

Wouldn't the AI have to discover that it is something to be solved, first? Give a kid such a puzzle and she's likelier to put it in her mouth than even try.

Unless I'm being obtuse.

Comment author: NickiH 18 December 2010 05:32:28PM 2 points

You're right, and I think this is a mistake a lot of people make when thinking about AI: they assume that because an AI is intelligent, it must also know a lot. Like the child, its specific knowledge (such as the fact that there is something to solve) is something it has to learn, or be taught, over time.