bigjeff5 comments on Say Not "Complexity" - Less Wrong

34 Post author: Eliezer_Yudkowsky 29 August 2007 04:22AM


Comment author: Nathan2 31 August 2007 07:20:30PM 0 points [-]

Forgive me for latching onto the example, but how would an AI discover how to solve a Rubik's cube? Does anyone have a good answer?

Comment author: danlowlite 20 August 2010 02:24:29PM 5 points [-]

Wouldn't the AI have to discover that it is something to be solved, first? Give a kid such a puzzle and she's likelier to put it in her mouth than even try to solve it.

Unless I'm being obtuse.

Comment author: bigjeff5 30 January 2011 09:51:45PM -1 points [-]

Curiosity could be built in; I don't see the problem with that.

It seems to be built-in for humans - we don't learn to be curious, though we can learn not to be.

Comment author: danlowlite 31 January 2011 02:27:52PM 1 point [-]

It could be built in. I agree. But the child is more curious about its texture and taste than about how the pieces fit together. I had to show my child a puzzle and solve it in front of her to get her to understand it.

Then she took off with it. YMMV.

Good point, though.

Comment author: bigjeff5 31 January 2011 05:37:12PM 0 points [-]

But the child is more curious about its texture and taste than about how the pieces fit together.

But as you see, there was an initial curiosity there. They may not be able to make certain leaps that lead them to things they would be curious about, but once you help them make the leap they are then curious on their own.

Also, there are plenty of things some people just aren't curious about, or interested in. You can only bring someone so far, after which they are either curious or not.

It would be very interesting to do the same thing with an AI, just give it a basic curiosity about certain things, and watch how it develops.