bigjeff5 comments on Say Not "Complexity" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (49)
Forgive me for latching onto the example, but how would an AI discover how to solve a Rubik's cube? Does anyone have a good answer?
Wouldn't the AI have to discover that it is something to be solved, first? Give a kid such a puzzle and she's likelier to put it in her mouth than to even try solving it.
Unless I'm being obtuse.
Curiosity could be built-in, I don't see the problem with that.
It seems to be built-in for humans - we don't learn to be curious, though we can learn not to be.
It could be built in, I agree. But a child is more curious about its texture and taste than about how the pieces fit together. I had to show my child a puzzle and solve it in front of her to get her to understand it.
Then she took off with it. YMMV.
Good point, though.
But as you see, there was an initial curiosity there. Children may not be able to make certain leaps that would lead them to things they'd find interesting, but once you help them make the leap, they become curious on their own.
Also, there are plenty of things some people just aren't curious about, or interested in. You can only bring someone so far, after which they are either curious or not.
It would be very interesting to do the same thing with an AI: give it a basic curiosity about certain things and watch how it develops.
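To make "built-in curiosity" concrete, here is a minimal toy sketch (not anything from the post, and all names are invented): an agent whose only reward is a count-based novelty bonus, so it prefers states it has visited least. Even this crude intrinsic drive is enough to make the agent explore its whole environment without any external goal.

```python
def curiosity_bonus(visit_counts, state):
    """Intrinsic reward: higher for states visited less often."""
    return 1.0 / (1 + visit_counts.get(state, 0))

def explore(n_steps=100, n_states=5):
    """Greedy curiosity-driven walk on a ring of n_states states.

    At each step the agent looks at its two neighbouring states and
    moves to whichever is more novel (least visited so far).
    """
    visit_counts = {}
    state = 0
    for _ in range(n_steps):
        candidates = [(state - 1) % n_states, (state + 1) % n_states]
        state = max(candidates, key=lambda s: curiosity_bonus(visit_counts, s))
        visit_counts[state] = visit_counts.get(state, 0) + 1
    return visit_counts
```

Running `explore()` shows the agent visiting every state roughly evenly, driven purely by the novelty bonus. Real curiosity-driven RL systems replace the visit count with something like prediction error, but the shape of the idea is the same.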