Nathan2 comments on Say Not "Complexity" - Less Wrong

34 Post author: Eliezer_Yudkowsky 29 August 2007 04:22AM


Comment author: Nathan2 31 August 2007 07:20:30PM 0 points [-]

Forgive me for latching onto the example, but how would an AI discover how to solve a Rubik's cube? Does anyone have a good answer?

Comment author: DanielLC 27 December 2009 07:49:43PM 0 points [-]

I had the same problem.

I think it would need some genetic algorithm to figure out roughly how "close" a position is to the solution. It could then build a tree of what happens after every combination of however many moves, and play the line that looks closest to the solution.

It would update its estimates based on how close it can get to the closest-looking position. For example, if it's five moves away from something that looks about 37 moves away from finishing, then it's about 42 moves away now.

The problem with this is that when you start it, it will have no idea how close anything is to the solution except for the solution, and there's no way it's getting to that by chance.

Essentially, you'd have to cheat and start by giving it almost solved Rubik's cubes, and slowly giving it more randomized ones. It won't learn on its own, but you can teach it pretty easily.
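The curriculum idea above can be sketched in a few lines. Everything here is invented for illustration: a toy permutation puzzle stands in for a real cube, with three hypothetical moves ("L", "R", "S"), and `learn_estimates` is a made-up name. A table of distance-to-solved estimates is learned by sampling lightly scrambled states first and working outward; each state's estimate becomes one more than the best estimate among its neighbours, mirroring the "five moves from something 37 away is about 42 away" update.

```python
# Rough sketch of DanielLC's curriculum idea on an invented toy puzzle
# (NOT a real Rubik's cube): learn distance-to-solved estimates, starting
# from almost-solved states and slowly increasing the scramble depth.
import random

GOAL = (0, 1, 2, 3, 4, 5)
MOVES = "LRS"

def apply_move(state, move):
    """Apply one of three hypothetical moves, each a fixed permutation."""
    a, b, c, d, e, f = state
    if move == "L":                     # cycle the first three tiles
        return (b, c, a, d, e, f)
    if move == "R":                     # cycle the last three tiles
        return (a, b, c, e, f, d)
    return (a, b, d, c, e, f)           # "S": swap the middle pair

def learn_estimates(max_depth=6, samples=200, seed=0):
    """Bellman-style backups over states sampled at growing scramble depths."""
    rng = random.Random(seed)
    value = {GOAL: 0}                   # known: the solved state is 0 away
    for depth in range(1, max_depth + 1):   # curriculum: easy to hard
        for _ in range(samples):
            state = GOAL
            for _ in range(depth):          # scramble `depth` moves deep
                state = apply_move(state, rng.choice(MOVES))
            # One move costs 1, then trust the best neighbour's estimate;
            # unseen states default to the current depth as a crude guess.
            best = min(value.get(apply_move(state, m), depth) for m in MOVES)
            value[state] = min(value.get(state, depth), best + 1)
    return value
```

As the scrambles get deeper, earlier (shallower) states already have estimates, so later states can bootstrap from them, which is the "teach it pretty easily" part of the comment.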

Comment author: CronoDAS 28 December 2009 06:50:33AM 2 points [-]

A less cheating-ish solution is to use some reasonable-seeming heuristic to guess how close you are to a solution. For example, you could just count the number of squares "in the right place" after a move sequence.
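A minimal sketch of this heuristic search, again on an invented toy permutation puzzle rather than a real cube (the puzzle, move set, and function names are assumptions for illustration). CronoDAS's "squares in the right place" heuristic is expressed here as a count of *misplaced* tiles, so lower is better, and the search is a standard A*-style best-first search over move sequences.

```python
# Best-first (A*-style) search guided by a count of misplaced tiles,
# on an invented toy puzzle standing in for a Rubik's cube.
import heapq

GOAL = (0, 1, 2, 3, 4, 5)

def apply_move(state, move):
    """Apply one of three hypothetical moves, each a fixed permutation."""
    a, b, c, d, e, f = state
    if move == "L":                     # cycle the first three tiles
        return (b, c, a, d, e, f)
    if move == "R":                     # cycle the last three tiles
        return (a, b, c, e, f, d)
    return (a, b, d, c, e, f)           # "S": swap the middle pair

def misplaced(state):
    """Heuristic: number of tiles not yet in their goal position."""
    return sum(1 for s, g in zip(state, GOAL) if s != g)

def solve(start):
    """Return a list of moves reaching GOAL, guided by the heuristic."""
    frontier = [(misplaced(start), 0, start, [])]
    seen = {start}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for move in "LRS":
            nxt = apply_move(state, move)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(
                    frontier,
                    (cost + 1 + misplaced(nxt), cost + 1, nxt, path + [move]))
    return None                         # unreachable for a solvable scramble
```

Because the visited set makes the search exhaustive in the worst case, a weak heuristic only slows it down rather than breaking it, which sidesteps the cold-start problem DanielLC describes.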

Comment author: xfc 20 March 2010 01:00:28PM 0 points [-]

(First post, bear with me... I find the site very interesting :)

I do agree!

But actually I would model the problem with what is known in some circles as a closed-loop controller, and specifically as a POMDP. Then apply Real-Time Dynamic Programming, embedding a heuristic so as to compute a rough but optimal h* without having to visit all the states.

Another way would be a graphical model; more specifically, a DAG would be nicely suited to the problem. Apply a simulated annealing approach (Ising model!) and, when you reach "thermal equilibrium" by minimizing some energy functional, you get the solution. Obviously this approach would involve learning the parameters of the model, instead of modelling the problem directly as in my first proposed approach.
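The annealing idea can be sketched on an invented toy permutation puzzle standing in for a cube (the puzzle, moves, and cooling schedule are all assumptions for illustration, not a working cube solver). The "energy functional" is simply the number of misplaced tiles; a random move is always accepted if it lowers the energy, and otherwise accepted with the Boltzmann probability exp(-delta/T) under a geometric cooling schedule.

```python
# Simulated annealing over an invented toy puzzle: minimise the
# "energy" (misplaced-tile count) with Metropolis acceptance.
import math
import random

GOAL = (0, 1, 2, 3, 4, 5)

def apply_move(state, move):
    """Apply one of three hypothetical moves, each a fixed permutation."""
    a, b, c, d, e, f = state
    if move == "L":
        return (b, c, a, d, e, f)
    if move == "R":
        return (a, b, c, e, f, d)
    return (a, b, d, c, e, f)

def energy(state):
    """Energy functional: number of misplaced tiles (0 means solved)."""
    return sum(1 for s, g in zip(state, GOAL) if s != g)

def anneal(start, temp=2.0, cooling=0.995, steps=20000, seed=0):
    """Return the lowest-energy state encountered during the anneal."""
    rng = random.Random(seed)
    state = best = start
    for _ in range(steps):
        if energy(state) == 0:          # "thermal equilibrium" at E = 0
            return state
        candidate = apply_move(state, rng.choice("LRS"))
        delta = energy(candidate) - energy(state)
        # Metropolis rule: always go downhill; sometimes go uphill.
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            state = candidate
            if energy(state) < energy(best):
                best = state
        temp = max(temp * cooling, 1e-3)    # geometric cooling, with a floor
    return best
```

Worth noting that on a real cube this energy landscape is exactly the problem CG_Morton raises below: positions a few moves from solved can have high energy, so plain annealing tends to get stuck in local minima.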

Quite geeky, excuse me!

Comment author: CG_Morton 14 June 2011 04:47:48PM 0 points [-]

The difficulty of solving a Rubik's cube is exactly that it doesn't respond well to heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves away from complete (by the methods I looked up, at least). In general, -humans- solve a Rubik's cube by memorizing sequences of moves with certain results, and then stringing these sub-solutions together. An AI, though, probably has the computational power to brute-force a solution much faster than it could manipulate the cube.
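The brute-force point can be sketched with iterative-deepening depth-first search, which needs no heuristic at all: it tries every move sequence of length 1, then 2, and so on, until one solves the puzzle. As before, an invented toy puzzle stands in for a real cube; real cube positions can be up to 20 moves deep, which is why practical solvers prune the search with group theory and pattern databases rather than raw enumeration.

```python
# Iterative-deepening DFS: pure brute force, no heuristic,
# on an invented toy puzzle standing in for a Rubik's cube.
GOAL = (0, 1, 2, 3, 4, 5)

def apply_move(state, move):
    """Apply one of three hypothetical moves, each a fixed permutation."""
    a, b, c, d, e, f = state
    if move == "L":
        return (b, c, a, d, e, f)
    if move == "R":
        return (a, b, c, e, f, d)
    return (a, b, d, c, e, f)

def iddfs(start, max_depth=10):
    """Try every move sequence, shortest first; return the first that solves."""
    def dfs(state, depth, path):
        if state == GOAL:
            return path
        if depth == 0:
            return None
        for move in "LRS":
            found = dfs(apply_move(state, move), depth - 1, path + [move])
            if found is not None:
                return found
        return None

    for depth in range(max_depth + 1):  # deepen the limit one move at a time
        solution = dfs(start, depth, [])
        if solution is not None:
            return solution
    return None
```

With branching factor 3 this is cheap here, but on a real cube the branching factor is 18 and the depth up to 20, which is where "computational power" does the heavy lifting.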

The more interesting question (I think) is how it figures out a model for the cube in the first place. What makes the cube a good problem is that it's designed to match human pattern intuitions (in that we prefer the colors to match, and we quickly notice the seams that we can rotate through), but an AI has no such intuitions.

Comment author: DanielLC 15 June 2011 02:28:04AM 0 points [-]

The difficulty of solving a Rubik's cube is exactly that it doesn't respond well to heuristics. A cube can be 5 moves from solved and yet look altogether a mess, whereas a cube with all but one corner correct is still some 20 moves away from complete (by the methods I looked up, at least).

I don't know the methods you used, but the only ones I know of have certain "steps" where you can easily tell what step it's on. For example, by one method, anything that's five moves away will have all but two sides complete.

Comment author: danlowlite 20 August 2010 02:24:29PM 5 points [-]

Wouldn't the AI have to discover that it is something to be solved, first? Give a kid such a puzzle and she's likelier to put it in her mouth than even try.

Unless I'm being obtuse.

Comment author: NickiH 18 December 2010 05:32:28PM 2 points [-]

You're right, and I think this is a mistake a lot of people make when thinking about AI - they assume that an AI's being intelligent means it also knows a lot. Like the child, its specific knowledge (such as the fact that there is something to solve) is something it has to learn, or be taught, over time.

Comment author: bigjeff5 30 January 2011 09:51:45PM -1 points [-]

Curiosity could be built-in, I don't see the problem with that.

It seems to be built-in for humans - we don't learn to be curious, though we can learn not to be.

Comment author: danlowlite 31 January 2011 02:27:52PM 1 point [-]

It could be built in. I agree. But the child is more curious about its texture and taste than about how the pieces fit together. I had to show my child a puzzle and solve it in front of her to get her to understand it.

Then she took off with it. YMMV.

Good point, though.

Comment author: bigjeff5 31 January 2011 05:37:12PM 0 points [-]

But the child is more curious about its texture and taste than about how the pieces fit together.

But as you see, there was an initial curiosity there. They may not be able to make certain leaps that lead them to things they would be curious about, but once you help them make the leap they are then curious on their own.

Also, there are plenty of things some people just aren't curious about, or interested in. You can only bring someone so far, after which they are either curious or not.

It would be very interesting to do the same thing with an AI, just give it a basic curiosity about certain things, and watch how it develops.

Comment author: CCC 21 October 2012 10:37:54AM -1 points [-]

Consider how this could be tested. One would write a program that generates a virtual Rubik's cube and passes it on to the AI to be solved (this avoids the complexity of first having to learn how to control robotic hands). It can't just randomly assign colours to sides, lest it end up with an unsolvable cube. Hence, the preparatory program starts with a solved cube and then applies a random sequence of moves to it.

This will almost certainly be done on the same computer as the AI is running on. A good AI, therefore, should be able to learn to inspect its own working memory, and observe other running threads on the system - it will simply observe the moves used to shuffle the cube, and can then easily reverse them if asked.

It is possible, of course, for test conditions to be altered to avoid this solution. That would, I think, be a mistake - the AI will be able to learn a lot from inspecting its own running processes (combined with the research that led to its development), and this behaviour should (in a known Friendly AI) be encouraged.

Comment author: stack 19 September 2016 11:02:17PM 0 points [-]

The problem with this is that the state space is so large that it cannot explore every transition, so it can't follow transitions backwards in a straightforward manner as you've proposed. It needs some kind of intuition to minimize the search space, to generalize it.

Unfortunately I'm not sure what that would look like. :(

Comment author: CCC 20 September 2016 10:24:09AM 0 points [-]

(Wow, this was from a while back)

I wasn't suggesting that the AI might try to calculate the reverse sequence of moves. I was suggesting that, if the cube-shuffling program is running on the same computer, then the AI might learn to cheat by, in effect, looking over the shoulder of the cube-shuffler and simply writing down all the moves in a list; then it can 'solve' the cube by simply running the list backwards.
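The "looking over the shoulder" cheat can be sketched directly: if the AI observes the scramble sequence, solving reduces to replaying the inverse of each move in reverse order, with no search at all. The toy move set below is invented for illustration (L and R are 3-cycles, so each is inverted by applying it twice; S is a swap and inverts itself).

```python
# CCC's cheat on an invented toy puzzle: record the scramble,
# then undo it by applying each move's inverse in reverse order.
GOAL = (0, 1, 2, 3, 4, 5)

def apply_move(state, move):
    """Apply one of three hypothetical moves, each a fixed permutation."""
    a, b, c, d, e, f = state
    if move == "L":                     # 3-cycle: inverse is L applied twice
        return (b, c, a, d, e, f)
    if move == "R":                     # 3-cycle: inverse is R applied twice
        return (a, b, c, e, f, d)
    return (a, b, d, c, e, f)           # "S": a swap, its own inverse

# Inverse of each move, expressed in the same move alphabet.
INVERSE = {"L": ["L", "L"], "R": ["R", "R"], "S": ["S"]}

def unscramble(observed_moves):
    """Undo a watched scramble: invert each move, in reverse order."""
    solution = []
    for move in reversed(observed_moves):
        solution.extend(INVERSE[move])
    return solution
```

This is why altered test conditions matter: the list-reversal "solution" works for that specific scramble without the AI ever learning anything about the cube itself.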

Comment author: stack 20 September 2016 01:12:25PM 0 points [-]

Oh I see: for that specific instance of the task.

I'd like to see someone make this AI, I want to know how it could be done.

Comment author: CCC 13 October 2016 01:43:14PM 0 points [-]

Observe the contents of RAM as it's changing?

I'm not 100% sure of the mechanism of said observations, but I'm assuming a real AI would be able to do things on a computer that we can't - much as we can easily recognise an object in an image.