
chaosmage comments on Open Thread for February 3 - 10 - Less Wrong Discussion

Post author: NancyLebovitz | 03 February 2014 03:30PM | 6 points


Comments (331)

You are viewing a single comment's thread.

Comment author: chaosmage | 05 February 2014 12:30:49PM | 2 points

You'll want to give it as little data as possible, so that you can analyze how it processes that data. What DeepMind does is put its AI prototypes into computer-game environments and watch whether, and how, they learn to play the game.
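A minimal sketch of that setup, with everything hypothetical rather than anything DeepMind actually uses: the agent is given no facts at all, only a reward signal from a tiny game-like environment (a 5-cell corridor, rewarded for reaching the last cell), and learns by tabular Q-learning. All parameter values here are made up for illustration.

```python
import random

# Hypothetical toy environment: a 5-cell corridor. The agent starts at
# cell 0 and is rewarded only for reaching cell 4. It receives no data
# beyond states, actions, and rewards.
N_STATES = 5
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(q_s, rng):
    if q_s[0] == q_s[1]:          # break ties randomly
        return rng.randrange(2)
    return 0 if q_s[0] > q_s[1] else 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: state x action
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < epsilon else greedy(q[state], rng)
            nxt, r, done = step(state, ACTIONS[a])
            # standard tabular Q-learning update
            q[state][a] += alpha * (r + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# Competence is visible from the outside: after training, the greedy
# policy heads straight for the rewarded cell (action 1 = move right).
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
```

Because the input is so sparse, the whole of what the agent "knows" fits in one small Q-table, which is exactly what makes its processing easy to inspect.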

Comment author: djm | 05 February 2014 02:38:42PM | 0 points

Yes, and the tricky problem is working out what data to give it in the first place. Do you give it core facts like the periodic table of the elements, the laws of physics, mathematics? If you don't give it some sort of framework or language for communicating, how will we know whether it is actually learning or just running random loops?

Comment author: chaosmage | 05 February 2014 02:48:34PM | 1 point

I fail to see the problem. We can watch it gain competence, and that is evidence of learning. This works for toddlers and for rats in mazes; why wouldn't it work for mute AGIs?
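The behavioral test being appealed to can be sketched without ever opening the black box: run the same agent through many trials and compare its average reward early versus late. A hypothetical example, with made-up payout rates, using an epsilon-greedy learner on a three-armed bandit:

```python
import random

ARM_MEANS = (0.2, 0.5, 0.8)  # hypothetical payout probabilities; arm 2 is best

def run(trials=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0, 0, 0]
    values = [0.0, 0.0, 0.0]   # the agent's running estimate of each arm
    rewards = []
    for _ in range(trials):
        if rng.random() < epsilon:
            arm = rng.randrange(3)                        # explore
        else:
            arm = max(range(3), key=lambda a: values[a])  # exploit
        r = 1.0 if rng.random() < ARM_MEANS[arm] else 0.0
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]    # incremental mean
        rewards.append(r)
    # judge learning purely from behavior: average reward early vs. late
    early = sum(rewards[:200]) / 200
    late = sum(rewards[-200:]) / 200
    return early, late

early, late = run()
```

If `late` reliably exceeds `early`, the agent is gaining competence, and no shared language or inspection of its internals was needed to establish that, which is the same standard applied to rats in mazes.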