gwern comments on AlphaGo versus Lee Sedol - Less Wrong Discussion

17 points · Post author: gjm · 09 March 2016 12:22PM

Comment author: ShardPhoenix 10 March 2016 12:15:16AM 6 points

One difference is that you can't get feedback as quickly from human judgement as from a win/lose signal in a game (where AlphaGo can play millions of games against itself).

Comment author: gwern 10 March 2016 12:44:36AM * 3 points

You can get fast feedback by reusing existing databases, if your RL agent can do off-policy learning. (You can think of this as what the supervised pre-training phase is 'really' doing.) Your agent doesn't have to take an action itself before it can learn from it. Consider experience replay buffers: you could imagine a song-writing RL agent with a huge experience replay buffer made up entirely of fragments of songs grabbed online (say, from the Touhou megatorrent, with its 50k tracks).
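For concreteness, here is a minimal sketch of that idea: tabular Q-learning trained entirely from a fixed buffer of transitions logged by a random policy, with the learner never acting in the environment itself. Everything here (the toy five-state chain environment, the constants, the variable names) is a hypothetical stand-in for illustration, not anything from AlphaGo or the comment itself.

```python
import random

import numpy as np

# Toy sketch: off-policy Q-learning from a fixed buffer of logged transitions.
# The agent never acts in the environment; it learns purely from the buffer.
N_STATES, N_ACTIONS = 5, 2
GAMMA, ALPHA = 0.9, 0.1

def step(state, action):
    """Hypothetical 5-state chain: action 1 moves right, action 0 moves left.
    Reaching the rightmost state yields reward 1 and ends the episode."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# Stand-in for "reusing existing databases": transitions logged by a *random*
# policy, the analogue of a replay buffer filled with scraped song fragments.
replay_buffer = []
state = 0
for _ in range(10_000):
    action = random.randrange(N_ACTIONS)
    next_state, reward, done = step(state, action)
    replay_buffer.append((state, action, reward, next_state, done))
    state = 0 if done else next_state

# Off-policy update: the max over next-state actions evaluates the greedy
# policy, even though the data was generated by a random one.
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(50_000):
    s, a, r, s2, done = random.choice(replay_buffer)
    target = r if done else r + GAMMA * Q[s2].max()
    Q[s, a] += ALPHA * (target - Q[s, a])

# States 0-3 should prefer action 1 (move right); state 4 is terminal and never
# appears as a starting state in the buffer, so its row stays at zero.
print("Greedy action per state:", Q.argmax(axis=1))
```

The max over next-state actions in the update target is what makes this off-policy: the buffer came from a random policy, but the values learned are the greedy policy's, which is why logged data (game records, song fragments) can substitute for fresh experience.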