moridinamael comments on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning - Less Wrong

Post author: ESRogs 27 January 2016 09:04PM


Comment author: moridinamael 28 January 2016 03:52:58PM 1 point

I included the word "sufficient" as an ass-covering move, because one facet of the problem is that we don't really know what will count as a "sufficient" amount of training data in a given context.

But what specific types of tasks do you think machines still can't do, given sufficient training data? If your answer is something like "physics research," I would rejoin that if you could generate training data for that job, a machine could do it.

Comment author: Lumifer 28 January 2016 04:01:50PM 3 points

Grand pronouncements with an ass-covering move look silly :-)

One obvious problem is that you are assuming stability. Consider modeling something that changes (in complex ways) with time -- like the economy of the United States. Is "training data" from the 1950s relevant to the current situation?

Generally speaking, the speed at which your "training data" goes stale puts an upper limit on the amount of relevant data you can possibly have, and that, in turn, puts an upper limit on the complexity of the model (NNs included) that you can build from it.
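To make the staleness point concrete, here is a toy sketch (a hypothetical illustration, not anyone's actual model): fit a model on data from an old regime where the underlying relationship has since changed, and watch the error blow up on the current regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the true relationship y = slope * x + noise drifts
# over time, so data from the old regime mis-describes the new one.
def make_data(slope, n=200):
    x = rng.uniform(0, 10, n)
    y = slope * x + rng.normal(0, 0.5, n)
    return x, y

# Old regime (think "1950s"): slope +2.0.
# New regime ("current"): the relationship has inverted to slope -1.0.
x_old, y_old = make_data(2.0)
x_new, y_new = make_data(-1.0)

# Fit ordinary least squares on the stale data only.
slope_hat, intercept_hat = np.polyfit(x_old, y_old, 1)

def mse(x, y):
    pred = slope_hat * x + intercept_hat
    return np.mean((pred - y) ** 2)

print(f"in-regime MSE:  {mse(x_old, y_old):.2f}")   # small
print(f"post-drift MSE: {mse(x_new, y_new):.2f}")   # large
```

The model is fine on data from its own era and badly wrong afterward; no amount of additional old-regime data fixes that, which is the sense in which staleness caps how much relevant data you can ever accumulate.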

Comment author: Nick_Tarleton 28 January 2016 04:27:07PM 1 point

I don't see how we know, or anything close to know, that deep NNs with "sufficient training data" would be sufficient for all problems. We've seen them be sufficient for many different problems, and we can expect them to be sufficient for many more, but all?