
bogus comments on [Link] AlphaGo: Mastering the ancient game of Go with Machine Learning - Less Wrong Discussion

14 Post author: ESRogs 27 January 2016 09:04PM



Comment author: bogus 30 January 2016 07:31:16PM *  0 points [-]

This is probably a misconception for several reasons. Firstly, given that we don't fully understand the learning mechanisms in the brain yet, it's unlikely that it's mostly one thing ...

We don't understand the learning mechanisms yet, but we're quite familiar with the data they use as input. "Internally" supervised learning is just another term for semi-supervised learning anyway. Semi-supervised learning is plenty flexible enough to encompass the "multi-objective" features of what occurs in the brain.
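To make the semi-supervised framing concrete, here is a minimal self-training (pseudo-labeling) sketch: fit on the few labeled points, pseudo-label the unlabeled pool, and refit on the combined set. The toy data and the nearest-centroid rule are illustrative choices of mine, not anything from the thread.

```python
import numpy as np

def nearest_centroid_predict(X, centroids):
    """Assign each row of X the label of its closest centroid."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def self_train(X_labeled, y_labeled, X_unlabeled, rounds=3):
    """Self-training: fit on labeled data, pseudo-label the unlabeled
    pool, then refit centroids on the combined set."""
    X, y = X_labeled, y_labeled
    for _ in range(rounds):
        centroids = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
        pseudo = nearest_centroid_predict(X_unlabeled, centroids)
        X = np.vstack([X_labeled, X_unlabeled])
        y = np.concatenate([y_labeled, pseudo])
    return centroids

# Two well-separated toy clusters; only one labeled point per class.
rng = np.random.default_rng(0)
X_l = np.array([[0.0, 0.0], [10.0, 10.0]])
y_l = np.array([0, 1])
X_u = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
centroids = self_train(X_l, y_l, X_u)
```

The point of the sketch is only that the unlabeled pool does most of the work of shaping the final model, which is the sense in which "internally" supervised signals fit the semi-supervised mold.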

The GTX TitanX has a peak performance of 6.1 teraflops, so you'd need only a few hundred to get a petaflop supercomputer (more specifically, around 175).

Raw and "peak performance" FLOPS numbers should be taken with a grain of salt. Anyway, given that a TitanX apparently draws as much as 240W of power at full load, your "petaflop-scale supercomputer" will cost you a few hundred thousand dollars and draw 42kW to do what the brain does within 20W or so. Not a very sensible use of that amount of computing power - except for the odd publicity stunt, I suppose. Like playing Go.
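For concreteness, the back-of-envelope arithmetic behind those figures (the per-card price is my rough assumption, not a number from the thread):

```python
PEAK_TFLOPS_PER_GPU = 6.1   # quoted TitanX peak, single precision
TARGET_TFLOPS = 1000.0      # one petaflop
WATTS_PER_GPU = 240         # quoted full-load power draw
DOLLARS_PER_GPU = 1000      # assumed rough retail price per card

ideal_gpus = TARGET_TFLOPS / PEAK_TFLOPS_PER_GPU   # ~164 cards at perfect scaling
quoted_gpus = 175                                  # the thread's figure, with some headroom
power_kw = quoted_gpus * WATTS_PER_GPU / 1000      # 42 kW, as stated above
cost_dollars = quoted_gpus * DOLLARS_PER_GPU       # cards alone; a full system costs more
```

Note the card count alone already lands in the "few hundred thousand dollars" range once hosts, interconnect, and cooling are included.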

It's just a circuit, and it obeys the same physical laws.

Of course. Neuroglia are not magic or "woo". They're physical things, much like silicon chips and neurons.

Comment author: jacob_cannell 31 January 2016 12:06:02AM *  0 points [-]

Raw and "peak performance" FLOPS numbers should be taken with a grain of salt.

Yeah, but in this case the best convolution and GEMM kernels can reach roughly 98% efficiency on the simple standard algorithms with dense input - which is what most ANNs use for almost everything.
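That efficiency figure is measured against the analytic FLOP count of a dense matrix multiply, 2·M·N·K (one multiply and one add per inner-product term). A quick sketch of the measurement, run here on whatever BLAS backs NumPy rather than on a GPU, so the ratio against the quoted 6.1 TFLOPS peak is illustrative only:

```python
import time
import numpy as np

# Dense single-precision GEMM: C = A @ B with M = N = K = 2048.
M = N = K = 2048
A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)

flops = 2 * M * N * K  # one multiply and one add per inner-product term

t0 = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - t0

achieved_tflops = flops / elapsed / 1e12
efficiency = achieved_tflops / 6.1  # fraction of the quoted TitanX peak
```

The same accounting is what vendor benchmarks use when they report a kernel as a percentage of peak.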

given that a TitanX apparently draws as much as 240W of power at full load, your "petaflop-scale supercomputer" will cost you a few hundred thousand dollars and draw 42kW to do what the brain does within 20W or so

Well, in the case of Go, and for an increasing number of domains, it can do far more than any brain; it learns far faster. Also, the current implementations are very far from optimal. There is at least another 100x to 1000x of easy performance improvement in the years ahead, so what 100 GPUs can do now will be done by a single GPU in just a year or two.

It's just a circuit, and it obeys the same physical laws.

Of course. Neuroglia are not magic or "woo". They're physical things, much like silicon chips and neurons.

Right, and they use a small fraction of the energy budget, and thus can't contribute much to the computational power.

Comment author: bogus 31 January 2016 12:11:54AM *  1 point [-]

Well, in the case of Go, and for an increasing number of domains, it can do far more than any brain; it learns far faster.

This might actually be the most interesting thing about AlphaGo. Domain experts who have looked at its games have marveled most at how truly "book-smart" it is. Even though it has not shown a lot of creativity or surprising moves (indeed, it was comparatively weak at the start of Game 1), it has fully internalized its training and can always come up with the "standard" play.

Right, and they use a small fraction of the energy budget, and thus can't contribute much to the computational power.

Not necessarily - there might be a speed vs. energy-per-op tradeoff, where neurons specialize in quick but energy-intensive computation, while neuroglia just chug along in the background. We definitely see such a tradeoff in silicon devices.
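The silicon version of that tradeoff is dynamic voltage/frequency scaling: switching energy per op scales roughly with V², and a slower clock tolerates a lower supply voltage, so slow circuits can be far cheaper per operation. A rough illustration; the scaling exponents are textbook first-order approximations of my choosing, not measurements of any particular chip:

```python
def relative_energy_per_op(freq_ratio):
    """First-order DVFS model: assume the supply voltage can be
    scaled roughly linearly with frequency (V ∝ f), and switching
    energy per op scales with V^2. Halving the clock then quarters
    the energy per operation."""
    voltage_ratio = freq_ratio  # simplifying assumption: V ∝ f
    return voltage_ratio ** 2

fast = relative_energy_per_op(1.0)   # full speed
slow = relative_energy_per_op(0.5)   # half speed, "chugging along"
```

Under this toy model the slow circuit does each op at a quarter of the energy, which is the shape of tradeoff the neuron/neuroglia split would need.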

Comment author: Kaj_Sotala 31 January 2016 11:25:26AM *  0 points [-]

Domain experts who have looked at its games have marveled most at how truly "book-smart" it is. Even though it has not shown a lot of creativity or surprising moves (indeed, it was comparatively weak at the start of Game 1), it has fully internalized its training and can always come up with the "standard" play.

Do you have links to such analyses? I'd be interested in reading them.

EDIT: Ah, I guess you were referring to this: https://www.reddit.com/r/MachineLearning/comments/43fl90/synopsis_of_top_go_professionals_analysis_of/