passive_fist comments on Academic Cliques - Less Wrong

Post author: ChrisHallquist 08 November 2013 04:27AM


Comment author: passive_fist 08 November 2013 05:27:44AM 9 points

Oh, there are many examples of this throughout science.

In my own area (machine learning), a decade ago there was a huge clique of researchers whose "consensus" was that ANNs were dead, SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that they were training ANNs improperly. Later researchers showed how to properly train ANNs, and the work of the Toronto machine intelligence group especially established that ANNs were quite superior to SVMs for many tasks.

In econometrics, subsequence time series (STS) clustering was widely thought to be a good approach for analyzing market movements. After decades of work and hundreds of papers on this technique, Keogh et al. showed in 2005 that the results of STS clustering are actually indistinguishable from noise!
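For readers unfamiliar with the technique: STS clustering means sliding a window along a single time series, treating each windowed subsequence as a point, and running an ordinary clustering algorithm (typically k-means) on those points. The sketch below is illustrative only; the series, window length, and cluster count are made-up toy values, and it demonstrates the procedure Keogh et al. critiqued, not their negative result.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "price" series: a random walk standing in for market data.
series = np.cumsum(rng.standard_normal(500))

# Step 1: slide a window of length w along the series to get subsequences.
w = 32
windows = np.stack([series[i:i + w] for i in range(len(series) - w + 1)])
# z-normalize each subsequence, as is standard in this literature.
windows = (windows - windows.mean(axis=1, keepdims=True)) / (
    windows.std(axis=1, keepdims=True) + 1e-12)

# Step 2: plain k-means on the subsequences.
k = 4
centers = windows[rng.choice(len(windows), size=k, replace=False)]
for _ in range(20):
    # Assign each subsequence to its nearest center.
    dists = ((windows[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    # Recompute each center as the mean of its assigned subsequences.
    for j in range(k):
        if np.any(labels == j):
            centers[j] = windows[labels == j].mean(axis=0)
```

Keogh et al.'s striking finding was that the cluster centers produced this way come out essentially the same regardless of the input series, which is why the output carries no information about the data.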

Another one, in physics, was pointed out by Lee Smolin in his book The Trouble with Physics. In string theory, it was the common but mistaken consensus opinion that Mandelstam had proven string theory finite. Actually, he had only eliminated some particular forms of infinities; the work of establishing string theory as finite is still ongoing.

Comment author: Daniel_Burfoot 08 November 2013 08:54:13PM 12 points

ANNs were dead, SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that they were training ANNs improperly.

Well... I suppose that characterization is true, but only if you allow the acronym "ANN" to designate a really quite broad class of algorithms.

It is true that multilayer perceptrons trained with backpropagation are inferior to SVMs. It is also true that deep belief networks trained with some kind of Hintonian contrastive divergence algorithm are probably better than SVMs. If you tag both the multilayer perceptrons and the deep belief networks with the "ANN" label, then it is true that the consensus in the field reversed itself. But I think it is more precise just to say that people invented a whole new type of learning machine.
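To make the "Hintonian contrastive divergence" reference concrete: the core training step for one layer of a deep belief network is a restricted Boltzmann machine update via CD-1 (one Gibbs step). Below is a minimal numpy sketch; the layer sizes, learning rate, and toy data are all illustrative assumptions, not anyone's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny restricted Boltzmann machine: 6 visible units, 3 hidden units.
n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)  # visible biases
b_h = np.zeros(n_hidden)   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, lr=0.1):
    # Positive phase: hidden probabilities given the data vector.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step down to the visibles and back up.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Update parameters by the difference of data and model correlations.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)
    return W, b_v, b_h

v = rng.integers(0, 2, size=n_visible).astype(float)  # one binary training vector
for _ in range(100):
    W, b_v, b_h = cd1_step(v, W, b_v, b_h)
```

In a deep belief network, layers trained this way are stacked greedily, each new RBM learning from the hidden activities of the one below; that layer-wise pretraining is what distinguished the approach from plain backpropagation through a multilayer perceptron.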

(I'm sure you know all this; I'm commenting for the benefit of readers who are not ML experts.)

Comment author: falenas108 08 November 2013 07:45:56AM 3 points

This is a different type of problem. OP is talking about people saying there is a consensus when actually there's a lot of disagreement. You're talking about cases where there was (some kind of) a consensus, but that consensus was wrong.

Comment author: ChrisHallquist 08 November 2013 02:09:00PM -1 points

That's not clear to me from reading the comment. passive_fist, can you clarify?

Comment author: passive_fist 08 November 2013 03:36:54PM 3 points

In all of the cases I described except the last, it wasn't a consensus at all, but a perceived consensus within a subset of the community.

Comment author: falenas108 08 November 2013 05:22:02PM 0 points

I apologize, then; that wasn't how I read it. When you said "huge clique" and "widely thought," I thought you were saying that the majority of the field fell into those groups.