buybuydandavis comments on Paradigm shifts in forecasting - Less Wrong

Post author: VipulNaik 08 May 2014 07:38PM


Comments (6)


Comment author: buybuydandavis 10 May 2014 02:56:06AM 1 point

I wouldn't generalize too much from a forecasting competition.

Per Wolpert's No Free Lunch theorems, algorithm performance depends on fit to problem domain. The winner is likely someone who lucked out: the chosen performance evaluation fit his algorithm better than it fit his competitors'. It doesn't mean he'll win the next competition, and it doesn't mean he isn't good, but it likely means he was both good and lucky.
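A rough statement of the result being invoked (my paraphrase of Wolpert's off-training-set theorem, not his original notation):

```latex
% Averaged uniformly over all possible target functions f, any two learning
% algorithms a_1 and a_2 have the same expected off-training-set error
% given a training set of size m:
\sum_{f} E(\mathrm{error} \mid f, m, a_1) \;=\; \sum_{f} E(\mathrm{error} \mid f, m, a_2)
```

Above-average performance on one subset of problems is exactly paid for by below-average performance elsewhere, which is why fit to the problem domain is doing all the work.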

How do we judge the potential and promise of the new complicated forecasting method?

Theory and judgment play a part.

When I first saw the Deep Learning method presented by Hinton, I was confident it would perform well before seeing any results, because it looked like a great theoretical approach that attacked the problem the right way.

Same thing with Wolpert and Stacked Generalization.
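Stacked generalization is concrete enough to sketch: level-0 models make out-of-sample predictions, and a level-1 model learns how to combine them. A toy sketch follows; the two base forecasters and the least-squares combiner are my illustrative choices, not Wolpert's exact setup.

```python
# Toy sketch of stacked generalization (illustrative, not Wolpert's exact setup):
# two level-0 forecasters, plus a level-1 least-squares blend trained on their
# out-of-sample predictions.

def mean_forecaster(history):
    """Level-0 model: predict the next value as the mean of the history."""
    return sum(history) / len(history)

def last_value_forecaster(history):
    """Level-0 model: predict the next value as the latest observation."""
    return history[-1]

FORECASTERS = (mean_forecaster, last_value_forecaster)

def fit_blend_weights(series, warmup=3):
    """Level-1 step: least-squares weights over out-of-sample level-0 predictions."""
    # Accumulate the normal equations (P^T P) w = P^T y for the two forecasters.
    a = b = d = e = f = 0.0
    for t in range(warmup, len(series)):
        history, y = series[:t], series[t]
        p1, p2 = (fc(history) for fc in FORECASTERS)
        a += p1 * p1; b += p1 * p2; d += p2 * p2
        e += p1 * y;  f += p2 * y
    det = a * d - b * b  # assumes the level-0 predictions are not collinear
    # Cramer's rule on the 2x2 system.
    return ((e * d - b * f) / det, (a * f - b * e) / det)

def stacked_forecast(series, weights):
    """Blend the level-0 forecasts with the learned level-1 weights."""
    w1, w2 = weights
    return w1 * mean_forecaster(series) + w2 * last_value_forecaster(series)
```

On the linear trend `[1, 2, 3, 4, 5, 6, 7, 8]`, `fit_blend_weights` learns weights (2, 0) and `stacked_forecast` returns 9.0, the exact continuation, even though neither base forecaster predicts it correctly on its own.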

What to bet on? Things that look good theoretically but are currently cost-prohibitive to compute. As computers improve, there is an algorithmic land grab by researchers rushing into the areas that become computationally tractable.

Comment author: gwern 10 May 2014 09:11:47PM 2 points

> Per Wolpert's No Free Lunch theorems, algorithm performance depends on fit to problem domain.

Aren't all these forecasting competitions using real data from real-world problems, making NFL irrelevant?

Comment author: buybuydandavis 17 May 2014 11:20:46PM 0 points

NFL not relevant to the real world? Would you like to elaborate?

Comment author: gwern 18 May 2014 01:27:32AM 2 points

Real-world problems are not a random sampling from all possible problems and there's plenty of structure to exploit, so invoking NFL in this context seems odd to me.

Comment author: buybuydandavis 18 May 2014 10:55:38PM 0 points

A real-world competition isn't a random sample of anything. It's a selection of some problems, with some data. The performance of any algorithm will depend on fit to those problems, with those data.

My takeaways from the NFL theorems: the problems in the real world are some structured subset of all possible problems, and the performance of any generalizer on a problem will depend on its fit to that problem.

Comment author: gwern 20 May 2014 05:27:19PM 0 points

> The performance of any algorithm will depend on fit to those problems, with those data.

That's not chopped liver.