Vinayak Pathak

Comments

Thanks, this clarifies many things! Thanks also for linking to your very comprehensive post on generalization.

To be clear, I didn't mean to claim that VC theory explains NN generalization. It is indeed famously bad at explaining modern ML. But "models have singularities and thus number of parameters is not a good complexity measure" is not a valid criticism of VC theory. If SLT indeed helps figure out the mysteries from the "understanding deep learning..." paper then that will be amazing!

But what we'd really like to get at is an understanding of how perturbations to the true distribution lead to changes in model behavior.

Ah, I didn't realize earlier that this was the goal. Are there any theorems that use SLT to quantify out-of-distribution generalization? The SLT papers I have read so far seem to still be talking about in-distribution generalization, with the added comment that Bayesian learning/SGD is more likely to give us "simpler" models and simpler models generalize better. 

I have also been looking for comparisons between classical theory and SLT that make the deficiencies of the classical theories of learning clear, so thanks for putting this in one place.

However, I find the narrative of "classical theory relies on the number of parameters but SLT relies on something much smaller than that" to be a bit of a strawman of the classical theory. VC theory, which is a central part of the classical theory of generalization, already depends only on the number of behaviours induced by your model class rather than on the number of parameters. Its predictions still fail to explain the generalization of neural networks, but several other complexity measures have already been proposed.

Neural networks are intrinsically biased towards simpler solutions. 

Am I correct in thinking that being "intrinsically biased towards simpler solutions" isn't a property of neural networks, but a property of the Bayesian learning procedure? The math in the post doesn't use much that is specific to NNs, and it seems like the same conclusions can be drawn for any model class whose loss landscape has many minima of varying complexities?
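
To make my question concrete, here's a toy sketch of the intuition I have in mind (my own hypothetical example, not from the post): a one-dimensional Bayesian posterior over a loss with two equally good minima, one sharp and one degenerate. The preference for the "simpler" (more degenerate) minimum falls out of the Bayesian integral alone, with no reference to neural networks.

```python
import numpy as np

# Toy one-parameter "model class": two global minima with equal loss but
# different local geometry -- a sharp quadratic basin near w = -1 and a
# flatter, more degenerate quartic basin near w = +1.
def loss(w):
    return np.minimum((w + 1.0) ** 2, (w - 1.0) ** 4)

n = 1000                                    # sample size; posterior ~ exp(-n * loss)
w = np.linspace(-2.0, 2.0, 200001)
posterior = np.exp(-n * loss(w))
posterior /= np.trapz(posterior, w)         # normalize to a density

mass_sharp = np.trapz(posterior[w < 0], w[w < 0])
mass_flat = np.trapz(posterior[w >= 0], w[w >= 0])
print(f"posterior mass near the sharp minimum: {mass_sharp:.3f}")
print(f"posterior mass near the flat minimum:  {mass_flat:.3f}")
# The more degenerate basin collects most of the posterior mass, and its
# advantage grows with n -- nothing here is specific to neural networks.
```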

Perhaps I have learnt statistical learning theory in a different order than others, but in my mind, the central theorem of statistical learning theory is that learning is characterized by the VC-dimension of your model class (here I mean learning in the sense of supervised binary classification, but similar dimensions exist for some more general kinds of learning as well). VC-dimension is a quantity that does not even mention the number of parameters used to specify your model, but depends only on the number of different behaviours induced by the models in your model class on sets of points. Thus if multiple parameter values lead to the same behaviour, this isn't a problem for the theory at all because these redundancies do not increase the VC-dimension of the model class. So I'm a bit confused about why singular learning theory is a better explanation of generalization than VC-dimension based theories. 
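
As a toy illustration of that last point (my own example, with made-up helper names), counting the behaviours induced on a few points by a simple threshold classifier gives the same answer whether the class is parameterized minimally or with a redundant extra parameter:

```python
import numpy as np

def dichotomies(predict, params, xs):
    """Count distinct labelings ("behaviours") a model class induces on xs."""
    return len({tuple(predict(p, xs)) for p in params})

xs = np.array([0.1, 0.4, 0.7, 0.9])           # a fixed set of 4 points
thresholds = np.linspace(-0.5, 1.5, 401)

# Minimal parameterization: a single threshold t.
plain = [(t,) for t in thresholds]
predict_plain = lambda p, xs: (xs > p[0]).astype(int)

# Redundant parameterization: (a, b) with effective threshold a + b, so many
# different parameter settings give exactly the same classifier.
redundant = [(a, b) for a in thresholds for b in (-0.3, 0.0, 0.3)]
predict_red = lambda p, xs: (xs > p[0] + p[1]).astype(int)

print(dichotomies(predict_plain, plain, xs))    # 5 behaviours on 4 points
print(dichotomies(predict_red, redundant, xs))  # still 5: redundancy adds nothing
```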

On the other hand, one potential weakness of singular learning theory seems to be that its complexity measures depend on the true data distribution (as opposed to the VC-dimension, which depends only on the model class)? I think what we want from any theory of generalization is that it should give us a prediction procedure that takes any learning algorithm as input and predicts whether it will generalize or not. This procedure cannot require knowledge of the true data distribution, because if we knew the data distribution we would not need to learn anything in the first place. If the claim is that it only needs to know certain properties of the true distribution that can be estimated from a small number of samples, then it would be nice to have a proof of such a claim (I'm not sure if one exists). Also note that if the procedure is allowed access to samples, then predicting whether your model generalizes is as simple as checking its performance on a test set.
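
For that last point, here's a minimal sketch of what I mean (the standard Hoeffding bound for a single fixed model evaluated on i.i.d. held-out samples; the numbers are just an example):

```python
import math

def test_set_bound(test_error, n_test, delta=0.05):
    """With probability >= 1 - delta over an i.i.d. test set of size n_test,
    the true error of a single fixed model exceeds its observed test error by
    at most sqrt(log(1/delta) / (2 * n_test))  (Hoeffding's inequality)."""
    return test_error + math.sqrt(math.log(1.0 / delta) / (2.0 * n_test))

# 5% observed test error on 10,000 held-out samples certifies ~6.2% true error
# with 95% confidence -- no knowledge of the data distribution is needed beyond
# the samples themselves.
print(f"{test_set_bound(0.05, 10_000):.4f}")
```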

The participants will be required to choose a project out of a list I provide. They will be able to choose to work solo or in a group.

Is this list or an approximate version of it available right now? Since the application process itself requires a non-trivial time commitment, it might be nice to see the list before deciding whether to apply.

I read the paper, and overall it's an interesting framework. One thing I am somewhat unconvinced about (likely because I have misunderstood something) is its utility given the dependence on the world model. If we prove guarantees assuming a world model, but don't know what happens when the real world deviates from that world model, then we have a problem. Ideally we would perhaps want a guarantee akin to what's proved in learning theory, for example, that the error will be small for any data distribution, as long as the distribution remains the same during training and testing.

But perhaps I have misunderstood what's meant by a world model and maybe it's simply the set of precise assumptions under which the guarantees have been proved. For example, in the learning theory setup, maybe the world model is the assumption that the training and test distributions are the same, as opposed to a description of the data distribution. 

Hmm, but what if everything gets easier to produce at a similar rate to the consumer basket? Won't prices remain unaffected then?