IIRC @jake_mendel and @Kaarel have thought about this more, but my rough recollection is: a simple story about the regularization seems sufficient to explain the training dynamics, so a fancier SLT story isn't obviously necessary. My guess is that there's probably something interesting you could say using SLT, but nothing that simpler arguments about the regularization wouldn't tell you also. But I haven't thought about this enough.

Good catch, thanks! Fixed now.

It's worth noting that Jesse is mostly following the traditional "approximation, generalization, optimization" error decomposition from learning theory here - where "generalization" specifically refers to finite-sample generalization (gap between train/test loss), rather than something like OOD generalization. So e.g. a failure of transformers to solve recursive problems would be a failure of approximation, rather than a failure of generalization. Unless I misunderstood you?
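For reference, the decomposition being used here is the standard one (Bottou-Bousquet style; the notation is mine): let $f^*$ be the Bayes-optimal predictor, $f_{\mathcal{F}}$ the best predictor in the model class $\mathcal{F}$, $\hat{f}_n$ the empirical risk minimizer on $n$ samples, and $\tilde{f}_n$ the model the optimizer actually returns. The excess risk then telescopes as

$$R(\tilde{f}_n) - R(f^*) = \underbrace{R(f_{\mathcal{F}}) - R(f^*)}_{\text{approximation}} + \underbrace{R(\hat{f}_n) - R(f_{\mathcal{F}})}_{\text{generalization}} + \underbrace{R(\tilde{f}_n) - R(\hat{f}_n)}_{\text{optimization}},$$

so a model class that simply can't represent recursive solutions shows up in the first term, not the second.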

Repeating a question I asked Jesse earlier, since others might be interested in the answer: how come we tend to hear more about PAC bounds than MAC bounds?

Note that in the SLT setting, "brains" or "neural networks" are not the sorts of things that can be singular (or really, have a certain $\lambda$) on their own - instead they're singular for certain distributions of data.

This is a good point I often see neglected. Though there's some sense in which a model $p(x \mid w)$ can "be singular" independent of data: if the parameter-to-function map $w \mapsto p(\cdot \mid w)$ is not locally injective. Then, if a distribution $q$ minimizes the loss, the preimage of $q$ in parameter space can have non-trivial geometry.

These are called "degeneracies," and they can be understood for a particular model without talking about data. Though the actual $q$ that minimizes the loss is determined by data, so it's sort of like the "menu" of degeneracies is data-independent, and the data "selects one off the menu." Degeneracies imply singularities, but not necessarily vice versa, so they aren't everything. But we do think that degeneracies will be fairly important in practice.
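A minimal sketch of that "menu" picture, using the standard two-parameter toy model (my choice of example):

$$f_{a,b}(x) = a b \, x, \qquad (a,b) \in \mathbb{R}^2.$$

The parameter-to-function map $(a,b) \mapsto f_{a,b}$ fails to be locally injective along every level set $\{ab = c\}$, and you can read that off the model with no data in sight. If the data then makes $f \equiv 0$ the loss minimizer, the preimage is the union of the two coordinate axes, which is singular at the origin; if instead the data selects some $c \neq 0$, the preimage is a smooth hyperbola. Same menu of degeneracies, different item selected.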

A possible counterpoint - that you are mostly advocating for awareness, as opposed to making specific points - is moot, since pretty much everyone is aware of the problem now: society as a whole, policymakers in particular, and people in AI research and alignment.

I think this specific point is false, especially outside of tech circles. My experience has been that while people are concerned about AI in general, and very open to X-risk when they hear about it, there is zero awareness of X-risk beyond popular fiction. It's possible that my sample isn't representative here, but I would expect that to swing in the other direction, given that the folks I interact with are often well-educated New-York-Times-reading types, who are going to be more informed than average.

Even among those aware, there's also a difference between far-mode "awareness" in the sense of X-risk as some far away academic problem, and near-mode "awareness" in the sense of "oh shit, maybe this could actually impact me." Hearing a bunch of academic arguments, but never seeing anybody actually getting fired up or protesting, will implicitly cause people to put X-risk in the first bucket. Because if they personally believed it to be a big near-term risk, they'd certainly be angry and protesting, and if other people aren't, that's a signal those other people don't really take it seriously. People sense a missing mood here and update on it.

In the cybersecurity analogy, it seems like there are two distinct scenarios being conflated here:

1) Person A says to Person B, "I think your software has X vulnerability in it." Person B says, "This is a highly specific scenario, and I suspect you don't have enough evidence to come to that conclusion. In a world where X vulnerability exists, you should be able to come up with a proof-of-concept, so do that and come back to me."

2) Person B says to Person A, "Given XYZ reasoning, my software almost certainly has no critical vulnerabilities of any kind. I'm so confident, I give it a 99.99999%+ chance." Person A says, "I can't specify the exact vulnerability your software might have without it in front of me, but I'm fairly sure this confidence is unwarranted. In general it's easy to underestimate how your security story can fail under adversarial pressure. If you want, I could name X hypothetical vulnerability, but this isn't because I think X will actually be the vulnerability, I'm just trying to be illustrative."

Story 1 seems to be the case where "POC or GTFO" is justified. Story 2 seems to be the case where "security mindset" is justified.

Supposing that a particular vulnerability exists (not just as an example, but as the scenario that will actually happen) is very different from supposing that some vulnerability exists. Of course, in practice, someone simply saying "your code probably has vulnerabilities," while true, isn't very helpful, so you may still want to say "POC or GTFO" - but this isn't because you think they're wrong, it's because they haven't given you any new information.

Curious what others have to say, but it seems to me like this post is more analogous to story 2 than story 1.

I wish I had a more short-form reference here, but for anyone who wants to learn more about this, Rocket Propulsion Elements is the gold standard intro textbook. We used it in my university rocketry group, and it's a common reference in industry. Fairly well written, and you should only need high school physics and calculus.

Obviously this is all speculation, but maybe I'm saying that the universal approximation theorem implies that neural architectures are fractal in the space of all distributions (or some restricted subset thereof)?

Oh I actually don't think this is speculation: if (big if) you satisfy the conditions for universal approximation, then this is just true (specifically, that the image of parameter space $W$ under the parameter-to-function map is dense in function space). Like, for example, you can state Stone-Weierstrass as: for a compact Hausdorff space $X \subseteq \mathbb{R}^n$ and the continuous functions $C(X)$ under the sup norm $\lVert \cdot \rVert_\infty$, the subalgebra of polynomials is dense in $C(X)$. In practice you'd only have a finite-dimensional subset of the polynomials, so this obviously can't hold exactly, but as you increase the degree of the polynomials, they'll be more space-filling and the error bound will decrease.
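As a quick numerical illustration of that last sentence (a sketch of my own; the target function and the degrees are arbitrary choices), fit polynomials of increasing degree to a fixed continuous function and measure the sup-norm error on a grid:

```python
import numpy as np

# Target: an arbitrary continuous function on [0, 1], chosen only for illustration.
def f(x):
    return np.exp(-3 * x) * np.sin(8 * x)

x = np.linspace(0.0, 1.0, 2001)  # dense grid used to estimate the sup norm

for degree in (2, 4, 8, 16):
    # Least-squares polynomial fit of the given degree (Chebyshev basis for
    # numerical conditioning; the result is still just a polynomial of that degree).
    p = np.polynomial.Chebyshev.fit(x, f(x), deg=degree)
    sup_error = np.max(np.abs(f(x) - p(x)))  # sup-norm error estimated on the grid
    print(f"degree {degree:2d}: sup-norm error ~ {sup_error:.2e}")
```

The error shrinks as the degree grows; Stone-Weierstrass guarantees it goes to zero, but says nothing about the rate, which is where the dimension-dependence below bites.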

Curious, what's your beef with universal approximation? Stone-Weierstrass isn't quantitative - is that the reason?

The problem is that the dimension of $W$ required to achieve a given $\epsilon$ error bound grows exponentially with the dimension $d$ of your underlying space $X$. For instance, if you assume that weights depend continuously on the target function, $\epsilon$-approximating all $C^k$ functions on $[0,1]^d$ with Sobolev norm $\leq 1$ provably takes at least on the order of $\epsilon^{-d/k}$ parameters (DeVore et al.). This is a lower bound.

So for any realistic $d$, universal approximation is basically useless - the number of parameters required is enormous. Which makes sense, because approximation by basis functions is basically the continuous version of a lookup table.
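To put rough numbers on that (illustrative values I'm picking, not anything from above): with an effective input dimension of $d = 100$, smoothness $k = 2$, and target accuracy $\epsilon = 0.1$, the lower bound is on the order of

$$\epsilon^{-d/k} = (0.1)^{-100/2} = 10^{50}$$

parameters, which is hopeless. The count only becomes reasonable if the smoothness $k$ grows with $d$, or if the class of target functions is much smaller than a generic Sobolev ball.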

Because neural networks actually work in practice, without requiring exponentially many parameters, this also tells you that the space of realistic target functions can't just be some generic function space (even with smoothness conditions); it has to have some non-generic properties to escape the lower bound.

Sorry, I realized that you're mostly talking about the space of true distributions, whereas I was mainly talking about the "data manifold" (related to the structure of $p(x \mid w)$ as a function of $x$ for fixed $w$). You can disregard most of that.

Though, even in the case where we're talking about the space of true distributions, I'm still not convinced that the image of parameter space $W$ under $w \mapsto p(\cdot \mid w)$ needs to be fractal. Like, a space-filling assumption sounds to me like basically a universal approximation argument - you're assuming that the image of $W$ densely (or almost densely) fills the space of all probability distributions of a given dimension. But of course we know that universal approximation is problematic and can't explain what neural nets are actually doing for realistic data.
