To build intuition about content vs architecture in AI (which comes up a lot in discussions of AI takeoff involving Robin Hanson), I've been wondering about content size vs architecture size (where size is measured in number of bits).
Here's how I'm operationalizing content and architecture size for ML systems:
- content size: The number of bits required to store the learned model of the ML system (e.g. all the floating point numbers in a neural network).
- architecture size: The number of bits of source code. I'm not sure if it makes sense to include the source code of supporting software (e.g. standard machine learning libraries). (A rough sketch of how one might measure both quantities follows this list.)
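For concreteness, here is a minimal sketch of how the two quantities could be measured for an off-the-shelf model; the torchvision ResNet-18 is just a stand-in for whatever system is being measured, the file names are hypothetical, and 32 bits/parameter assumes float32 weights:

```python
# Minimal sketch: measuring "content size" vs "architecture size" in bits.
# The torchvision ResNet-18 and the file names are stand-ins for whatever
# ML system is actually being measured.
import os
import torchvision

model = torchvision.models.resnet18(weights=None)

# Content size: bits needed to store the learned parameters (float32 here).
n_params = sum(p.numel() for p in model.parameters())
content_bits = n_params * 32
print(f"content: {n_params:,} parameters = {content_bits / 8 / 1e6:.0f} MB")

# Architecture size: bits of source code defining/training the system
# (just the project's own scripts here, not the supporting libraries).
src_files = ["model.py", "train.py"]  # hypothetical project files
arch_bits = 8 * sum(os.path.getsize(f) for f in src_files if os.path.exists(f))
print(f"architecture: {arch_bits / 8 / 1e3:.0f} kB of source code")
```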
I tried looking at the AlphaGo paper to see if I could find this kind of information, but after about 30 minutes of trying I was unable to find what I wanted. I can't tell if this is because I'm not acquainted enough with the ML field to locate this information or because that information just isn't in the paper.
Is this information easily available for various ML systems? What is the fastest way to gather this information?
I'm also wondering about this same content vs architecture size split in humans. For humans, one way I'm thinking of it is as "amount of information encoded in inheritance mechanisms" vs "amount of information encoded in a typical adult human brain". I know that Eliezer Yudkowsky has cited 750 megabytes as the amount of information in human DNA, while emphasizing that most of this information is junk. That was in 2011, and I don't know whether there's a newer consensus or how to factor in epigenetic information. There is also content stored in the genes themselves, and I'm not sure how to separate content from architecture there.
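For reference, the 750 megabyte figure appears to be just the raw genome arithmetic (roughly 3 billion base pairs at 2 bits per base), before discounting junk or attempting any compression:

```python
# Where the ~750MB figure for human DNA comes from: raw base-pair arithmetic,
# before discounting junk DNA or attempting any compression.
base_pairs = 3e9        # ~3 billion base pairs in the haploid human genome
bits_per_base = 2       # 4 possible bases (A, C, G, T) -> 2 bits each
megabytes = base_pairs * bits_per_base / 8 / 1e6
print(f"~{megabytes:.0f} MB")  # ~750 MB
```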
I'm pretty uncertain about whether this is even a good way to think about this topic, so I would also appreciate any feedback on this question itself. For example, if this isn't an interesting question to ask, I would like to know why.
I don't think any of the AG-related papers specify the disk size of the model; they may specify total # of parameters somewhere but if so, I don't recall offhand. It should be possible to estimate from the described model architecture by multiplying out all of the convolutions by strides/channels/etc but that's pretty tricky and easy to get wrong.
I once loosely estimated, from the described architecture, when R.J. Lipton asked the same question on his blog, that the AZ model is probably somewhere around 300MB. So, large but not unusually so.
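To give a sense of that kind of back-of-the-envelope estimate: using the residual-tower description from the AlphaGo Zero paper (20 blocks of 256 3x3 filters), and ignoring biases, batch-norm parameters, and the exact head shapes, the multiplication looks roughly like this:

```python
# Back-of-the-envelope parameter count for an AlphaGo Zero-style network
# (20 residual blocks, 256 channels, 3x3 convolutions). Head sizes are rough
# and biases/batch-norm parameters are ignored, so treat the result as an
# order-of-magnitude estimate only.

def conv_weights(in_ch, out_ch, k=3):
    """Weights in one k x k convolution (no bias)."""
    return k * k * in_ch * out_ch

blocks, channels, board = 20, 256, 19 * 19

params = conv_weights(17, channels)                           # input conv (17 feature planes)
params += blocks * 2 * conv_weights(channels, channels)       # two 3x3 convs per residual block
params += conv_weights(channels, 2, k=1) + 2 * board * 362    # policy head: 1x1 conv + FC to 362 moves
params += conv_weights(channels, 1, k=1) + board * 256 + 256  # value head: 1x1 conv + FC layers

print(f"~{params / 1e6:.0f}M parameters = ~{params * 4 / 1e6:.0f} MB at float32")
# Roughly 24M parameters / ~96 MB; the 40-block variant roughly doubles that,
# which is in the same ballpark as the loose ~300MB estimate above.
```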
However, as I point out, if you are interested in interpreting that in an information-theoretic sense, you have to ask whether model compression/distillation/sparsification is relevant. The question of why NNs are so overparameterized, aside from being extremely important to AI risk and the hardware overhang question, is a pretty interesting one. There is an enormous literature (some of which I link here) showing an extreme range of size decreases/speed increases, with 10x being common and 100x not impossible, depending on details like how much accuracy you want to give up. (For AZ, you could probably get 10x with no visible impact on Elo, but if you were willing to search another ply or two at runtime, perhaps you could get another order of magnitude? It's a tradeoff: the bigger the model, the higher the value function's accuracy & the less search it needs to achieve a target Elo strength.)
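As a toy illustration of the sparsification point, one-shot magnitude pruning (keep only the largest weights by absolute value, store the rest as zeros) already shows how dispensable most of the weight mass is; the actual literature does much better with iterative pruning, retraining, quantization, and distillation, whereas this naive version would cost some accuracy:

```python
# Toy illustration of magnitude pruning: zero out the smallest-magnitude
# weights and see how small the model gets once stored sparsely. Real
# compression pipelines (iterative pruning + retraining, quantization,
# distillation into a smaller net) achieve much larger reductions than
# this one-shot version.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1_000_000).astype(np.float32)  # stand-in for a layer's weights

keep_fraction = 0.10                                      # keep the largest 10% by magnitude
threshold = np.quantile(np.abs(weights), 1 - keep_fraction)
pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

dense_bytes = weights.nbytes
# Sparse storage: roughly 4 bytes per surviving value + 4 bytes per index.
sparse_bytes = int(np.count_nonzero(pruned)) * (4 + 4)
print(f"dense: {dense_bytes / 1e6:.1f} MB, pruned (sparse): {sparse_bytes / 1e6:.1f} MB "
      f"({dense_bytes / sparse_bytes:.0f}x smaller)")
```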
But is that fair? After all, you can't learn that small neural network in the first place except by first passing through the very large one (as far as anyone knows). Similarly with DNA: you have enormous ranges of genome sizes, for no apparent good reason, even among closely related species, and viruses demonstrate that you can get absurd compression out of DNA by overlapping genes or reading them backwards (among other insane tricks); but such minified genomes may be quite fragile, and junk DNA and chromosomal or whole-genome duplications often lead to big genetic changes, adaptations, and speciations, so all that fat may be serving evolvability or robustness purposes. Like NNs, maybe you can only get that hyper-specialized efficient genome after passing through a much larger overparameterized genome. (Viruses, then, may get away with such tiny genomes by optimizing for relatively narrow tasks, applying extraordinary replication & mutation rates, and outsourcing as much as they can to regular cells or other viruses or other copies of themselves, like 'multipartite viruses'. And even then, some viruses have huge genomes.) https://slatestarcodex.com/2020/05/12/studies-on-slack/ and https://www.gwern.net/Backstop and https://www.gwern.net/Hydrocephalus might be relevant reading here.