Alexander Gietelink Oldenziel

(...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties.
                                                                                                           - Saharon Shelah

 

As a true-born Dutchman I endorse Crocker's rules.

For most of my writing, see my short-forms (new shortform, old shortform).

Twitter: @FellowHominid

Personal website: https://sites.google.com/view/afdago/home

Sequences

Singular Learning Theory

Wiki Contributions

Comments


Suppose one buys your thesis that most or all animals are conscious and feel intense pain. What is to be done? Upload the shrimp?

Thank you for the references Dan.

I agree that neural networks probably don't actually satisfy the padding argument on the nose, and I agree that the exact degeneracy is quite interesting (as I say at the end of the OP).

I do think that for large enough overparameterization the padding argument suggests the LLC might come close to the K-complexity in many cases. More interesting to me, though, is that the padding argument doesn't really require the programming language to be Turing-complete. In those cases the degeneracy will be proportional to complexity/simplicity measures that are specific to the programming language (/architecture class). Inshallah I will get to writing something about that soon.

The Padding Argument or Simplicity = Degeneracy

[I learned this argument from Lucius Bushnaq and Matthias Dellago. It is also already latent in Solomonoff's original work.]

Consider binary strings of a fixed length $N$.

Imagine feeding these strings into some Turing machine; we think of the strings as codes for a function. Suppose we have a function that can be coded by a short compressed string $s$ of length $k \ll N$. That is, the function is computable by a small program.

Now imagine uniformly sampling a random code of length $N$. How many of these codes implement the same function as the string $s$? It's close to $2^{N-k}$. Indeed, given the string $s$ of length $k$, we can 'pad' it to a string of length $N$ by writing the code

"run $s$, skip $t$"

where $t$ is an arbitrary string of length $N - k - c$, with $c$ a small constant accounting for the overhead. There are $2^{N-k-c}$ such binary strings $t$. So if our programming language has a simple skip / commenting-out functionality, we expect approximately $2^{N-k}$ codes of length $N$ encoding the same function as $s$. The fraction of all length-$N$ codes encoding this function is therefore about $2^{-k}$.

I find this truly remarkable: the degeneracy or multiplicity of a function is exponentially large in how compressible it is; equivalently, the fraction of codes implementing it falls off as $2^{-k}$ in its minimum description length $k$!

Just by sampling codes uniformly at random we get the Simplicity prior!!
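To make the counting concrete, here is a minimal toy sketch (mine, not part of the original argument): treat codes as length-$N$ bitstrings and pretend the language simply ignores everything after a fixed $k$-bit program $s$, so the padded codes can be counted by brute force.

```python
# Toy illustration of the padding argument (my own sketch, not from the
# original shortform). Codes are bitstrings of length N. We pretend the
# "language" ignores everything after a fixed k-bit program s, i.e. the
# trailing N - k bits act as a skipped comment.
from itertools import product

N = 12            # total code length
k = 4             # length of the minimal program s
s = (1, 0, 1, 1)  # an arbitrary "minimal" program of length k

# Count codes of length N that implement the same function as s:
# here, every code whose first k bits equal s (the rest is padding).
padded = sum(1 for code in product((0, 1), repeat=N) if code[:k] == s)

total = 2 ** N
print(padded, total, padded / total)   # 256 4096 0.0625  == 2**-k
```

The fraction of sampled codes implementing the function is exactly $2^{-k}$ in this idealized setup, independent of $N$.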

Why do Neural Networks work? Why do polynomials not work?

It is sometimes claimed that neural networks work well because they are 'Universal Approximators'. There are multiple problems with this explanation (see e.g. here), but a very basic one is that being a universal approximator is very common. Polynomials are universal approximators!

Many different neural network architectures work. In the limit of large data and compute, the differences between architectures start to vanish and very general scaling laws dominate. This is not the case for polynomials.

Degeneracy = Simplicity explains why: polynomials are uniquely tied down by their coefficients, so a learning machine that tries to fit polynomials does not have a 'good' simplicity bias that approximates the Solomonoff prior.

The lack of degeneracy applies to any set of functions that form an orthogonal basis: the decomposition of a target function into such a basis is unique, so there is no multiplicity and no implicit regularization / simplicity bias.

[I learned this elegant argument from Lucius Bushnaq.]
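As a quick numerical illustration of the contrast (my sketch; the specific points, the cubic, and the one-unit ReLU are illustrative choices of mine): a polynomial fit is pinned down uniquely by its coefficients, while even a one-hidden-unit ReLU "network" has a continuous family of parameters implementing the same function.

```python
# Rough numerical contrast (my sketch, not code from the original argument).
# Polynomials: the coefficients fitting given data are unique, so there is
# no degeneracy. ReLU nets: rescaling the hidden weight by a > 0 and the
# output weight by 1/a leaves the function unchanged, so whole families of
# parameters implement the same function.
import numpy as np

# Unique polynomial interpolant: 4 points determine a cubic exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = x**3 - 2 * x + 1
coeffs = np.polyfit(x, y, deg=3)   # only one coefficient vector works
print(np.round(coeffs, 6))         # approximately [1, 0, -2, 1]

# Degenerate ReLU parameterizations: f(x) = v * relu(w * x) is invariant
# under (w, v) -> (a * w, v / a) for any a > 0.
relu = lambda z: np.maximum(z, 0.0)
xs = np.linspace(-2, 2, 5)
w, v = 1.5, 2.0
for a in (1.0, 3.0, 10.0):
    print(np.allclose(v * relu(w * xs), (v / a) * relu(a * w * xs)))  # True
```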

The Singular Learning Theory and Algorithmic Information Theory crossover 

I described the padding argument as an argument, not a proof. That's because technically it only gives a lower bound on the number of codes equivalent to the minimal description code. The problem is that there are pathological examples where the programming language (e.g. the UTM) hardcodes that all small codes encode a single fixed function.

Once we take this problem into account, the Padding Argument is already in Solomonoff's original work: there is a theorem stating that the Solomonoff prior is equivalent to taking a suitable Universal Turing Machine, feeding it a sequence of (uniformly) random bits, and taking the resulting output distribution. To account for the pathological examples above, everything is asymptotic and up to a constant, like all results in algorithmic information theory. This also means that, like all other results in algorithmic information theory, it's unclear whether it is at all relevant in practice.
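Concretely, the statement I have in mind here (the standard coding theorem, not spelled out above) is: feeding uniformly random bits into a universal prefix machine $U$ induces the universal a priori probability

$$m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)}, \qquad -\log_2 m(x) \;=\; K(x) + O(1),$$

so the log-probability of sampling a program for $x$ matches its Kolmogorov complexity up to a machine-dependent additive constant.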

However, while this gives a correct proof, I think it understates the importance of the Padding Argument. That's because in practice we shouldn't expect the UTM to be pathological in this way. In other words, we should heuristically expect the simplicity $2^{-K(f)}$ to be basically proportional to the fraction of codes yielding $f$, for a large enough (overparameterized) architecture.

The bull case for SLT is now: there is a direct equality between algorithmic complexity and degeneracy. This has always been SLT dogma, of course, but until I learned about this argument it wasn't so clear to me how direct this connection was. The algorithmic complexity can be usefully approximated by the (local) learning coefficient $\lambda$!

EDIT: see Clift-Murfet-Wallbridge and Tom Waring's thesis for more. See below; thanks Dan.

The bull case for algorithmic information theory: the theory of algorithmic information, Solomonoff induction, AIXI, etc. is very elegant and in some sense gives answers to fundamental questions we would like to answer. The major problem is that it is both uncomputable and seemingly intractable. Uncomputability is perhaps not such a problem: uncomputability often arises from measure-zero, highly adversarial examples. But tractability is very problematic. We don't know exactly how hard compression is in practice, but it is likely intractable. However, the Padding Argument suggests that we should heuristically expect the simplicity $2^{-K(f)}$ to be basically proportional to the fraction of codes yielding $f$ for a large enough (overparameterized) architecture; in other words, it can be measured by the local learning coefficient.

Do Neural Networks actually satisfy the Padding argument?

Short answer: No. 

Long answer: Unclear. Maybe... sort of... and the difference might itself be very interesting...!

Stay tuned. 

Neural Networks have a bias towards Highly Decomposable Functions.

tl;dr Neural networks favor functions that can be "decomposed" into a composition of simple pieces in many ways - "highly decomposable functions". 

Degeneracy = bias under uniform prior

[see here for why I think bias under the uniform prior is important]

Consider a space $W$ of parameters used to implement functions, where each element $w \in W$ specifies a function via some map $\Phi: W \to \mathcal{F}$. Here, the set $W$ is our parameter space, and we can think of each $w$ as representing a specific configuration of the neural network that yields a particular function.

The mapping $\Phi$ assigns each point $w \in W$ to a function $f = \Phi(w)$. Due to redundancies and symmetries in parameter space, multiple configurations $w$ might yield the same function, forming what we call a fiber, or the 'set of degenerates', $\Phi^{-1}(f)$ of $f$.

This fiber is the set of ways in which the same functional behavior can be achieved by different parameterizations. If we uniformly sample parameters (codes) from $W$, the degeneracy of a function $f$, i.e. the size of its fiber, counts how likely it is to be sampled.
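Here is a small sketch of what I mean by fibers and degeneracy on a toy discrete parameter space (the grid, the map $\Phi$, and the one-unit ReLU form are all illustrative choices of mine, not anything specific to the argument):

```python
# Toy sketch (mine, not from the original): enumerate a tiny discrete
# parameter space W, map each parameter w to the function it implements
# (represented by its values on a fixed set of inputs), and group
# parameters into fibers. Uniform sampling over W then weights each
# function by the size of its fiber, i.e. its degeneracy.
from collections import Counter
from itertools import product

import numpy as np

inputs = np.linspace(-1.0, 1.0, 5)
relu = lambda z: np.maximum(z, 0.0)

# W = a small grid of (w1, w2) pairs; Phi(w) = w2 * relu(w1 * x).
grid = [-1.0, 0.0, 1.0, 2.0]
W = list(product(grid, repeat=2))

def phi(w):
    w1, w2 = w
    return tuple(np.round(w2 * relu(w1 * inputs), 6))  # hashable "function"

fibers = Counter(phi(w) for w in W)
for f, degeneracy in fibers.most_common(3):
    print(degeneracy / len(W), f)   # most degenerate functions first
```

In this toy example the zero function already occupies a large fraction of parameter space, which is exactly the degeneracy-as-probability picture above.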

The Bias Toward Decomposability

Consider a neural network architecture built out of $L$ layers. Mathematically, we can decompose the parameter space $W$ as a product

$$W = W_1 \times W_2 \times \cdots \times W_L,$$

where each $W_i$ represents the parameters for a particular layer. The function implemented by the network, $\Phi(w)$, is then a composition

$$\Phi(w_1, \ldots, w_L) = \phi_L(w_L) \circ \cdots \circ \phi_1(w_1),$$

where $\phi_i(w_i)$ is the function implemented by the $i$-th layer. For a function $f$, its degeneracy (the number of ways to parameterize it) is

$$|\Phi^{-1}(f)| \;=\; \sum_{(f_1, \ldots, f_L) \in \mathcal{D}(f)} \; \prod_{i=1}^{L} |\phi_i^{-1}(f_i)|.$$

Here, $\mathcal{D}(f)$ is the set of all possible decompositions $f = f_L \circ \cdots \circ f_1$ of $f$.

That means that functions that have many such decompositions are more likely to be sampled. 

In summary, the layered design of neural networks introduces an implicit bias toward highly decomposable functions. 
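A toy sketch of this counting (mine; the three-point domain and "layers as arbitrary maps" are illustrative assumptions, not a claim about real architectures): with two layers whose parameters are arbitrary maps on a three-element set, composite functions with many decompositions soak up most of the probability mass under uniform sampling of layer parameters.

```python
# Toy sketch (mine): a "two-layer network" on a 3-element set, where each
# layer's parameter space is the set of all maps {0,1,2} -> {0,1,2}.
# The degeneracy of a composite function g equals the number of pairs
# (f1, f2) with f2 o f1 = g; under uniform sampling of layer parameters,
# highly decomposable g are sampled more often.
from collections import Counter
from itertools import product

points = range(3)
layer = list(product(points, repeat=3))   # all 27 maps, stored as tuples

def compose(f2, f1):
    return tuple(f2[f1[i]] for i in points)

degeneracy = Counter(compose(f2, f1) for f1, f2 in product(layer, layer))

total = len(layer) ** 2                   # 729 parameter pairs
most, least = degeneracy.most_common(1)[0], degeneracy.most_common()[-1]
print("most decomposable:", most, most[1] / total)
print("least decomposable:", least, least[1] / total)
```

In this toy setting the constant maps (many decompositions) are sampled far more often than the bijections (essentially one decomposition per choice of the first layer), which is the bias the summary above describes.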

I think I speak for all of the LessWrong commentariat when I say I am sad to see you go. 

That said, congratulations for building such a wonderfully eigen website!

Looking for specific tips and tricks to break AI out of formal/corporate writing patterns. Tried style mimicry ('write like Hemingway') and direct requests ('be more creative') - both fell flat. What works?

Should I be using different AI models (I am using GPT and Claude)? The base models output an enormous creative storm, but somehow the RLHF has partially lobotomized LLMs such that they always seem to output either cheesy stereotypes or overly verbose academese/corporate-speak.

Is true Novelty a Mirage?

One view on novelty is that it's a mirage. Novelty is 'just synthesis of existing work, plus some randomness.'

I don't think that's correct. I think true novelty is more subtle than that. Yes, sometimes novel artforms or scientific ideas are about noisily mixing existing ideas. But does that describe all forms of novelty?

A reductio ad absurdum of the novelty-as-mirage point of view is that all artforms that have appeared since the dawn of time are simply noised versions of cave paintings. This seems absurd.

Consider AlphaGo. Does AlphaGo just noisily mix human experts? No: AlphaGo works on a different principle, and I would venture it strictly outcompetes anything based on averaging or smoothing over human experts.

AlphaGo is based on a different principle than averaging over existing data. Instead, AlphaGo starts with an initial guess at what good play looks like, perhaps imitated from previous play. It then plays out to long horizons, prunes the strategies that did poorly, and upscales the strategies that did well. It iteratively amplifies, refines and distills. I strongly suspect that approximately this modus operandi underlies much of human creativity as well.

True novelty is based on both the synthesis and refinement of existing work. 
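For what it's worth, here is a deliberately crude caricature (mine, and emphatically not AlphaGo's actual MCTS-plus-distillation pipeline) of the generic prune-and-amplify loop described above, with a one-dimensional "strategy" standing in for a policy:

```python
# Very rough schematic (my sketch) of the prune-and-amplify loop described
# above: start from rough guesses, evaluate by "playing out", prune the
# poor strategies, and amplify the good ones with small refinements.
import random

def rollout_score(strategy):
    # Stand-in for "play out to a long horizon": a noisy evaluation of how
    # well a strategy does; here the (unknown) optimum sits at 0.7.
    return -(strategy - 0.7) ** 2 + random.gauss(0, 0.01)

population = [random.random() for _ in range(32)]   # initial guesses
for step in range(50):
    scored = sorted(population, key=rollout_score, reverse=True)
    survivors = scored[: len(scored) // 2]          # prune poor strategies
    # "Amplify" the survivors: keep them and add perturbed (refined) copies.
    population = survivors + [s + random.gauss(0, 0.05) for s in survivors]

print(sum(population) / len(population))            # converges near 0.7
```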

Yes, that's worded too strongly; it's a result of me putting some key phrases into Claude and not proofreading. :p

I agree with you that most modern math is within-paradigm work.

I shall now confess to a great caveat. When at last the Hour is there the Program of the World is revealed to the Descendants of Man they will gaze upon the Lines Laid Bare and Rejoice; for the Code Kernel of God is written in category theory.
