"implies that machine learning alone is not a complete path to human-level intelligence."
I don't think this is even a little true, unless you are using definitions of human-level intelligence and machine learning that are very different from the ideas I have of them.
If you have a human who has never heard of the definition of prime numbers, how do you think they would do on this test? Am I allowed to supply my model with something equivalent to the definition?
Leading figures in mathematical physics, including Edward Witten and Alain Connes, believe that the distribution of primes and arithmetic geometry encode mathematical secrets of fundamental importance to physics. This is why the Langlands program and the Riemann Hypothesis are of great interest to mathematical physicists.
If number theory, besides being of fundamental importance to modern cryptography, allows us to develop a deep understanding of the source code of the Universe, then I believe that such advances are a critical part of human intelligence, and would be highly unlikely if the human brain had a different architecture.
I agree with your entire first paragraph. It doesn't seem to me that you have addressed my question though. You are claiming that this hypothesis "implies that machine learning alone is not a complete path to human-level intelligence." I disagree. If I try to design an ML model which can identify primes, is it fair for me to give it some information equivalent to the definition (no more information than a human who has never heard of prime numbers has)?
If you allow that it is fair for me to do so, I think I can probably design an ML model which will do this. If you do not allow this, then I don't think this hypothesis has any bearing on whether ML alone is "a complete path to human-level intelligence." (Unless you have a way of showing that humans who have never received any sensory data other than a sequence of "number:(prime/composite)label" pairs would do well on this.)
Does any ML model that tells cats from dogs get definitions thereof? I think the only input it gets is "picture:(dog/cat)label". It does learn to tell them apart, to some degree, at least. One would expect the same approach here. Otherwise you can ask right away for the sieve of Eratosthenes as a functional and inductive definition, in which case things get easy ...
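For concreteness, here is a minimal sketch of that functional definition in Python (purely illustrative):

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes up to `limit`."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Every multiple of n starting from n*n is composite.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```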
In that case, I believe your conjecture is trivially true, but has nothing to do with human intelligence or Bengio's statements. In context, he is explicitly discussing low dimensional representations of extremely high dimensional data, and the things human brains learn to do automatically (I would say analogously to a single forward pass).
If you want to make it a fair fight, you either need to demonstrate a human who learns to recognize primes without any experience of the physical world (please don't do this) or allow an ML model something more analogous to the data humans actually receive, which includes math instruction, interaction with the world, many brain cycles, etc.
I also believe my conjecture is true, however non-trivially. At least, mathematically non-trivially. Otherwise, all is trivial when the job is done.
Regarding your remark on finding low-dimensional representations, I have added a section on physical intuitions for the challenge. Here I explain how the prime recognition problem corresponds to reliably finding a low-dimensional representation of high-dimensional data.
Let P, P′, P′′ = "machine learning alone", "machine learning + noise", "is not a complete path to human-level intelligence".
A few follow-up questions: do you also think that P + P′′ == P′ + P′′? Is your answer proven, or more or less uncontroversial? (refs welcome!)
the empirical observation that deep learning models fail to approximate the Prime Counting Function
I can't find any empirical work on this...
If Bernhard Riemann knew of the Prime Counting Function, it would have had to be by other means than data compression
He obtained his "explicit formulas" by reasoning about an ideal object (his zeta function) which, by construction, contains information about all prime numbers.
Thank you for bringing up these points:
Either way, I believe that additional experiments may be enlightening, as the applied mathematics that mathematicians do is only true to the extent that it has verifiable consequences.
This might interest you: a language model is used to develop a model of inflation (expansion in the early universe), using a Kolmogorov-like principle (minimum description length).
Here (https://stats.stackexchange.com/questions/142906/what-does-pac-learning-theory-mean) is an accessible explanation. In simple terms, it means you have a reasonable estimate of the amount of data you need to guarantee that you can learn a concept correctly with high probability.
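For a rough sense of scale, the textbook bound for a finite hypothesis class in the realizable setting says that m ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice to reach error at most ε with probability at least 1 − δ. A quick sketch with purely illustrative numbers:

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Sufficient sample size for a finite hypothesis class in the realizable
    PAC setting: m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# Purely illustrative numbers: 2**20 hypotheses, 5% error, 99% confidence.
print(pac_sample_bound(2 ** 20, epsilon=0.05, delta=0.01))  # ~370 samples
```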
This sounds similar to whether a contemporary machine learning model can break a cryptographic cipher, a hash function, or something like that.
Yes and no. Yes, because prime inference with high accuracy would make codebreaking much easier. No, because, for example, in RSA you need to deal with semiprimes, and that setup seems different as per Sam Blake's research here: https://arxiv.org/abs/2308.12290
In the following analysis, Sasha Kolpakov, Managing Editor of the Journal of Experimental Mathematics, and I propose a machine learning challenge to verify a strong form of the Manifold Hypothesis due to Yoshua Bengio.
Motivation:
In the Deep Learning book, Ian Goodfellow, Yoshua Bengio and Aaron Courville credit the unreasonable effectiveness of Deep Learning to the Manifold Hypothesis, as it implies that the curse of dimensionality may be avoided for most natural datasets [5]. If the intrinsic dimension of our data is much smaller than its ambient dimension, then sample-efficient PAC learning is possible. In practice, this happens via the latent space of a deep neural network that auto-encodes the input.
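As a toy illustration of the gap between intrinsic and ambient dimension (a sketch, not taken from the book): data with two intrinsic degrees of freedom embedded in a 100-dimensional ambient space is almost entirely captured by its first two principal components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two intrinsic degrees of freedom, linearly embedded in a 100-dimensional ambient space.
latent = rng.normal(size=(5000, 2))
embedding = rng.normal(size=(2, 100))
data = latent @ embedding + 0.01 * rng.normal(size=(5000, 100))  # small ambient noise

# The singular value spectrum reveals the intrinsic dimension: only two directions matter.
singular_values = np.linalg.svd(data - data.mean(axis=0), compute_uv=False)
explained = singular_values ** 2 / np.sum(singular_values ** 2)
print(np.round(explained[:5], 4))  # the first two components carry almost all the variance
```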
Yoshua Bengio in particular argued that the hierarchical/compositional structure of the human brain exploits the Manifold Hypothesis as an inductive bias [7], so any efficiently computable function that is of interest to humans is most likely PAC-learnable. A strong form of this hypothesis, which is generally assumed by OpenAI and DeepMind, suggests that deep learning may be the ultimate path to AGI.
However, a plausible exception to Bengio's heuristic is the mysterious distribution of prime numbers, which has captured the imagination of the best mathematicians of all ages; Euler himself famously declared it a mystery "into which the human mind will never penetrate".
Though perhaps if Euler had access to GPUs he would have generalised this statement to the minds of machines. In fact, after careful deliberations with Steve Brunton, Marcus Hutter and Hector Zenil regarding the empirical observation that deep learning models fail to approximate the Prime Counting Function, it appears that not all efficiently computable functions that are of human interest are PAC-learnable.
In order to clarify the nature of these observations, Sasha Kolpakov and I rigorously formulated the Monte Carlo Hypothesis, which implies that machine learning alone is not a complete path to human-level intelligence. If Bernhard Riemann knew of the Prime Counting Function, it would have had to be by other means than data compression, for reasons that are clarified below.
The Monte Carlo Hypothesis:
The Prime Coding Theorem:
From the definition of the prime encoding $X_N = \{x_n\}_{n=1}^N$, where $x_n = 1$ if $n \in \mathbb{P}$ and $x_n = 0$ otherwise, Kolpakov and Rocke (2023) used Kolmogorov's theory of Algorithmic Probability to derive the Prime Coding Theorem [1]:

$$\mathbb{E}[K_U(X_N)] \sim \pi(N) \cdot H(X_{p_1}, \ldots, X_{p_{\pi(N)}}) \sim N$$

where

$$H(X_{p_1}, \ldots, X_{p_{\pi(N)}}) \sim \sum_{p \leq N} \frac{1}{p} \cdot \ln p \sim \ln N,$$

which implies that the locations of all primes in $[1, N] \subset \mathbb{N}$ are statistically independent of each other for large $N$.
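As a concrete sketch, the prime encoding can be generated directly, and a general-purpose compressor such as gzip gives a crude, computable upper bound in place of the uncomputable $K_U$ (the Bernoulli comparison sequence below is purely illustrative):

```python
import gzip
import random

N = 10 ** 4

def prime_encoding(limit):
    """X_N as a list of 0/1 flags: the n-th entry is 1 iff n is prime (1-indexed)."""
    flags = [0, 0] + [1] * (limit - 1)
    for n in range(2, int(limit ** 0.5) + 1):
        if flags[n]:
            for m in range(n * n, limit + 1, n):
                flags[m] = 0
    return flags[1:limit + 1]

x = prime_encoding(N)
density = sum(x) / N  # roughly 1 / ln N, by the Prime Number Theorem

random.seed(0)
bernoulli = [1 if random.random() < density else 0 for _ in range(N)]

def gzip_bytes(bits):
    """Compressed size in bytes: a crude, computable upper bound on description length."""
    return len(gzip.compress(bytes(bits)))

print("prime encoding X_N:", gzip_bytes(x), "bytes")
print("Bernoulli(~1/ln N):", gzip_bytes(bernoulli), "bytes")
```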
Monte Carlo Hypothesis:
If the best machine learning model predicts the next $N$ primes to be at locations $\{\hat{p}_i\}_{i=1}^N \subset \mathbb{N}$, then for large $N$ this model's statistical performance will converge to a true positive rate no better than

$$\frac{1}{N} \sum_{i=1}^N \frac{1}{\hat{p}_i} \leq -\ln\left(\frac{1}{N}\right) \cdot \frac{1}{N} = \frac{\ln N}{N}.$$

Hence, the true positive rate of any machine learning model converges to zero as $N \to \infty$.
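A quick numerical sanity check of this bound (a sketch in which every "prediction" $\hat{p}_i$ is taken to be an actual prime, i.e. the most favourable case):

```python
import math

def first_n_primes(n):
    """First n primes by trial division (fine for small n)."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

N = 1000
primes = first_n_primes(N)

lhs = sum(1 / p for p in primes) / N   # (1/N) * sum of 1/p_i
rhs = math.log(N) / N                  # -ln(1/N) * (1/N) = ln(N)/N
print(f"(1/N) sum 1/p_i = {lhs:.6f} <= ln(N)/N = {rhs:.6f}")
```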
Machine Learning Challenge:
Challenge:
Given the locations of all prime numbers up to $N$, predict the location of the next prime.
Training data:
The prime encoding $X_N$, where $N = 10^4$.
Test data:
The location of the next thousand prime numbers.
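A sketch of how this training/test split can be generated (the variable names are illustrative, not part of the official specification):

```python
N = 10 ** 4

def prime_encoding(limit):
    """Prime encoding X_limit as 0/1 flags, 1-indexed."""
    flags = [0, 0] + [1] * (limit - 1)
    for n in range(2, int(limit ** 0.5) + 1):
        if flags[n]:
            for m in range(n * n, limit + 1, n):
                flags[m] = 0
    return flags[1:limit + 1]

# Training data: the prime encoding X_N with N = 10^4.
train = prime_encoding(N)

# Test data: the locations of the next thousand primes beyond N.
test = []
candidate = N + 1
while len(test) < 1000:
    if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
        test.append(candidate)
    candidate += 1

print(sum(train), "primes up to", N)   # 1229
print("first test prime:", test[0])    # 10007
```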
Evaluation:
For the reasons given below, we will evaluate reproducible models on ten distinct rearrangements of the prime encoding $X_N$ and calculate the arithmetic mean of each model's true positive rate.
As the Expected Kolmogorov Complexity of a random variable is asymptotically equivalent to its Shannon Entropy, and Shannon Entropy is permutation invariant:
$$\forall \sigma \in S_N, \quad \mathbb{E}[K_U(\sigma \circ X_N)] \sim \mathbb{E}[K_U(X_N)]$$
It follows that the Prime Coding Theorem, and hence the Monte Carlo Hypothesis, is invariant to rearrangements of prime encodings.
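A sketch of the evaluation procedure (the `train_and_predict` argument is a placeholder for a submitted model, and `my_model` in the usage comment is hypothetical; this is not the official harness):

```python
import random

def true_positive_rate(predicted, actual):
    """Fraction of predicted locations that are genuine primes in the test range."""
    actual = set(actual)
    return sum(p in actual for p in predicted) / len(predicted)

def evaluate(train_and_predict, encoding, test_primes, trials=10, seed=0):
    """Mean true positive rate over `trials` rearrangements of the prime encoding."""
    rng = random.Random(seed)
    rates = []
    for _ in range(trials):
        shuffled = encoding[:]
        rng.shuffle(shuffled)                                  # sigma o X_N
        predicted = train_and_predict(shuffled, k=len(test_primes))
        rates.append(true_positive_rate(predicted, test_primes))
    return sum(rates) / len(rates)

# Usage, with `train` and `test` from the data sketch above:
#   score = evaluate(my_model, train, test)
```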
Submission guidelines:
Models may be submitted to the tournament-specific email of Alexander Kolpakov, Managing Editor of the Journal of Experimental Mathematics, before March 14, 2024: wignerweyl@proton.me
The first 30 models we receive shall be evaluated, and the top 3 models shall be rewarded.
Reward:
A hundred dollars for each percentage point of the true positive rate (p):
R(p) = $100.00 × p
Consequences for Artificial General Intelligence:
The challenge helps determine whether machine learning alone offers a complete path to human-level intelligence.
Yang-Hui He's experiments on the Prime Recognition problem:
Discussions with Yang-Hui He, a string theorist and number theorist at Oxford, motivated much of the theoretical analysis behind this scientific enterprise. In Deep Learning the Landscape [8], he finds that deep learning models capable of solving sophisticated computer vision problems converge to a true positive rate of no more than 0.1% on the Prime Recognition problem.
He summarizes these observations succinctly in [8].
Physical Intuitions for the Challenge:
In mathematical physics, the Riemann gas is a model in quantum statistical mechanics that illustrates deep correspondences between number theory, statistical physics, and dynamical systems. It helps us understand how the prime numbers describe the spectrum of a deterministic dynamical system with unbounded phase-space dimension.
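Concretely, in the Riemann gas (Julia's "primon gas") each prime $p$ carries a single-particle energy proportional to $\ln p$, so the many-particle states are labelled by the integers $n = \prod_p p^{k_p}$ with total energy $\ln n$ (in units where the overall energy scale is one), and the canonical partition function at inverse temperature $\beta$ is the Riemann zeta function:

$$Z(\beta) = \sum_{n \geq 1} e^{-\beta \ln n} = \sum_{n \geq 1} \frac{1}{n^{\beta}} = \zeta(\beta), \qquad \beta > 1.$$

Every prime contributes an independent degree of freedom, which is the sense in which the phase-space dimension is unbounded.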
Hence, the Prime Recognition problem corresponds to reliably finding a low-dimensional representation of high-dimensional data.
Update: As of the 8th of April 2024, there have been zero submissions since the deadline of the 14th of March, and no late submissions either. This may be because it is practically impossible to do better than the expected true positive rate, as Alexander Kolpakov and I explain in a recent publication: https://arxiv.org/abs/2403.12588
References:
2000. https://arxiv.org/abs/cs/0004001.