Epistemic status: I am writing mainly to clarify my own thoughts on this, but I think it is worth sharing what I have here. This is not meant to be a review of the paper itself, but rather a discussion of its implications.
So:
This is a very flashy claim from the authors of this paper, which has been discussed briefly on LW. They are referring to the Apperception Engine (surely a name Babbage would be proud of), which they claim is able to create human-readable causal models and to perform at a human level in certain domains. Wow! Or perhaps not.
The AE itself resembles a small portion of AIXI as it might be implemented by someone who had never heard of Bayes' theorem (or of AIXI). It acts on an incomplete sequence of "sensory inputs" (e.g. 123456789, of which only --34-6-8- might be shown to the system). It has a set of meta-rules which define a large space of what we might call "hypotheses" about the world. Each hypothesis consists of a set of initial conditions at timestep 0, plus a list of rules which both relate each timestep to the next and determine the predicted sensory input. Iterating these rules produces a predicted sequence of sensory inputs over time, which is compared with the visible elements of the actual sequence. The most important part of this process is that these hypotheses generally posit some unseen objects with various states: they are models of the world with internal structure and rules.
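To make that loop concrete, here is a minimal sketch in Python. It is my own illustration under loose assumptions (the names predict and consistent are invented); the real AE represents states and rules in a logic-programming language, not as Python functions.

```python
# Toy sketch of the unroll-and-compare loop described above (mine, not the
# paper's implementation).

def predict(initial_state, step_rule, readout, n_steps):
    """Unroll a hypothesis: iterate the hidden state and read off a
    predicted sensory input at each timestep."""
    state, trace = initial_state, []
    for _ in range(n_steps):
        trace.append(readout(state))
        state = step_rule(state)
    return trace

def consistent(hypothesis, observations):
    """A hypothesis is acceptable if its predictions match every *visible*
    element of the sequence (None marks a hidden element)."""
    initial_state, step_rule, readout = hypothesis
    trace = predict(initial_state, step_rule, readout, len(observations))
    return all(obs is None or obs == pred
               for obs, pred in zip(observations, trace))

# "--34-6-8-": only four elements of 123456789 are visible.
observations = [None, None, 3, 4, None, 6, None, 8, None]

# A hypothesis positing a hidden counter that increments each step.
counting = (1, lambda s: s + 1, lambda s: s)
assert consistent(counting, observations)
```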
To choose between multiple hypotheses which perfectly fit the visible elements of the sequence, a cost function is used, playing the role of a complexity-penalizing prior. (There are also some other constraints on hypotheses, but I think these are mostly irrelevant here.) These simplifications relative to AIXI make the AE computable (although it presumably sometimes fails to find a hypothesis at all), but this does not necessarily mean fast.
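Continuing the toy sketch, selection might look like the following, where cost is a stand-in for the paper's theory-size measure rather than a reproduction of it:

```python
# Among hypotheses that fit the visible data, keep the cheapest.
# `cost` stands in for the paper's theory-size measure.

def best_hypothesis(hypotheses, observations, cost):
    fitting = [h for h in hypotheses if consistent(h, observations)]
    return min(fitting, key=cost) if fitting else None  # may find nothing
```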
Computable does not necessarily mean fast
One way of thinking about this is that the AE is an example of a class of systems parallel to, but in many ways orthogonal to, the "class" of neural networks. It seems relevant that the AE is good at tasks that NNs are bad at, especially working with very small amounts of data. Nor is the AE the first system in this class: in the paper it is compared to similar programs designed for building logical models, and it is simply much more efficient than those predecessors in both space and time. What this class of systems currently lacks is an equivalent of backpropagation: a method for incrementally improving a hypothesis, the way gradient descent can reliably move a neural network towards accuracy.
Another comparison is to early chess programs. The AE performs a brute-force search of hypothesis space, analogous to a brute-force search of the tree of chess positions. While it may achieve human-level performance in a given domain, I think it is unlikely to scale up trivially to form accurate models of real-world systems. For that, a more efficient hypothesis-space search algorithm will be needed (much as modern chess programs are equipped with position-evaluation functions and better tree-search algorithms). Deep Blue was several insights away from playing Go (or StarCraft), even with Moore's law on its side.
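To see why brute force bites, here is a sketch (again my own framing, reusing consistent() from the earlier snippet) of the kind of enumerate-and-test loop involved. If the number of candidate hypotheses of cost c grows exponentially with c, so does the runtime:

```python
from itertools import count

def search(hypotheses_of_cost, observations):
    """hypotheses_of_cost(c) yields every candidate hypothesis of cost c
    (a hypothetical enumerator; its output is typically exponential in c).
    Loops forever if no hypothesis of any cost fits."""
    for c in count(1):
        for h in hypotheses_of_cost(c):
            if consistent(h, observations):
                return h  # first hit is also minimal-cost
```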
No Bayes', no problem?
The other main limitation of the AE is that it lacks probabilistic reasoning. The logical language in which it is written is built from finite-state objects rather than probabilities. This makes it very well suited to the kind of discrete, uncertainty-free tasks on which it was tested, but not to real-world data, which is generally continuous and noisy.
I suspect that a probabilistic system could be built on this architecture, and I believe that this would be a much smaller hurdle than finding an efficient hypothesis-search algorithm. I do not have explicit models of how it could be done, but nothing about the architecture seems to preclude incorporating randomness and numerical functions, or having each model output a probability distribution rather than an iron-clad prediction.
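One hypothetical first step (mine, not the paper's) would be to replace the hard consistency check in the sketch above with a likelihood under a simple noise model, trading complexity against fit MDL-style:

```python
import math

def mdl_score(hypothesis, observations, cost, eps=0.05):
    """Assume each prediction matches its visible observation with
    probability 1 - eps; lower score is better. A hypothetical
    MDL-style relaxation, not anything from the paper."""
    initial_state, step_rule, readout = hypothesis
    trace = predict(initial_state, step_rule, readout, len(observations))
    neg_log_likelihood = -sum(
        math.log(1 - eps) if obs == pred else math.log(eps)
        for obs, pred in zip(observations, trace)
        if obs is not None)
    return cost(hypothesis) + neg_log_likelihood
```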
What next for AE-like systems?
Probably not much for now, although I am willing to be convinced if someone with more insight weighs in. But that does not mean they are not worth thinking about. Being two or three steps away from a very general-looking AI is rather closer than I am comfortable with. On the other hand, systems which produce human-understandable, explicit models could very much be a positive step in the direction of safe AI.
I can also see uses for them in scientific modelling (particularly in biology), where many different factors, inputs, and outputs of a complex system must often be considered, and where a causal model is the gold standard. For example: predicting whether someone will develop dementia based on genes, sleep, diet, etc., while also producing a model which tells us something more about the disease. Whether such systems will actually be used in this manner is unclear to me. It may be that a tool AI powerful enough to build an effective model of a biological system from limited data will arrive only a very short time before a more agent-y AI takes off.