A basic mathematical structure of intelligence
An important concept here on LW is that of an intelligence singularity, or at least very rapid growth in intelligence. Although it seems mostly hopeless, it would be nice if we could find a mathematical approach that quantifies these things. I think the first point to note is that intelligence...
Although we cannot say this rigorously yet, since we have not chosen a definition of agent, I think it intuitively applies, and therefore (H2) can only hold when we restrict ourselves to some set of tasks, perhaps "reasonable tasks".
I wonder whether this issue disappears under the stochastic interpretation of a task, because "No Free Lunch" tasks that diagonalize against a model in a particular fashion have very low probability.
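As a rough sketch of that intuition (the prior $p$, the performance functional $R$, and the set $D_A$ are my own notation, not anything fixed above): suppose $p$ is a prior over tasks $t$, $R(A, t) \in [0, 1]$ is the performance of agent $A$ on task $t$, and $D_A$ is the set of tasks that diagonalize against $A$. If $p(D_A) \le \varepsilon$, then

\[
\mathbb{E}_{t \sim p}\big[R(A, t)\big]
  \;=\; \sum_{t \in D_A} p(t)\, R(A, t) \;+\; \sum_{t \notin D_A} p(t)\, R(A, t)
  \;\ge\; (1 - \varepsilon)\, \mathbb{E}_{t \sim p}\big[R(A, t) \,\big|\, t \notin D_A\big],
\]

and since $R \le 1$ the same split gives an upper bound within $\varepsilon$ of the conditional average. So as long as the adversarial tasks carry probability mass at most $\varepsilon$, the stochastic average is within $\varepsilon$ of the average over "reasonable" tasks, and the No Free Lunch construction loses most of its force.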