You didn't really respond to my argument. You just said: "It's all algorithmic, basta." The problem is that there is no algorithmic way to determine any algorithm, since if you try to find an algorithm that determines the algorithm, you are left with the even bigger problem of determining *that* algorithm. The universe can't run solely on algorithms, unless you invoke "God did it! He created the first algorithm" or "The first algorithm just appeared randomly out of nowhere". I think this statement is ridiculous, but there is no refutation for dogma. If the universe were so absurd, I could just as well be a Christian fundamentalist or randomly do nonsensical things (since it's all random either way).
No algorithm can determine the simple axioms of the natural numbers from anything weaker.
It is not clear that this means anything. You certainly have given no reasons to believe it.
What? The axioms of the natural numbers can't be determined, because they are axioms. If that's not true, derive "0 is a natural number" and "1 is the successor of 0" without any notion of numbers.
It means that there is no way that an AI could invent the natural numbers. Hence there are important inventions that AIs can't make - in principle.
There is simply no way to derive the axioms from anything that doesn't already include them.
I think you are confusing derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with some other axioms that imply them) and (a quite different sort of) derivations outside that formal system, such as whatever Peano did to arrive at his axioms. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don't know what algorithms would be best.
Instead of asserting that, just try to derive the simplest axioms of arithmetic from something that is not more complex (deriving them from something more complex of course can't be the whole story, since we only have a limited supply of more complex systems). It doesn't work. The axioms of arithmetic are irreducibly simple - too simple to be derived.
I know of no reason to believe this, and it seems to me that if it seems true it's because what you imagine when you think about following rules is very simple rule-following, the sort of thing that might be done by a computer program at most a few pages in length running on a rather slow computer. In particular ...
Not at all! It doesn't matter how complex the rules are. You can't go beyond the axioms of the rules, because that is what makes the rules rules. Yet still it is easily possible to invent new axioms. This is essential for intelligence, yet an AI can't do it, since it only works by its axioms. It can do it on a meta-level, for sure, but that's not enough, since in this case the new axioms are just derived from the old ones. Well, or it uses user input, but in this case the program isn't a self-contained intelligence anymore.
since at the very least the rules can't be determined by rules
Whyever not? They have to be different rules, that's all.
And how are these rules determined? Either you have an infinite chain of rules, which itself can't be derived from any rule, or you start by picking out a rule without any rule.
Instead, we should expect a singularity that happens due to emergent intelligence.
"Emergence" is not magic.
Really? I think it is, though of course not in any anthropomorphic sense. What else could describe, for example, the emergence of patterns out of cellular automata rules? It seems to me nature is inherently magical. We just have to be careful not to project our superstitious ideas of magic onto nature. Even materialists have to rely on magic at the most critical points. Look at the anthropic principle. Or at the question "Where do the laws of nature come from?". Either we deny that the question is meaningful or important, or we have to admit it is fundamentally mysterious and magical.
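To make the cellular automata example concrete, here is a minimal sketch of Wolfram's elementary Rule 110 - one specific instance of such a rule, chosen only for illustration. Each cell is updated from nothing but its own state and its two neighbours' states, yet the global pattern is intricate and irregular in a way nothing in the rule's description obviously foreshadows:

```python
# Elementary cellular automaton (Wolfram's Rule 110): a one-dimensional
# row of cells, each updated from just its own state and its two
# neighbours' states.
RULE = 110  # the 8-bit lookup table encoding the update rule

def step(cells):
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # start from a single live cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```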
No, I didn't say "it's all algorithmic, basta"; I said "so far as we know, it's all algorithmic". Of course it's possible that we'll somehow discover that actually our minds run on magic fairies and unicorns or something, but so far as I can tell all the available evidence is consistent with everything being basically algorithmic. You're the one claiming to know that that isn't so; I invite you to explain how you know.
I haven't claimed that the axioms of arithmetic are derived from something simpler. I have suggested that, for all we know, the process by which humans arrive at them may itself be algorithmic.
What I write here may be quite simple (and I am certainly not the first to write about it), but I still think it is worth considering:
Say we have an arbitrary problem that we assume has an algorithmic solution, and we search for the solution of the problem.
How can the algorithm be determined?
Either:
a) Through another algorithm that exists prior to that algorithm.
b) OR: Through something non-algorithmic.
In the case of AI, the only option is a), since there is nothing but algorithms at its disposal. But then we have the problem of determining the algorithm the AI uses to find the solution, and then it would have to determine the algorithm that determines that algorithm, etc...
Obviously, at some point we have to actually find an algorithm to start with, so in any case we eventually need something fundamentally non-algorithmic to determine a solution to a problem that is solvable by an algorithm.
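To make option a) concrete, here is a minimal sketch of what "an algorithm determining an algorithm" can look like - a toy brute-force search over a made-up three-instruction language (all names are illustrative). Note that the sketch only sharpens the question, since the search procedure is itself an algorithm that had to come from somewhere:

```python
from itertools import count, product

# A made-up three-instruction language; a "program" is a sequence of
# these primitive operations applied to an integer input.
PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
}

def run(program, x):
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def search(examples):
    """Enumerate programs in order of length until one fits all examples."""
    for length in count(1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program

# Find a program computing f(x) = 2x + 1 from three input/output examples.
print(search([(0, 1), (1, 3), (5, 11)]))  # -> ('double', 'inc')
```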
This reveals something fundamental we have to face with regard to AI:
Even assuming that all relevant problems are solvable by an algorithm, AI is not enough. Since there is no way to algorithmically determine the appropriate algorithm for an AI (this would result in an infinite regress), we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions. Even if we found a very powerful seed AI algorithm, there will always be more powerful seed AI algorithms that can't be determined by any known algorithm, and since we were able to find the first one, we have no reason to suppose we can't find another, more powerful one. If an AI recursively improves itself 100,000 times until it is 100^^^100 times more powerful, it will still be overtaken once a better seed AI is found - and that, ultimately, can't be done by an algorithm - so further increases of the most general intelligence always rely on something non-algorithmic.
But even worse, it seems obvious to me that, apart from the problem of finding the right algorithm, there are important practical problems that have no algorithmic solution at all (as opposed to theoretical problems like the halting problem, which are still tractable in practice).
In a sense, it seems all algorithms are too complicated to find the solution to the simple (though not necessarily easy) problem of giving rise to further general intelligence.
For example: No algorithm can determine the simple axioms of the natural numbers from anything weaker. We have to postulate them by virtue of simply seeing that they make sense. Thinking that AI could give rise to ever-improving *general* intelligence is like thinking that an algorithm can yield "there is a natural number 0 and every number has a successor that, too, is a natural number". There is simply no way to derive the axioms from anything that doesn't already include them. The axioms of the natural numbers are just obvious, yet can't be derived - the problem of finding the axioms of the natural numbers is too simple to be solved algorithmically. Yet it is obvious how important the notion of the natural numbers is.
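For reference, the two axioms quoted above can be written out as an inductive definition - a minimal sketch in Lean, with the type named MyNat to avoid clashing with Lean's builtin Nat. Nothing here is derived from anything weaker; the constructors are simply postulated:

```lean
inductive MyNat : Type
  | zero : MyNat                 -- "there is a natural number 0"
  | succ : MyNat → MyNat         -- "every number has a successor that,
                                 --  too, is a natural number"
```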
Even the best AI will always be fundamentally incapable of finding some very simple, yet fundamental principles.
AI will always rely on the axioms it already knows; it can't go beyond them (unless reprogrammed by something external). Every new thing it learns can only be learned in terms of already-known axioms. This is simply a consequence of the fact that computers/programs function according to fixed rules. But general intelligence necessarily has to transcend rules (since at the very least the rules can't be determined by rules).
I don't think this is an argument against a singularity of ever-improving intelligence. It just can't happen driven (solely or predominantly) by AI, whether through a recursively self-improving seed AI or through cognitive augmentation. Instead, we should expect a singularity that happens due to emergent intelligence. I think it is the interaction of different kinds of intelligence (like human/animal intuitive intelligence, machine precision, and the inherent order of the non-living universe, if you want to call that intelligence) that leads to increases in general intelligence, not just one particular kind of intelligence like the formal reasoning used by computers.