No, I didn't say "it's all algorithmic, basta"; I said "so far as we know, it's all algorithmic". Of course it's possible that we'll somehow discover that actually our minds run on magic fairies and unicorns or something, but so far as I can tell all the available evidence is consistent with everything being basically algorithmic. You're the one claiming to know that that isn't so; I invite you to explain how you know.
I haven't claimed that the axioms of arithmetic are derived from something simpler. I have suggested that for all we know, the process by which we found those axioms was basically algorithmic, though doubtless very complicated. (I'm not claiming that that algorithmic process is why the axioms are right. If you're really arguing not about the processes by which discoveries are made but about why arithmetic is the way it is, then we need to have a different discussion.)
it is easily possible to invent new axioms. This is essential for intelligence, yet an AI can't do it, since it only works by its axioms.
I'm afraid this is very, very wrong. Perhaps the following analogy will help: suppose I said "It is easily possible to contemplate arbitrarily large numbers, even ones bigger than 2^32 or 2^64. This is essential for intelligence, yet an AI can't do it, since it only works with 32-bit or 64-bit arithmetic." That would be crazy, right? An AI (or anything else) implemented on a hardware substrate that can only do a very limited set of operations can still do higher-level things if it's programmed to. A computer can do arbitrary-precision arithmetic by doing lots of 32-bit arithmetic, if the latter is organized in the right way. Similarly, it can cook up new axioms and rules by following fixed rules satisfying fixed axioms, if the latter are organized in the right way.
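To make that analogy concrete, here is a minimal sketch (the names `BASE` and `big_add` are my own, purely illustrative) of arbitrary-precision addition built entirely out of fixed-width 32-bit operations:

```python
BASE = 2 ** 32  # each "limb" is one 32-bit digit

def big_add(a, b):
    """Add two numbers stored as little-endian lists of 32-bit limbs."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        s = x + y + carry          # per-limb sum, plus carry from the previous limb
        result.append(s % BASE)    # keep only the low 32 bits
        carry = s // BASE          # propagate the overflow to the next limb
    if carry:
        result.append(carry)
    return result

# 2^32 + 2^32 = 2^33, a value no single 32-bit word can hold:
print(big_add([0, 1], [0, 1]))  # -> [0, 2], i.e. 2 * 2^32
```

The hardware (here, each limb operation) never handles anything bigger than its fixed word size; the larger capability lives entirely in how those small operations are organized.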
And how are these rules determined?
Depends how far back along the chain of causation you want to go. There'll be some rules programmed into the computer by human beings. Those were determined by whatever complicated algorithms human brains execute. Those were determined by whatever complicated algorithms human cultures and biological evolution execute. Those were determined by ... etc. As you go further back, you get algorithms with less direct connection to intelligence (ours, or a computer's, or whatever). Ultimately, you end up with whatever the basic laws of nature are, and no one knows those for sure. (But, again, so far as anyone knows they're algorithmic in nature.)
So: no infinite chain, probably (though it's not clear to me that there's anything actually impossible about that); you start with whatever the laws of nature are, and so far as anyone knows they just are what they are. (I suppose you could try to work that up into some kind of first-cause argument for the existence of God, but I should warn you that it isn't likely to work well.)
Really? I think it [emergence] is [magic] ... It seems to me nature is inherently magical.
Oh. Either you're using the word "magical" in a nonstandard way that I don't currently understand, or at least one of us is so terribly wrong about the nature of the universe that further discussion seems unlikely to be helpful.
What I write here may be quite simple (and I am certainly not the first to write about it), but I still think it is worth considering:
Say we have an arbitrary problem that we assume has an algorithmic solution, and we search for that solution.
How can the algorithm be determined?
Either:
a) through another algorithm that exists prior to that algorithm,
b) or through something non-algorithmic.
In the case of AI, the only option is a), since there is nothing but algorithms at its disposal. But then we face the problem of determining the algorithm the AI uses to find the solution, and then it would have to determine the algorithm to determine that algorithm, etc...
Obviously, at some point we have to actually find an algorithm to start with, so in any case at some point we need something fundamentally non-algorithmic to determine a solution to a problem that is solvable by an algorithm.
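For concreteness, option a) can be sketched as follows (the names `candidate_rules` and `find_rule` and the tiny rule set are hypothetical, chosen only for illustration): one fixed algorithm that searches a space of other algorithms and selects one matching the data. This doesn't by itself settle the regress question raised above, since someone still had to write the searcher, but it shows what "an algorithm determined by another algorithm" means concretely.

```python
# One fixed algorithm that "determines" another algorithm by search.
# The space of candidate algorithms here is a tiny illustrative set of rules.
candidate_rules = [
    ("double",    lambda x: 2 * x),
    ("square",    lambda x: x * x),
    ("increment", lambda x: x + 1),
]

def find_rule(examples):
    """Return the name of the first rule consistent with all (input, output) pairs."""
    for name, rule in candidate_rules:
        if all(rule(x) == y for x, y in examples):
            return name
    return None  # no candidate fits the data

# The fixed searcher picks out the doubling algorithm from examples:
print(find_rule([(1, 2), (3, 6), (5, 10)]))  # -> double
```

The searcher itself never changes while running; what it outputs is a different algorithm than itself.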
This reveals something fundamental we have to face with regard to AI:
Even assuming that all relevant problems are solvable by an algorithm, AI is not enough. Since there is no way to algorithmically determine the appropriate algorithm for an AI (since this would result in an infinite regress), we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions. Even if we found a very powerful seed AI algorithm, there will always be more powerful seed AI algorithms that can't be determined by any known algorithm, and since we were able to find the first one, we have no reason to suppose we can't find another, more powerful one. If an AI recursively improves itself 100,000 times until it is 100^^^100 times more powerful, it will still be caught up with if a better seed AI is found, which ultimately can't be done by an algorithm, so further increases of the most general intelligence always rely on something non-algorithmic.
But even worse, quite apart from the problem of finding the right algorithm, it seems obvious to me that there are important practical problems that have no algorithmic solution at all (as opposed to theoretical problems like the halting problem, which are still tractable in practice).
In a sense, it seems all algorithms are too complicated to find the solution to the simple (though not necessarily easy) problem of giving rise to further general intelligence.
For example: no algorithm can determine the simple axioms of the natural numbers from anything weaker. We have to postulate them by virtue of simply seeing that they make sense. Thinking that AI could give rise to ever-improving *general* intelligence is like thinking that an algorithm can yield "there is a natural number 0 and every number has a successor that, too, is a natural number". There is simply no way to derive the axioms from anything that doesn't already include them. The axioms of the natural numbers are just obvious, yet can't be derived - the problem of finding the axioms of the natural numbers is too simple to be solved algorithmically. Yet it is obvious how important the notion of natural numbers is.
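For reference, the two axioms quoted above can be written down formally, for instance as a Lean-style inductive definition (a sketch; `Nat'` is just an illustrative name to avoid clashing with Lean's built-in `Nat`):

```lean
-- "There is a natural number 0, and every number has a successor
--  that, too, is a natural number."
inductive Nat' where
  | zero : Nat'            -- 0 is a natural number
  | succ : Nat' → Nat'     -- every natural number has a natural-number successor
```

Note this only shows that, once postulated, the axioms can be manipulated mechanically; it says nothing about how they were found in the first place, which is the point at issue.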
Even the best AI will always be fundamentally incapable of finding some very simple, yet fundamental principles.
AI will always rely on the axioms it already knows; it can't go beyond them (unless reprogrammed by something external). Everything new it learns can only be learned in terms of already-known axioms. This is simply a consequence of the fact that computers/programs function according to fixed rules. But general intelligence necessarily has to transcend rules (since at the very least the rules can't be determined by rules).
I don't think this is an argument against a singularity of ever-improving intelligence. It just can't happen driven (solely or predominantly) by AI, whether through a recursively self-improving seed AI or cognitive augmentation. Instead, we should expect a singularity that happens due to emergent intelligence. I think it is the interaction of different kinds of intelligence (like human/animal intuitive intelligence, machine precision, and the inherent order of the non-living universe, if you want to call that intelligence) that leads to increases in general intelligence, not just one particular kind of intelligence like the formal reasoning used by computers.