Again, you are mistaken. I assumed that you could execute any finite number of instructions in an instant. Computing Solomonoff probabilities requires executing an infinite number of instructions, since it requires assigning probabilities to all possible hypotheses that result in the appearances, and there are infinitely many of them.
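For concreteness, here is the standard form of the quantity in question, Solomonoff's algorithmic prior, where U is a fixed universal prefix machine and the sum ranges over every program p whose output begins with the observed string x:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Since infinitely many programs produce any given prefix, evaluating this sum exactly requires examining (and running) infinitely many programs; no finite number of executed instructions suffices.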
In other words, if you assume the ability to execute an infinite number of instructions (as opposed to simply the instantaneous execution of any finite number), you will indeed be able to "compute" the incomputable. But you will also be able to solve the halting problem, by running a program for an infinite number of steps and checking whether it halts during that process or not. As you said earlier, this is not what is typically meant by computable.
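With only finitely many steps, by contrast, halting is merely semi-decidable: you can confirm that a program halts, but never confirm that it runs forever. A minimal Python sketch of this asymmetry (the function names and the Collatz toy program are my own illustrative choices, not anything from the discussion above):

```python
def halts_within(make_program, budget):
    """Run a program (modeled as a generator of computation steps)
    for at most `budget` steps.  Returns 'halts' if it stops within
    the budget, else 'unknown' -- it might halt later, or never."""
    steps = make_program()
    for _ in range(budget):
        try:
            next(steps)
        except StopIteration:
            return "halts"
    return "unknown"

def collatz_from(n):
    """Toy 'program': iterate the Collatz map from n; halts at 1."""
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        yield n

print(halts_within(lambda: collatz_from(6), 100))  # halts
print(halts_within(lambda: collatz_from(6), 2))    # unknown
```

An unlimited step budget would collapse "unknown" into a definite answer, which is exactly why allowing infinitely many executed instructions steps outside the usual notion of computability.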
(If that is not clear enough for you, consider the fact that a Turing machine is allowed an unbounded amount of "memory" by definition, and the amount of time it takes to execute a program is not part of the formalism. So "computable" and "incomputable" in standard terminology do indeed apply to computers with infinite resources in the sense that I specified.)
Ok, let's go back and review this conversation.
You started out by saying, in essence, that general AI is just a matter of having good enough hardware.
You were wrong. Dead wrong. The opposite is true: it is purely a matter of software, given sufficiently good hardware. We have no idea how good the hardware needs to be. It is possible that a general AI could be programmed on the PC I am currently using, for all we know. Since we simply do not know how to program an AI, we do not know whether it could run on this computer or not.
You supported your mistake with the false claim that AIXI and Solomonoff induction are computable, in the usual, technical sense. You spoke of this as though it were a simple fact that any well educated person knows. The truth was the opposite: neither one is computable, in the usual, technical sense. And the usual technical sense of incomputable implies that the thing cannot be computed even without any limitation on memory or clock speed, so long as only a finite number of instructions may be executed, however instantaneously.
You respond now by saying, "Solomonoff induction is not in fact infinite..." Then you are not talking about Solomonoff induction, but some approximation of it. But in that case, conclusions that follow from the technical sense of Solomonoff induction do not follow. So you have no reason to assume that some particular program will result in intelligent behavior, even removing limitations of memory and clock speed. And until someone finds that program, and proves that it will result in intelligent behavior, no one knows how to program general AI, even without hardware limitations. That is our present situation.
Ok this is where the misunderstanding happened. What I said was "if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions." Truly infinite compute resources will never exist. So that's not a claim about "we just need better hardware" but rather "if we had magic oracle pixie dust, it'd be easy."
The rest I am uninterested in debating further.