yes, you can consider a finite computer in the real world to be Turing-complete/Turing-universal/Turing-equivalent,
You can, but you'll be wrong.
Great, "unbounded" isn't the same as "infinite", but in fact all physically realizable computers are bounded. There's a specific finite amount of tape available. You cannot in fact just go down to the store and buy any amount of tape you want. There isn't unlimited time either. Nor unlimited energy. Nor will the machine tolerate unlimited wear.
For that matter, real computers can't even address unlimited storage, nor is there a place to plug it in. You can't in fact write a 6502 assembly language program to solve a problem that requires more than 64kiB of memory. Nor an assembly language program for any physically realized computer architecture, ever, that can actually use unbounded memory.
There are always going to be Turing-computable problems that your physical device cannot solve. Playing word games, or twisting what you'll accept as being Turing-equivalent, doesn't change that fundamental limitation. Actual physics strictly limits the real usefulness of the Turing abstraction. Use it when it makes sense, but there's no point in pretending it applies to physical computers in any strong way.
Great, "unbounded" isn't the same as "infinite", but in fact all physically realizable computers are bounded. There's a specific finite amount of tape available. You cannot in fact just go down to the store and buy any amount of tape you want. There isn't unlimited time either. Nor unlimited energy. Nor will the machine tolerate unlimited wear.
Yes, but that's not relevant to the definition of Turing equivalence/completeness/universality.
The question isn't whether the specific computer at your hands can solve all Turing-computable problems, but rather whether, if we had the ability to scale a computer's memory, time and reliability indefinitely, we could solve the problem on an unbounded input and output domain without changing the code/descriptor.
And for a lot of popular programming languages, like Lisp or Lambda Calculus, this is true.
For that matter, real computers can't even address unlimited storage, nor is there a place to plug it in. You can't in fact write a 6502 assembly language program to solve a problem that requires more than 64kiB of memory. Nor an assembly language program for any physically realized computer architecture, ever, that can actually use unbounded memory.
My guess is that the issue is fundamentally that building arbitrary-precision arithmetic/arbitrary-precision operations into an assembly language would make programs a whole lot slower by constant factors, so there is little incentive to make an assembly language that is Turing-complete in that sense; it would in any case duplicate what high-level programming languages like Java or Lisp already provide, since you can write a program in Lisp that duplicates the assembly's functionality.
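(To make the constant-factor point concrete, here is a minimal Python sketch of my own, not anything from the thread: arbitrary-precision addition layered on top of fixed-width word operations of the kind an ISA actually provides. Each abstract addition becomes a loop of word-sized adds with carry, i.e. slower only by a constant factor per word.)

```python
# Minimal sketch: arbitrary-precision addition built out of fixed-width
# (16-bit) word operations, the way a bignum library sits on top of an
# ISA's add-with-carry. Numbers are little-endian lists of words.
WORD = 1 << 16

def big_add(a, b):
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % WORD)   # low word of the partial sum
        carry = s // WORD      # carry into the next word
    if carry:
        out.append(carry)
    return out

# 0xFFFF + 0x0001 = 0x1_0000, i.e. words [0x0000, 0x0001]
assert big_add([0xFFFF], [0x0001]) == [0x0000, 0x0001]
```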
And at any rate, there exist assembly languages that are Turing Complete, so this is irrelevant.
More here:
On X86 being Turing Complete in at least 3 ways:
- X86 shenanigans:
- MMU: the MMU shuffles computer RAM around to make programming easier; if a program sets up its share of memory properly, it can execute arbitrary computations via MMU page-faults (comments; paper) without ever running code itself, by turning the MMU faulting mechanism into a one-instruction set computer.
- mov: “mov is Turing-complete”: the apparently innocuous x86 assembler instruction mov, which copies data between the CPU & RAM, can be used to implement a transport-triggered-architecture one instruction set computer, allowing for playing Doom. (For bonus points, it can be done using xor too; there are many such TC one-instruction set computers, such as ByteByteJump. A minimal sketch of one follows this list.)
- register-less X86: “x86 is Turing-complete with no registers”
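(A minimal sketch of a one-instruction set computer, mine rather than anything from the linked write-ups: SUBLEQ, the classic single instruction "subtract and branch if less than or equal to zero". The mov/MMU/xor constructions above compile computation down to a primitive of roughly this kind.)

```python
# SUBLEQ one-instruction set computer. Each instruction is three addresses
# (a, b, c): mem[b] -= mem[a]; if the result is <= 0, jump to c, otherwise
# fall through to the next instruction. Repeating this single instruction
# is enough for Turing-completeness, given unbounded memory.
def run_subleq(mem, pc=0, max_steps=10_000):
    steps = 0
    while pc >= 0 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# Tiny demo program: compute mem[7] -= mem[6], then halt (jump to -1).
prog = [6, 7, 3,      # mem[7] -= mem[6]; next instruction is at 3 either way
        0, 0, -1,     # mem[0] -= mem[0] gives 0 (<= 0), so jump to -1: halt
        5, 12]        # data: mem[6] = 5, mem[7] = 12
print(run_subleq(prog)[7])   # -> 7
```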
On implementation issues like fixed-precision vs arbitrary precision arithmetic:
Before we dive in, let me clear something up. My critiques below are not about hardware limitations or implementation. Some models rely on numerical values with arbitrary precision to ‘attend’ to tokens in the sequence. That’s fine. It’s common to use 64-bit floating point precision types, which are incapable of representing arbitrary precision values, but that’s an implementation choice separate from the abstract model of a transformer. In the same way that you can switch to a computer with more memory, you can always switch to higher fixed-precision to run a transformer on something that needs that extra boost to execute properly. You can even abandon the floating-point format altogether and store numbers as strings representing the numerator and denominator. As long as you don’t rely on infinite-precision, arbitrary finite precision is easily implementable in a digital representation scheme.
https://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
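(As a small illustration of the quoted point, my sketch rather than the article's: Python's standard-library Fraction type already implements the "store the numerator and denominator" idea, so the same code runs at whatever finite precision the inputs demand, limited only by available memory.)

```python
from fractions import Fraction

# 64-bit floats are a fixed-precision implementation choice, and it shows:
print(0.1 + 0.1 + 0.1 == 0.3)                  # False: float rounding error
# Exact rationals store a numerator and denominator instead:
print(Fraction(1, 10) * 3 == Fraction(3, 10))  # True
# The operands can grow arbitrarily without any change to the code:
x = Fraction(1, 10**1000) + Fraction(1, 7**500)
print(len(str(x.denominator)))                 # a ~1400-digit denominator, no problem
```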
There are always going to be Turing-computable problems that your physical device cannot solve. Playing word games, or twisting what you'll accept as being Turing-equivalent, doesn't change that fundamental limitation. Actual physics strictly limits the real usefulness of the Turing abstraction. Use it when it makes sense, but there's no point in pretending it applies to physical computers in any strong way.
The main value of the Turing abstraction in our universe is that it isolates where the bottleneck actually is. The bottleneck is not algorithms or developing new code to solve specific problems at specific input sizes (with the enormous caveat that if we care about how efficient a program is, and don't just care about whether we can solve a problem, then algorithmic considerations become relevant); rather, the bottleneck is energy, which gives us memory, time and reliability.
And from the perspective of the early 20th century, this was no small feat.
Yes, but that's not relevant to the definition of Turing equivalence/completeness/universality.
Every Turing machine definition I've ever seen says that the tape has to be truly unbounded. How that's formalized varies, but it always carries the sense that the program doesn't ever have to worry about running out of tape. And every definition of Turing equivalence I've ever seen boils down to "can do any computation a Turing machine can do, with at most a bounded speedup or slowdown". Which means that programs on a Turing-equivalent computer must not have to worry about running out of storage.
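(To pin down what "unbounded tape" looks like when formalized, here is a minimal Turing machine sketch of my own, with a made-up transition-table format: the tape is a dictionary whose cells spring into existence on demand, so the program never has to account for running out of them. That on-demand promise is exactly what no physical machine can keep.)

```python
# Minimal Turing machine sketch. delta maps (state, symbol) ->
# (new_state, new_symbol, move), with move in {-1, 0, +1}. The tape is a
# dict, so cells appear ("unbounded" tape) whenever they are visited.
def run_tm(delta, tape_input, state="start", blank="_", max_steps=10**6):
    tape = dict(enumerate(tape_input))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        state, tape[head], move = delta[(state, tape.get(head, blank))]
        head += move
    raise RuntimeError("step budget exhausted (a physical limit the abstraction ignores)")

# Example machine: append one "1" to a unary number.
delta = {
    ("start", "1"): ("start", "1", +1),   # scan right over the digits
    ("start", "_"): ("halt",  "1",  0),   # write one more "1" and halt
}
print(run_tm(delta, "111"))   # -> "1111"
```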
You can't in fact build a computer that can run any arbitrary program and never run out of storage.
One of the explicitly stated conditions of the definition is not met. How is that not relevant to the definition?
The question isn't whether the specific computer at your hands
Your title says "finite physical device". Any finite physical device (or at least any constructible finite physical device) can at least in principle be "the specific computer at your hands". For a finite physical device to be Turing equivalent, there would have to be a specific finite physical device that actually was Turing-equivalent. And no such device can ever actually be constructed. In fact no such device could exist even if it popped into being without even having to be constructed.
can solve all Turing-computable problems, but rather whether, if we had the ability to scale a computer's memory, time and reliability indefinitely, we could solve the problem on an unbounded input and output domain without changing the code/descriptor.
I don't think that is the question, and perhaps more importantly I don't think that's an interesting question. You don't have that ability, you won't get that ability, and you'll never get close enough that it's practical to ignore the limitation. So who cares?
... and if you're going to talk in terms of fundamental math definitions that everybody uses, I think you have to stick to what they conventionally mean.
And for a lot of popular programming languages, like Lisp or Lambda Calculus, this is true.
Lisp is obviously Turing-complete. Any Lisp interpreter actually realized on any finite physical computer isn't and can't ever be. If you keep sticking more and more cells onto a list, eventually the Lisp abstraction will be violated by the program crashing with an out-of-memory error. You can't actually implement "full Lisp" in the physical world.
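(A crude sketch of that failure mode, in Python rather than Lisp and deliberately wasteful: the abstract semantics say the loop below runs forever, but on a real machine it ends in an out-of-memory error, or with the OS killing the process first, and that is the point where the implementation stops being "full Lisp".)

```python
# WARNING: this intentionally tries to exhaust memory. The abstraction says
# the list can grow forever; the hardware disagrees. (Depending on the OS,
# the process may be killed before MemoryError is even raised.)
def cons(head, tail):
    return (head, tail)

lst = None
try:
    while True:
        lst = cons(0, lst)   # keep sticking more and more cells onto the list
except MemoryError:
    print("the 'unbounded' list just hit a very bounded machine")
```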
On X86 being Turing Complete in at least 3 ways:
OK, it's possible that there's some subset of the X86 machine language that's Turing equivalent the same way Lisp is. I'm not going to go and try to figure out whatever hackery the examples do, since it's probably very complicated and will probably never be of any actual use. But if there is, it's still not Turing equivalent as actually implemented in any actual device.
Any actual physically constructed X86 computer will have a finite number of possible states, no matter what operations you use to manipulate them. There are only so many address wires coming out of the chip. There are only so many registers, memory cells, or whatever. Even if you put a Turing tape reader on it as a peripheral, there's still a hard limit on how much tape it can actually have.
If you write a program that ignores that reality, and put it in an actual X86 computer, you won't have created a Turing complete physical computer. When the input gets to a certain size, the program just won't work. The physical hardware can't support pushing the abstract language past a certain limit.
In the same way that you can switch to a computer with more memory, you can always switch to higher fixed-precision to run a transformer on something that needs that extra boost to execute properly.
No, you can't. It's possible to have a problem that requires so much precision that you can't physically construct enough memory to hold even a single number.
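(A back-of-the-envelope version of that claim, with numbers of my choosing: using the commonly quoted rough figure of 10^80 atoms in the observable universe, a value that needed, say, 10^100 bits of precision could not be stored even at one bit per atom.)

```python
# Rough illustration only; the atom count is the usual order-of-magnitude
# estimate, not a precise figure.
atoms_in_observable_universe = 10**80   # commonly quoted estimate
bits_required = 10**100                 # precision some hypothetical problem demands
print(bits_required > atoms_in_observable_universe)   # True: no possible memory holds it
print(bits_required // atoms_in_observable_universe)  # short by a factor of 10**20
```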
(with the enormous caveat that if we care about how efficient a program is, and don't just care about whether we can solve a problem, then algorithmic considerations become relevant),
A useful definition of "can" has to take efficiency into account, because there are some resources you actually can't provide. There's not a lot of point in saying you "can" solve a problem when you really have no hope of applying the needed resources.
We use that practically all the time. That's how cryptography works: you assume that your adversary won't be able to do more than N operations in X time, where X is how long the cryptography has to be effective for.
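(For example, with textbook numbers of my choosing: enumerating a 128-bit keyspace at an absurdly generous 10^18 guesses per second still takes on the order of 10^13 years, far longer than the age of the universe, which is why "can" in cryptography always means "can within a resource budget".)

```python
keyspace = 2**128                # possible 128-bit keys
guesses_per_second = 10**18      # an extremely generous adversary
seconds_per_year = 365 * 24 * 3600
years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.1e} years to enumerate the keyspace")   # ~1.1e+13 years
# For comparison, the age of the universe is ~1.4e10 years.
```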
the bottleneck is energy, which gives us memory, time and reliability.
Maybe, although I don't think we can at present turn energy in just any form into any of the above, and I'm not sure that, in principle, unlimited energy translates into unlimited anything else. If I have some huge amount of energy in some tiny space, I have a black hole, not a computer.
And from the perspective of the early 20th century, this was no small feat.
... but even if that were true, it wouldn't make finite physical computers Turing equivalent.
Every Turing machine definition I've ever seen says that the tape has to be truly unbounded. How that's formalized varies, but it always carries the sense that the program doesn't ever have to worry about running out of tape. And every definition of Turing equivalence I've ever seen boils down to "can do any computation a Turing machine can do, with at most a bounded speedup or slowdown". Which means that programs on a Turing-equivalent computer must not have to worry about running out of storage.
You can't in fact build a computer that can run any arbitrary program and never run out of storage.
One of the explicitly stated conditions of the definition is not met. How is that not relevant to the definition?
Yes, this is correct, with the important caveat that the memory is unbounded by the system's descriptor/programming language, not by physical laws or anything else, which is the key thing you missed.
Essentially speaking, it's asking whether modern computers can cope with arbitrary extensions to their memory, time and reliability without requiring us to write new programming languages/code, not whether a specific computer at a specified memory and time limit is Turing-complete.
Looking at your comment more, I think this disagreement is basically a definitional dispute, in a way, because I allow machines that are limited by the laws of physics but are not limited by their systems descriptor/programming language to be Turing complete, while you do not, and I noticed we had different definitions that led to different results.
I suspect this was due to focusing on different things, where I was focused on the extensibility of the computer concept as well as the more theoretical aspects, whereas you were much more focused on the low level situation.
A crux might be that I definitely believe that, given an unlimited energy generator, it would be very easy to create a universal computer out of it, and I think energy is much, much closer to a universal currency than you do.
I don't think a TC computer can ever be built in our universe. The observable universe has a finite number of atoms; I have seen numbers around 10^80 thrown around. Even if you can build a RAM where each atom stores 1 bit,[1] this is still very much finite.
I think a much more interesting question is why TC machines are — despite only existing in theory — such useful models for thinking about real-world computers. There is obviously some approximation going on here, where for the vast majority of real-world problems, you can write them in such a way that they work just fine with finite RAM. (E.g. the cases where we would run out of RAM are quite easy to predict, and don't just crop up randomly.)
You can maybe push that number up by e.g. using subatomic particles or using a number of different quantum states of a single particle, but I think with all the tricks you will still end up with some well bounded constant multiplier for how many bits you can store, which does not matter much for this argument. ↩︎
Notably, this was exactly the sort of belief I was trying to show is false, and your observation about the physical universe does not matter for the argument I made here, because the question is whether, with say 2^1000000 atoms, you could solve larger problem sizes with the same code, and Turing-complete systems answer yes to that question.
In essence, it's a question of whether we can scale our computers with more memory and time without having to change the code/algorithms, and basically all modern computers can do this in theory.
I think a much more interesting question is why TC machines are — despite only existing in theory — such useful models for thinking about real-world computers. There is obviously some approximation going on here, where for the vast majority of real-world problems, you can write them in such a way that they work just fine with finite RAM. (E.g. the cases where we would run out of RAM are quite easy to predict, and don't just crop up randomly.)
Short version: because you can always extend them with more memory and time, and it matters a lot in practical computing whether you can get a general coding solution that is also cheap to work with, because it can handle upgrades to its memory very well.
In essence, the system descriptor/code doesn't have to change if the memory and time increase (for a finite state machine or look-up table, it would have to change).
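(A minimal sketch of that contrast, my example: the general routine below is the same code at every input size, while the lookup-table version has its bound baked in and must be regenerated, i.e. the "hardware" changed, whenever the bound grows.)

```python
# The "Turing-complete style" solution: one piece of code, any input size,
# limited only by whatever memory and time the machine happens to have.
def add(a, b):
    return a + b            # Python ints are arbitrary precision

# The "finite state machine / look-up table style" solution: the size limit
# is part of the artifact. To handle bigger inputs you must rebuild the
# table, not reuse the code.
LIMIT = 2**8
ADD_TABLE = {(a, b): a + b for a in range(LIMIT) for b in range(LIMIT)}

print(add(10**50, 1))        # works unchanged at any size
print(ADD_TABLE[(200, 55)])  # fine, within the baked-in bound
# ADD_TABLE[(300, 1)]        # KeyError: outside the baked-in bound
```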
Notably, this was exactly the sort of belief I was trying to show is false
Please point out if there is a specific claim I made in my comment that you believe to be false. I said that "I don't think a TC computer can ever be built in our universe.", which you don't seem to argue with? (If we assume that we can only ever get access to a finite number of atoms. If you dispute this I won't argue with that, neither of us has a Theory of Everything to say for certain.)
Just to make precise why I was making that claim and what it was trying to argue against, take this quote from the post:
yes, you can consider a finite computer in the real world to be Turing-complete
I dropped the "finite computer" constraint and interpreted the phrase "real world" to mean that "it can be built in our universe", this is how I arrived at the "a TC computer can be built in our universe" statement, which I claimed was false.
I think I understand the question now.
I actually agree that if we assume that there's a finite maximum of atoms, we could in principle reformulate the universal computer as a finite state automaton, and if we were willing to accept the non-scalability of a finite state automaton, this could actually work.
The fundamental problem is that we would now have software that only works up to a specified memory limit, because we have essentially burned the software into the hardware of the finite automaton. If we are ever uncertain how much memory or time a problem requires, or, more worryingly, how many resources we could actually use, then our "software" for the finite automaton is no longer usable, and we'd have to throw it away and build a new computer for every input length.
Turing Machine models automatically handle arbitrarily large inputs without having to throw away expensive work on developing the software.
So in essence, if you want to handle the most general case, or believe unbounded atoms are possible, like me, then you really want the universal computer architecture of modern computers.
The key property of real computers that makes them Turing-complete in theory is that they can scale with arbitrarily more memory and time without changing the system descriptor/code.
More below:
(If we assume that we can only ever get access to a finite number of atoms. If you dispute this I won't argue with that, neither of us has a Theory of Everything to say for certain.)
Let S = {0,1}^N be the state space of our finite/physical computer, where N is the number of bits of state the computer has. This can include RAM, non-volatile storage, CPU registers, cache, GPU RAM, etc... just add up all the bits.
The stateless parts of the computer can be modeled as a state transition function f: S → S, which is applied at every time step to produce the next state. (And let's suppose that there is some special halting state s_halt ∈ S.)
This is clearly a FSM with 2^N states, and not TC. The halting problem can be trivially solved for it: it is guaranteed to either halt or loop after 2^N steps.
(Proof: It can be seen that if we ever get back to a state where we have already been, we will loop. So let's suppose that after time 2^N we have had a different state at each time step, and haven't halted. This is impossible, since there are only 2^N - 1 non-halting states, so there cannot exist 2^N distinct ones.)
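(The proof corresponds to a simple procedure. Here is a sketch of my own, with a toy transition function, of that "trivial" halting decider for a finite machine: simulate, and either see it halt or see a state repeat, which by the pigeonhole argument must happen within 2^N steps.)

```python
# Halting decider for a finite-state computer: simulate and watch for a
# repeated state. With only finitely many states, the machine must either
# halt or revisit a state, so this procedure always terminates.
def halts(step, start_state, halt_state):
    seen = set()
    s = start_state
    while s != halt_state:
        if s in seen:        # a state repeated, so it will loop forever
            return False
        seen.add(s)
        s = step(s)
    return True

# Toy 8-bit "computer": the state is an int in [0, 256); it halts on 0.
print(halts(lambda s: (s * 2) % 256, 3, 0))   # True:  3 -> 6 -> ... -> 128 -> 0
print(halts(lambda s: (s + 2) % 256, 3, 0))   # False: cycles through the odd states forever
```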
Notably, this is why we focus on the case where we can assume the machine has arbitrarily large memory and time to work with.
The key question here is whether a finite physical computer can always be extended with more memory and time without requiring us to recode the machine into a different program/computer, and most modern computers can do this (modulo physical issues of how you integrate more memory and time).
In essence, the key property of modern computers is that the code/systems descriptor doesn't change if we add more memory and time, and this is the thing that leads to Turing-completeness if we allow unbounded memory and time.
Here are some very useful quotes from the article, and the short version is: yes, you can consider a finite computer in the real world to be Turing-complete/Turing-universal/Turing-equivalent, because you can always write code that correctly computes a problem and have that code keep working no matter how much memory, time or energy it uses on modern computers.
In essence, physical limitations don't matter for the purpose of determining whether a computational system is Turing-complete/Turing-universal/Turing-equivalent; what matters is whether the code you wrote for a problem can scale arbitrarily with more resources and time, or whether more resources and time fundamentally require you to rewrite new code to deal with larger inputs: