shminux comments on Can somebody explain this to me?: The computability of the laws of physics and hypercomputation - Less Wrong

12 Post author: ChrisHallquist 21 April 2013 09:22PM


Comment author: pragmatist 22 April 2013 01:39:42AM *  19 points [-]

There are spacetime models in general relativity (i.e. solutions to the Einstein equations) that permit hypercomputation. In a Malament-Hogarth spacetime, there are worldlines such that an object traveling along the worldline will experience infinite time, but for an observer at some point p outside the worldline, the time it takes for the object to traverse the worldline is finite, and the worldline is entirely within p's causal past. So if you had a Turing machine traveling along this worldline, it could send a signal to p if and only if it halted, and the observer at p is guaranteed to receive the signal if the machine ever halts. No infinite-precision measurements are involved (unless perhaps you believe that a Turing machine operating reliably for an indefinite period of time is tantamount to an infinite-precision measurement).
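The decision protocol this setup enables can be made concrete with a toy sketch. Here is a minimal Python illustration (all names are mine, and the finite step budget merely stands in for the infinite proper time available on the worldline — which is precisely the resource no ordinary computer has):

```python
def mh_observer_verdict(step, state, step_budget):
    """Toy model of the Malament-Hogarth protocol: the machine on the
    worldline applies `step` repeatedly and emits a signal if it halts
    (i.e. `step` returns None); the observer at p, who has the machine's
    entire history in their causal past, reads off the answer.
    The finite `step_budget` stands in for infinite proper time, so the
    "never halts" verdict is only sound in the idealized M-H setup."""
    for _ in range(step_budget):
        state = step(state)
        if state is None:
            return "signal received: machine halted"
    return "no signal: machine never halts"

# A machine that counts down to zero and halts:
countdown = lambda n: None if n == 0 else n - 1

# A machine that increments forever:
runaway = lambda n: n + 1
```

With an actual infinite worldline the loop needs no budget, and the observer's verdict decides the halting problem; on real hardware the budget makes this just the familiar semi-decision procedure.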

Are these spacetimes physically possible? Well, like I said, they satisfy the basic laws of GR. However, they are not globally hyperbolic, which means there is no space-like surface (analogous to an "instant of time") such that providing all data on that surface fully determines the data over the rest of space-time uniquely. In other words, determinism of a particularly strong variety fails.

The strong version of the cosmic censorship hypothesis essentially states that a physically reasonable spacetime must be globally hyperbolic, so if you take it as a criterion of physical possibility, then Malament-Hogarth spacetimes are not physically possible. I guess this just brings up a certain amount of vagueness in the phrase "physically possible". It is usually taken to mean "possible according to the physical laws", but what exactly delimits what counts as a physical law? Suppose our universe is globally hyperbolic. Would it then be a law that space-time is globally hyperbolic? Anyway, if you have a more restrictive notion of physical law, such that only laws of temporal evolution count, then the laws of general relativity at least appear to permit hypercomputation.

At the end of your post you suggest that the computability of physical law might rule out hypercomputation. But the M-H spacetime argument does not require that the laws are uncomputable. There is a simple argument from the computability of the physical laws to the impossibility of hypercomputation if you assume full determinism, but that is an additional assumption. You can easily prove that hypercomputation is physically impossible if you make the following four assumptions (presented in reverse order of plausibility, according to my own metric):

(1) The laws of physics are computable.

(2) The laws of physics are complete (i.e. there are no phenomena that are not covered by the laws).

(3) Spacetime must be globally hyperbolic (i.e. there must be a space-like surface of the kind described above).

(4) Finite-precision data over any space-like surface is sufficient for accurately determining the data everywhere on that surface's domain of dependence.

Comment author: shminux 22 April 2013 06:19:10AM *  12 points [-]

In a Malament-Hogarth spacetime, there are worldlines such that an object traveling along the worldline will experience infinite time, but for an observer at some point p outside the worldline, the time it takes for the object to traverse the worldline is finite, and the worldline is entirely within p's causal past.

Is it an actual spacetime, or just the class of all spacetimes with this property? I'm having trouble locating a paper describing the metric that is not just the inside of the Kerr metric.

Are these spacetimes physically possible? Well, like I said, they satisfy the basic laws of GR.

As a self-appointed resident GR expert, I would like to caution against this misapplication of GR. You cannot simply "place a Turing machine into a spacetime". The Turing machine is not a test particle. It has mass, it computes stuff and hence radiates heat, it increases the entropy of the universe. As a result, the spacetime with the Turing machine in it is different from the spacetime without. The change is tiny and can be neglected in many circumstances, but most emphatically not in this case.

The reason that placing a material object at non-zero temperature into a spacetime which maps an infinite affine-parameter geodesic into a finite one fails is that you get an infinite amount of radiation and entropy in a finite time. As a result, your spacetime blows up into bits. This is a modification of Hawking's argument in favor of his famous Chronology Protection Conjecture (he used CTCs and virtual particles, not real objects).

This argument is quite general, but it is not often appreciated and has not been formalized except in a few cases. It also applies to any attempt to use a CTC as an actual trajectory of a warm material object: you cannot hope to match up every microstate exactly after a complete loop simply by evolution. Hence Novikov's disclaimer about his self-consistency principle: it's vacuous unless you presume "new physics" beyond GR.

Comment author: pragmatist 22 April 2013 07:47:15AM *  9 points [-]

Is it an actual spacetime, or just the class of all spacetimes with this property? I'm having trouble locating a paper describing the metric that is not just the inside of the Kerr metric.

It's the class of all spacetimes with the property. Examples besides the Kerr spacetime are the universal covering of anti-de Sitter spacetime, the Reissner-Nordström spacetime, even a simple Minkowski spacetime rolled up along the temporal axis (or in fact any spacetime with CTCs).

As a self-appointed resident GR expert, I would like to caution against this misapplication of GR. You cannot simply "place a Turing machine into a spacetime". The Turing machine is not a test particle. It has mass, it computes stuff and hence radiates heat, it increases the entropy of the universe. As a result, the spacetime with the Turing machine in it is different from the spacetime without. The change is tiny and can be neglected in many circumstances, but most emphatically not in this case.

Fair point. The spacetime structure will indeed indefinitely amplify even the tiniest bit of thermal radiation. And it is also true that Landauer's principle tells us that a computational process must radiate heat.
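For concreteness, Landauer's bound says erasing one bit must dissipate at least k_B·T·ln 2 of heat. A quick back-of-the-envelope in Python (the constant and formula are standard; the function name is mine):

```python
import math

# Landauer's principle: erasing one bit of information dissipates at
# least k_B * T * ln(2) of heat, where k_B is Boltzmann's constant
# and T is the temperature in kelvin.
K_B = 1.380649e-23  # J/K (exact by the SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum heat (joules) dissipated per erased bit at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) this comes to roughly 2.9e-21 J per bit --
# tiny, but strictly positive, which is all the blow-up argument needs.
```

The point for the M-H argument is that the bound is nonzero at any nonzero temperature, so an eternally computing machine radiates an unbounded total amount of heat.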

But Landauer's principle is a consequence of the Second Law of Thermodynamics, and the Second Law is not, to the best of our knowledge, a fundamental law. It holds in our universe because of special boundary conditions, but it is entirely possible to construct universes with the same fundamental laws and different boundary conditions so that entropy stops increasing at some point in time and begins decreasing, or where entropy does not exhibit any significant monotonic tendency at all.

Does rigging boundary conditions in this manner take us outside the realm of physical possibility? Again, that depends on what the OP means by "physically possible". If all he means is "consistent with the fundamental laws of temporal evolution" then no, choosing special boundary conditions which negate the Second Law does not violate physical possibility. Of course, one would need very specifically (and implausibly) rigged boundary conditions in order to get a universe with a M-H setup that does not blow up, but astronomical unlikelihood is not the same as impossibility.

ETA: If you're interested, here's a nice paper showing that a Malament-Hogarth spacetime can be constructed that satisfies various criteria of physical reasonableness (energy conditions, stable causality, etc.).

Comment author: shminux 22 April 2013 04:44:02PM *  5 points [-]

It's the class of all spacetimes with the property. Examples besides the Kerr spacetime are the universal covering of anti-de Sitter spacetime, the Reissner-Nordström spacetime, even a simple Minkowski spacetime rolled up along the temporal axis (or in fact any spacetime with CTCs).

Thanks for the examples; that's what I suspected, though I find the CTC examples dubious at best, as you appeal to a much stronger impossibility to justify a weaker one. I am not a stickler for global hyperbolicity; I can certainly imagine topological and/or geometric instantons "magically" appearing and disappearing. These don't cause infinite backreaction the way CTCs do.

If you're interested, here's a nice paper showing that a Malament-Hogarth spacetime can be constructed that satisfies various criteria of physical reasonableness (energy conditions, stable causality, etc.).

It does indeed attempt to address most of the issues, but not the divergent-emissions one, which seems mutually exclusive with non-divergent redshift. I am even fine with the "requires infinite energy" issue, since I can certainly imagine pumping energy through a white hole from some other inaccessible spacetime (or some other instanton-like event).

Does rigging boundary conditions in this manner take us outside the realm of physical possibility?

My interest is whether some hypercomputational construct can be embedded into our universe (which is roughly of the expanding FRW-dS type), not whether some other universe where entropy can decrease can perform these tricks. The reason, again, is that if you use much stronger assumptions to justify something weaker, the argument becomes much less interesting. In an extreme case "because DM decided so" would trivially support anything you want.

Comment author: SilasBarta 23 April 2013 06:22:41PM *  2 points [-]

But Landauer's principle is a consequence of the Second Law of Thermodynamics, and the Second Law is not, to the best of our knowledge, a fundamental law. It holds in our universe because of special boundary conditions, but it is entirely possible to construct universes with the same fundamental laws and different boundary conditions so that entropy stops increasing at some point in time and begins decreasing, or where entropy does not exhibit any significant monotonic tendency at all.

What about the Drescher/Barbour argument that the Second Law is an artifact of observers' ability to record time histories? That is, the only states that will contain "memories" (however implemented) of past states are the ones where entropy is higher than in the remembered states, because all processes of recording increase entropy.

So even in those thought experiments where you "reverse time" of the chaotic billiards-ball world back to a low-entropy t = 0 and keep going so that entropy increases in the negative time direction, the observers in that "negative time" state will still regard t = 0 as being in their past. Furthermore, any scenario you could set up where someone is entangled only with stuff that you deliberately decrease the entropy of (by increasing entropy outside the "bubble") will result in that person thinking that the flow of time was the opposite of what you think.

I don't know how well this argument meshes with the possibility of such GR solutions.