Eliezer_Yudkowsky comments on Causal Universes - Less Wrong

Post author: Eliezer_Yudkowsky 29 November 2012 04:08AM


Comment author: Eliezer_Yudkowsky 28 November 2012 06:13:09AM 22 points [-]

Mainstream status:

I haven't yet particularly seen anyone else point out that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it. (In fact I hadn't yet thought of how to do it at the time I wrote Harry's panic attack in Ch. 14 of HPMOR, though a primary literary goal of that scene was to promise my readers that Harry would not turn out to be living in a computer simulation. I think there might have been an LW comment somewhere that put me on that track or maybe even outright suggested it, but I'm not sure.)
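The brute-force version of that computation can be sketched in a few lines: treat the message from the future as an unknown, run the universe's rule forward on each candidate, and keep only the candidates that reproduce themselves. A minimal one-bit sketch (the `rule` functions here are toy stand-ins, not anything from HPMOR):

```python
def consistent_histories(rule):
    """Enumerate every candidate 'message from the future' and keep the
    self-consistent ones: histories where the bit the universe actually
    sends back equals the bit it was assumed to receive."""
    return [b for b in (0, 1) if rule(b) == b]

print(consistent_histories(lambda b: b))      # echo rule: both histories work -> [0, 1]
print(consistent_histories(lambda b: 1 - b))  # grandfather paradox: no history -> []
print(consistent_histories(lambda b: 1))      # forced outcome: unique history -> [1]
```

The same search works for any finite discrete universe: replace the bit with a full history and `rule` with the transition function. The cost is enumeration over candidate histories, which is finite but exponential.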

The requisite behavior of the Time Turner is known as Stable Time Loops on the wiki that will ruin your life, and known as the Novikov self-consistency principle to physicists discussing "closed timelike curve" solutions to General Relativity. Scott Aaronson showed that time loop logic collapses PSPACE to polynomial time.

I haven't yet seen anyone else point out that space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination, with c being the generalization of locality. This strikes me as important, so any precedent for it or pointer to related work would be much appreciated.

Comment author: Kaj_Sotala 28 November 2012 08:07:59AM 19 points [-]

The relationship between continuous causal diagrams and the modern laws of physics that you described was fascinating. What's the mainstream status of that?

Comment author: Eliezer_Yudkowsky 28 November 2012 11:20:35AM 3 points [-]

Odd, the last paragraph of the above seems to have gotten chopped. Restored. No, I haven't particularly heard anyone else point that out but wouldn't be surprised to find someone had. It's an important point and I would also like to know if anyone has developed it further.

Comment author: shev 30 November 2012 08:13:26PM *  8 points [-]

I found that idea so intriguing I made an account.

Have you considered that such a causal graph can be rearranged while preserving the arrows? I'm inclined to say, for example, that by moving your node E to be on the same level - simultaneous with - B and C, and squishing D into the middle, you've done something akin to taking a Lorentz transform?

I would go further to say that the act of choosing a "cut" of a discrete causal graph - and we assume that B, C, and D share some common ancestor to prevent completely rearranging things - corresponds to the act of choosing a reference frame in Minkowski space. Which makes me wonder if max-flow algorithms have a continuous generalization.

edit: in fact, max-flows might be related to Lagrangians. See this.

Comment author: irrationalist 28 November 2012 12:45:41PM 2 points [-]

Showed up in Penrose's "The Fabric of Reality." Curvature of spacetime is determined by infinitesimal light cones at each point. You can get a uniquely determined surface from a connection as well as a connection from a surface.

Comment author: Eliezer_Yudkowsky 28 November 2012 05:58:03PM 10 points [-]

Obviously physicists totally know about causality being restricted to the light cone! And "curvature of space = light cones at each point" isn't Penrose, it's standard General Relativity.

Comment author: irrationalist 30 November 2012 04:08:42PM 0 points [-]

Not claiming it's his own idea, just that it showed up in the book, I assume it's standard.

Comment author: diegocaleiro 29 November 2012 04:04:47AM 6 points [-]

David Deutsch, not Roger Penrose. Or wrong title.

Comment author: lukeprog 28 November 2012 03:17:55PM 3 points [-]

Page number?

Comment author: Cyan 28 November 2012 09:22:55PM 17 points [-]

space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination

Mind officially blown once again. I feel something analogous to how I imagine someone who had been a heroin addict in the OB-bookblogging time period and in methadone treatment during the subsequent non-EY-non-Yvain-LW time period would feel upon shooting up today. Hey Mr. Tambourine Man, play a song for me / In the jingle-jangle morning I'll come following you.

Comment author: Gust 26 December 2012 05:33:12AM 2 points [-]

Seconded.

Comment author: Plasmon 28 November 2012 09:33:16AM 12 points [-]

finitely Turing-compute a discrete universe with self-consistent Time-Turners in it

In computational physics, the notion of self-consistent solutions is ubiquitous. For example, the behaviour of charged particles depends on the electromagnetic fields, and the electromagnetic fields depend on the behaviour of charged particles, and there is no "preferred direction" in this interaction. Not surprisingly, much research has been done on methods of obtaining (approximations of) such self-consistent solutions, notably in plasma physics and quantum chemistry, to give just some examples.

It is true that these examples do not involve time travel, but I expect the mathematics to be quite similar, with the exception that these physics-based examples tend to have (should have) uniquely defined solutions.
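A minimal sketch of that fixed-point structure, with the actual physics stubbed out as a one-dimensional update rule (the `update` function here is a placeholder, not a real field equation):

```python
import math

def solve_self_consistent(update, guess=0.0, tol=1e-12, max_iter=1000):
    """Find a self-consistent solution x satisfying x = update(x) by
    repeated substitution, the same loop structure used (with far more
    elaborate update steps) in self-consistent field calculations."""
    x = guess
    for _ in range(max_iter):
        x_new = update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Stand-in "physics": the configuration determines a response of cos(x),
# which must in turn equal the configuration.
x = solve_self_consistent(math.cos)
print(round(x, 6))  # the unique fixed point of cos: 0.739085
```

Real plasma or quantum-chemistry solvers differ mainly in the update step and in using acceleration schemes (mixing, Anderson/DIIS) when plain substitution fails to contract.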

Comment author: Eliezer_Yudkowsky 28 November 2012 11:16:36AM 4 points [-]

Er, I was not claiming to have invented the notion of an equilibrium but thank you for pointing this out.

Comment author: Plasmon 28 November 2012 11:48:21AM 2 points [-]

I didn't think you were claiming that, I was merely pointing out that the fact that self-consistent solutions can be calculated may not be that surprising.

Comment author: Eliezer_Yudkowsky 28 November 2012 05:59:32PM 2 points [-]

The Novikov self-consistency principle has already been invented; the question was whether there was precedent for "You can actually compute consistent histories for discrete universes." Discrete, not continuous.

Comment author: Plasmon 28 November 2012 06:49:20PM 1 point [-]

Yes, hence, "In computational physics", a branch of physics which necessarily deals with discrete approximations of "true" continuous physics. It seems really quite similar; I can even give actual examples of (somewhat exotic) algorithms where information from the future state is used to calculate the future state, very analogous to your description of a time-travelling game of life.
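For a concrete instance of "information from the future state is used to calculate the future state", the standard (non-exotic) example is an implicit integrator such as backward Euler: the unknown future state appears on both sides of its own update equation, and one way to solve that self-consistency condition is fixed-point iteration. A sketch, assuming the simple decay equation y' = -2y:

```python
def backward_euler_step(y, h, f, tol=1e-12):
    """One implicit (backward Euler) step: the unknown future state y_next
    must satisfy y_next = y + h * f(y_next). We solve this self-consistency
    condition by fixed-point iteration (it contracts when |h * f'| < 1)."""
    y_next = y  # initial guess: no change
    while True:
        y_new = y + h * f(y_next)
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new

# Integrate y' = -2y from y(0) = 1 to t = 1; exact answer is e^-2 = 0.1353...
y, h = 1.0, 0.01
for _ in range(100):
    y = backward_euler_step(y, h, lambda u: -2.0 * u)
print(y)  # close to e^-2, with O(h) discretization error
```

No time travel involved, but the mathematics - postulate the future state, demand consistency, iterate - is the same shape as computing a self-consistent history.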

Comment author: Peterdjones 28 November 2012 12:05:59PM *  10 points [-]

There are precedents and parallels in Causal Sets and Causal Dynamical Triangulation

CDT is particularly interesting for its ability to predict the correct macroscopic dimensionality of spacetime:

" At large scales, it re-creates the familiar 4-dimensional spacetime, but it shows spacetime to be 2-d near the Planck scale, and reveals a fractal structure on slices of constant time"

Comment author: Jach 29 November 2012 08:35:52AM 0 points [-]

I was going to reply with something similar. Kevin Knuth in particular has an interesting paper deriving special relativity from causal sets: http://arxiv.org/abs/1005.4172

Comment author: evand 28 November 2012 06:05:32PM 8 points [-]

Scott Aaronson showed that time loop logic collapses PSPACE to polynomial time.

It replaces the exponential time requirement with an exactly analogous exponential MTBF (mean time between failures) reliability requirement. I'm surprised by how infrequently this is pointed out in such discussions, since it seems to me rather important.

Comment author: Douglas_Knight 29 December 2012 04:50:07PM 1 point [-]

It's true that it requires an exponentially small error rate, but that's cheap, so why emphasize it?

Comment author: evand 29 December 2012 06:49:28PM 1 point [-]

I am not aware of any process, ever, with a demonstrated error rate significantly below that implied by a large, fast computer operating error-free for an extended period of time. If you can't improve on that, you aren't getting interesting speed improvements from the time machine, merely moderately useful ones. (In other words, you're making solvable expensive problems cheap, but you're not making previously unsolvable problems solvable.)

In cases where building high-reliability hardware is more difficult than normal (for example: high-radiation environments subject to drastic temperature changes and such), the existing experience base is that you can't cheaply add huge amounts of reliability, because the error detection and correction logic starts to limit the error performance.

Right now, a high performance supercomputer working for a couple weeks can perform ~ 10^21 operations, or about 2^70. If we assume that such a computer has a reliability a billion times better than it has actually demonstrated (which seems like a rather generous assumption to me), that still only leaves you solving 100-bit size NP / PSPACE problems. Adding error correction and detection logic might plausibly get you another factor of a billion, maybe two factors of a billion. In other words: it might improve things, but it's not the indistinguishable from magic NP-solving machine some people seem to think it is.
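The arithmetic behind those figures can be made explicit: if the per-operation error rate is p, then a run of roughly 2^n operations stays error-free only while p · 2^n remains small, so the feasible problem size is about log2(1/p) bits. A quick illustrative computation using the same ballpark numbers (10^21 demonstrated operations, plus the hypothetical factor-of-a-billion improvements):

```python
import math

def max_problem_bits(per_op_error_rate):
    """Largest n for which ~2^n operations can plausibly run with zero
    errors: we need p * 2^n << 1, i.e. n < log2(1/p)."""
    return math.log2(1.0 / per_op_error_rate)

for p in (1e-21, 1e-30, 1e-39):
    print(f"error rate {p:g}: about {max_problem_bits(p):.0f}-bit problems")
```

An error rate of 1e-21 per operation (roughly what a supercomputer running clean for two weeks demonstrates) caps you near 70 bits; a billion-fold improvement buys about 30 more bits, matching the 100-bit estimate above. Each additional bit of problem size demands halving the error rate, which is why the time machine doesn't behave like a magic NP oracle.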

Comment author: Eugine_Nier 29 November 2012 06:10:31AM -1 points [-]

It's also interesting how few people seem to realize that Scott Aaronson's time loop logic is basically a form of branching timelines rather than HP's one consistent universe.

Comment author: shminux 30 November 2012 12:34:37AM *  6 points [-]

It is rarely appreciated that the Novikov self-consistency principle is a trivial consequence of the uniqueness of the metric tensor (up to diffeomorphisms) in GR.

Indeed, given that (a neighborhood of) each spacetime point, even in a spacetime with CTCs, has a unique metric, it also has a unique stress-energy tensor derived from this metric (you need neighborhoods to do derivatives). So there is a unique matter content at each spacetime point. In other words, your grandfather cannot be alternately alive (first time through the loop) or dead (when you kill him the second time through the loop) at a given moment in space and time.

The unfortunate fact that we can even imagine the grandfather paradox to begin with is due to our intuitive thinking that spacetime is only a background for "real events", a picture as incompatible with GR as perfectly rigid bodies are with SR.

Comment author: iainjcoleman 05 December 2012 11:32:29PM 0 points [-]

How does the mass-energy of a dead grandfather differ from the mass-energy of a live one?

Comment author: shminux 05 December 2012 11:37:13PM 5 points [-]

Pretty drastically. One is decaying in the ground, the other is moving about in search of a mate. Most people have no trouble telling the difference.

Comment author: [deleted] 06 December 2012 01:20:20AM 3 points [-]

The total four-momentum may well be the same in both case, but the stress-energy-momentum tensor is different (the blood is moving in the live grandfather but not the dead one, etc., etc.)

Comment author: orthonormal 28 November 2012 11:54:22PM 5 points [-]

I've seen academic physicists use postselection to simulate closed timelike curves; see for instance this arXiv paper, which compares a postselection procedure to a mathematical formalism for CTCs.
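The classical analogue of that postselection trick is easy to sketch: guess the bit emerging from the loop, run the channel, and throw away the runs where the output contradicts the guess. The surviving runs sample the self-consistent histories. A toy illustration (my construction for a single classical bit, not the formalism in the linked paper):

```python
import random

def postselect_ctc(channel, samples=100_000, seed=1):
    """Simulate a 1-bit closed timelike curve by postselection: guess the
    bit coming out of the loop, run the (possibly stochastic) channel on
    it, and keep only runs where the output matches the guess."""
    rng = random.Random(seed)
    kept = []
    for _ in range(samples):
        guess = rng.randrange(2)
        out = channel(guess, rng)
        if out == guess:           # postselect on self-consistency
            kept.append(out)
    return kept

# Noisy 'not' channel: flips the bit with prob 0.9, keeps it with prob 0.1.
# A perfect 'not' has no consistent classical history at all; the noise
# leaves a rare consistent branch, which postselection amplifies.
kept = postselect_ctc(lambda b, rng: 1 - b if rng.random() < 0.9 else b)
print(len(kept) / 100_000)   # ~0.1 of runs survive postselection
```

The exponential cost shows up directly: as the channel approaches a perfect paradox, the postselection survival probability goes to zero, so you need exponentially many (or exponentially reliable) trials.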

Comment author: nigerweiss 28 November 2012 10:27:52PM 4 points [-]

I tend to believe that most fictional characters are living in malicious computer simulations, to satisfy my own pathological desire for consistency. I now believe that Harry is living in an extremely expensive computer simulation.

Comment author: Steve_Rayhawk 28 November 2012 07:16:54PM *  4 points [-]

I know that the idea of "different systems of local consistency constraints on full spacetimes might or might not happen to yield forward-sampleable causality or things close to it" shows up in Wolfram's "A New Kind of Science", for all that he usually refuses to admit the possible relevance of probability or nondeterminism whenever he can avoid doing so; the idea might also be in earlier literature.

that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it.

I'd thought about that a long time previously (not about Time-Turners; this was before I'd heard of Harry Potter). I remember noting that it only really works if multiple transitions are allowed from some states, because otherwise there's a much higher chance that the consistency constraints would not leave any histories permitted. ("Histories", because I didn't know model theory at the time. I was using cellular automata as the example system, though.) (I later concluded that Markov graphical models with weights other than 1 and 0 were a less brittle way to formulate that sort of intuition (although, once you start thinking about configuration weights, you notice that you have problems about how to update if different weight schemes would lead to different partition function values).)

I think there might have been an LW comment somewhere that put me on that track

I know we argued briefly at one point about whether Harry could take the existence of his subjective experience as valid anthropic evidence about whether or not he was in a simulation. I think I was trying to make the argument specifically about whether or not Harry could be sure he wasn't in a simulation of a trial timeline that was going to be ruled inconsistent. (Or, implicitly, a timeline that he might be able to control whether or not it would be ruled inconsistent. Or maybe it was about whether or not he could be sure that there hadn't been such simulations.) But I don't remember you agreeing that my position was plausible, and it's possible that that means I didn't convey the information about which scenario I was trying to argue about. In that case, you wouldn't have heard of the idea from me. Or I might have only had enough time to figure out how to halfway defensibly express a lesser idea: that of "trial simulated timelines being iterated until a fixed point".

Comment author: Alexei 28 November 2012 12:57:25PM *  3 points [-]

You can do some sort of lazy evaluation. I took the example you gave with the 4x4 grid (by the way you have a typo: "we shall take a 3x3 Life grid"), and ran it forwards, and it converges to all empty squares in 4 steps. See this doc for calculations.

Even if it doesn't converge, you can add another symbol to the system and continue playing the game with it. You can think of the symbol as a function. In my document, x = compute_cell(x=2, y=2, t=2).
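Running such a grid forward is a few lines of code. A sketch (the starting pattern here is a hypothetical diagonal triomino chosen because it dies out, not the actual grid from the linked doc):

```python
def life_step(cells, size=4):
    """One Game of Life step on a bounded size x size grid; cells is the
    set of live (x, y) coordinates, and cells outside the grid are dead."""
    nxt = set()
    for x in range(size):
        for y in range(size):
            n = sum((x + dx, y + dy) in cells
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
            if n == 3 or (n == 2 and (x, y) in cells):
                nxt.add((x, y))
    return nxt

# Hypothetical start: a diagonal triomino on the 4x4 grid.
cells = {(0, 0), (1, 1), (2, 2)}
steps = 0
while cells:
    cells = life_step(cells)
    steps += 1
print(steps)  # this pattern empties the grid in 2 steps
```

Once the grid reaches all-empty it stays there, so forward evaluation either converges like this or enters a cycle, which is what the extra symbol / lazy-evaluation trick is for.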

Comment author: Alex_Altair 28 November 2012 06:23:53PM 0 points [-]

by the way you have a typo

Fixed.

Comment author: Liron 01 December 2012 04:40:19AM 4 points [-]

I haven't yet seen anyone else point out that space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination, with c being the generalization of locality.

Yeah, this is one of the most profound things I've ever read. This is a RIDICULOUSLY good post.

Comment author: kremlin 26 December 2013 04:36:54PM 2 points [-]

The 'c is the generalization of locality' bit looked rather trivial to me. Maybe that's just EY rubbing off on me, but...

It's obvious that in Conway's Game of Life, it takes at least 5 iterations for one cell to affect a cell 5 units away, and c has for some time seemed to me like our world's version of that law.
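That "one cell per step" bound is easy to check numerically: flip a single cell in a random soup and watch how far the two runs can diverge. A sketch (the soup here is arbitrary; the invariant is the point):

```python
import random
from collections import Counter

def life_step(cells):
    """One Game of Life step on an unbounded grid; cells is a set of live (x, y)."""
    counts = Counter((x + dx, y + dy) for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A random soup, plus a copy with one cell flipped at the origin.
rng = random.Random(0)
grid = {(rng.randrange(-8, 9), rng.randrange(-8, 9)) for _ in range(60)}
perturbed = grid ^ {(0, 0)}

# After t steps, every cell whose state can differ lies within Chebyshev
# distance t of the flipped cell: one cell per step is Life's light speed.
for t in range(1, 6):
    grid, perturbed = life_step(grid), life_step(perturbed)
    diff = grid ^ perturbed
    radius = max((max(abs(x), abs(y)) for (x, y) in diff), default=0)
    assert radius <= t
```

The bound holds because each cell's update reads only its 8 neighbors, exactly parallel to c bounding causal influence to the light cone.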

Comment author: gjm 28 November 2012 03:11:12PM 3 points [-]

I think there might have been an LW comment somewhere that put me on that track or maybe even outright suggested it

I certainly made a remark on LW, very early in HPMoR, along the following lines: If magic, or anything else that seems to operate fundamentally at the level of human-like concepts, turns out to be real, then we should see that as substantial evidence for some kind of simulation/creation hypothesis. So if you find yourself in the role of Harry Potter, you should expect that you're in a simulation, or in a universe created by gods, or in someone's dream ... or the subject of a book :-).

I don't think you made any comment on that, so I've no idea whether you read it. I expect other people made similar points.

Comment author: RobbBB 29 November 2012 04:10:14AM *  3 points [-]

It's more immediately plausible to hypothesize that certain phenomena and regularities in Harry's experience are intelligently designed, rather than that the entire universe Harry occupies is. We can make much stronger inferences about intelligences within our universe being similar to us, than about intelligences who created our universe being similar to us, since, being outside our universe/simulation, they would not necessarily exist even in the same kind of logical structure that we do.

Comment author: aaronsw 29 November 2012 11:40:30PM *  1 point [-]

I don't totally understand it, but Zuse 1969 seems to talk about spacetime as a sort of discrete causal graph with c as the generalization of locality ("In any case, a relation between the speed of light and the speed of transmission between the individual cells of the cellular automaton must result from such a model."). Fredkin and Wolfram probably also have similar discussions.

Comment author: Dentin 29 November 2012 10:02:45PM 1 point [-]

I'm not sure how to respond to this; the ability to compute it in a finite fashion for discrete universes seemed trivially obvious to me when I first pondered the problem. It would never have occurred to me to actually write it down as an insight because it seemed like something you'd figure out within five minutes regardless.

"Well, we know there are things that can't happen because there are paradoxes, so just compute all the ones that can and pick one. It might even be possible to jig things such that the outcome is always well determined, but I'd have to think harder about that."

That said, this may just be a difference in background. When I was young, I did a lot of thinking about Conway's Life and in particular "Garden of Eden" states which have no precursor. Once you consider the possibility of Garden of Eden states and realize that some Life universes have a strict 'start time', you automatically start thinking about what other kinds of universes would be restricted. Adding a rule with time travel is just one step farther.

On the other hand, the space/time causal graph generalization is definitely something I didn't think about and isn't even something I'd heard vaguely mentioned. That one I'll have to put some thought into.