Comment author: Benito 25 July 2013 11:31:41PM 1 point [-]

You didn't quite say 'choose what is true', I was just pointing out how closely what you wrote matched certain anti-epistemologies :-)

I'm also saying that if you think the other worlds 'collapse' then your intuitions will collide with reality when you have to account for one of those other worlds decohering something you were otherwise expecting not to decohere. But this is relatively minor in this context.

Also, unless I misunderstood you, your last point is not relevant to the truth-value of the claim, which is what we're discussing here, not its social benefit (or whatever).

Comment author: RogerS 26 July 2013 11:05:19AM 1 point [-]

the truth-value of the claim, which is what we're discussing here

More precisely, it's what you're discussing. (Perhaps you mean I should be!) In the OP I discussed the implications of an infinitely divisible system for heuristic purposes without claiming such a system exists in our universe. Professionally, I use Newtonian mechanics to get the answers I need without believing Einstein was wrong. In other words, I believe true insights can be gained from imperfect accounts of the world (which is just as well, since we may well never have a perfect account). But that doesn't mean I deny the value of worrying away at the known imperfections.

Comment author: Benito 25 July 2013 09:24:13PM 0 points [-]

I observe the usual "Well, both explanations offer the exact same experimental outcomes, therefore I can choose what is true as I feel".

Furthermore, thinking in the Copenhagen way will mean constantly having to remember to put worlds you thought had 'collapsed' back into your calculations when they come to interfere with your world. It's easier (and a heck of a lot more parsimonious, but for that argument see the QM Sequence) to have your thoughts track reality if you think Many-Worlds is true.

Comment author: RogerS 25 July 2013 11:10:49PM 2 points [-]

Well, I didn't quite say "choose what is true". What truth means in this context is much debated and is another question. The present question is to understand what is and isn't predictable, and for this purpose I am suggesting that if the experimental outcomes are the same, I won't get the wrong answer by imagining CI to be true, however unparsimonious. If something depends on whether an unstable nucleus decays earlier or later than its half-life, I don't see how the inhabitants of the world where it has decayed early and triggered a tornado (so to speak) will benefit much by being confident of the existence of a world where it decayed late. Or isn't that the point?

Comment author: Vaniver 30 March 2013 12:55:28AM 0 points [-]

As for the Naval Gunner, the point is that he would be right in other fields than fundamental physics. In weather forecasting, long-term forecasts using coarser models are actually more accurate than those using fine meshes, because of the chaotic behaviour at smaller scales.

I don't quite agree here. It's true that chaotic interactions and floating point multiplication errors mean that long-running fine-grained maps are less accurate than long-running coarse-grained maps, but it seems cleaner to consider that a fact about computer science, not meteorology.

Thanks for pointing to the more recent EY post, which I look forward to reading. No time tonight.

I would actually recommend Hands vs. Fingers first if you haven't read it yet. It's shorter and may be more directly relevant to your interests.

Comment author: RogerS 25 July 2013 02:25:51PM 0 points [-]

I mentioned back in April that the point about chaos and computer science needed a proper discussion. It is here.

I also mentioned another way of taking the reductionism question further. I was referring to this.

Comment author: ChristianKl 25 July 2013 12:43:07PM 3 points [-]

It is often claimed that the Uncertainty Principle of quantum mechanics [10] makes the future unpredictable [11], but in the terms of the above analysis this is far from the whole story.

The predominant interpretation of quantum dynamics on LessWrong seems to be Many-Worlds. I think it would make sense to address it briefly.

Comment author: RogerS 25 July 2013 01:21:43PM 4 points [-]

I agree, I had thought of mentioning this but it's tricky. As I understand it, living in one of Many Worlds feels exactly like living in a single "Copenhagen Interpretation" world, and the argument is really over which is "simpler" and generally Occam-friendly - do you accept an incredibly large number of extra worlds, or an incredibly large number of reasons why those other worlds don't exist and ours does? So if both interpretations give rise to the same experience, I think I'm at liberty to adopt the Vicar of Bray strategy and align myself with whichever interpretation suits any particular context. It's easier to think about unpredictability without picturing Many Worlds - e.g. do we say "don't worry about driving too fast because there will be plenty of worlds where we don't kill anybody?" But if anybody can offer a Many Worlds version of the points I have made, I'd be most interested!

Comment author: Viliam_Bur 25 July 2013 12:18:53PM *  7 points [-]

So, shortly, these are the important things to consider:
* in some systems small errors grow to large errors;
* our measurements cannot be completely precise;
* there is some kind of randomness in physics (random collapse / indexical uncertainty from many-worlds splitting).

Therefore even the best possible measurement at time T1 may not give good results about time T2. Therefore exactly determining the future is not possible (unless we have some subsystem where the errors don't grow).

Comment author: RogerS 25 July 2013 01:08:27PM 1 point [-]

Yes, that looks like a good summary of my conclusions, provided it is understood that "subsystems" in this context can be of a much larger scale than the subsystems within them which diverge. (Rivers converge while eddies diverge).

The difference between Determinism & Pre-determination

3 RogerS 25 July 2013 11:41AM

1. Scope

 

There are two arm-waving views often expressed about the relationship between “determinism/causality” on the one hand and “predetermination/predictability in principle” on the other. The first treats them as essentially interchangeable: what is causally determined from instant to instant is thereby predetermined over any period - the Laplacian view. The second view is that this is a confusion, and they are two quite distinct concepts. What I have never seen thoroughly explored (and therefore propose to make a start on here) is the range of different cases which give rise to different relationships between determinism and predetermination. I will attempt to illustrate that, indeed, determinism is neither a necessary nor a sufficient condition for predetermination in the most general case.

To make the main argument clear, I will relegate various pedantic qualifications, clarifications and comments to [footnotes].

Most of the argument relates to cases of a physically classical, pre-quantum world (which is not as straightforward as often assumed, and certainly not without relevance to the world we experience). The difference that quantum uncertainty makes will be considered briefly at the end.

 

2. Instantaneous determinism

To start with it is useful to define what exactly we mean by an (instantaneously) determinist system. In simple terms this means that how the system changes at any instant is fully determined by the state of the system at that instant [1]. This is how physical laws work in a Newtonian universe. The arm-waving argument says that if this is the case, we can derive the state of the system at any future instant by advancing through an infinite number of infinitesimal steps. Since each step is fully determined, the outcome must be as well. However, as it stands this is a mathematical over-simplification. It is well known that "infinitely many infinitesimal steps" is, taken on its own, an indeterminate form, and so we have to look at this process more carefully - and this is where there turn out to be significant differences between different cases.
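
To make the limiting process concrete, here is a minimal sketch (my own illustration, not part of the original argument) of a deterministic law dx/dt = f(x) advanced by finite steps. For a well-behaved f, the finite-step trajectories settle down to a definite limit as the step size shrinks, which is what makes the "infinity of infinitesimal steps" meaningful:

    # Toy deterministic law: dx/dt = -x (exponential decay towards zero).
    def f(x):
        return -x

    def evolve(x0, t_end, n_steps):
        """Advance x0 to time t_end using n_steps forward-Euler steps."""
        dt = t_end / n_steps
        x = x0
        for _ in range(n_steps):
            x = x + dt * f(x)
        return x

    for n in (10, 100, 1000, 10000):
        print(n, evolve(1.0, t_end=1.0, n_steps=n))
    # The printed values approach exp(-1) ~ 0.3679 as n grows: the limit exists,
    # so "infinitely many infinitesimal steps" has a definite meaning here.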

 

3. Convergent and divergent behaviour

To illustrate the first difference that needs to be recognized, consider two simple cases - a snooker ball just about to collide with another snooker ball, and a snooker ball heading towards a pocket. In the first case, a small change in the starting position of the ball (assuming the direction of travel is unchanged) results in a steadily increasing change in the positions at successive instants after impact - that is, neighbouring trajectories diverge. In the second case, a small change in the starting position has no effect on the final position hanging in the pocket: neighbouring trajectories converge. So we can call these "divergent" and "convergent" cases respectively. [1.1]

Now consider what happens if we try to predict the state of some system (e.g. the position of the ball) after a finite time interval. Any attempt to find the starting position will involve a small error. The effect on the accuracy of prediction differs markedly in the two cases. In the convergent case, small initial errors will fade away with time. In the divergent case, by contrast, the error will grow and grow. Of course, if better instruments were available we could reduce the initial error and improve the prediction - but that would also increase the accuracy with which we could check the final error! So the notable fact about this case is that no matter how accurately we know the initial state, we can never predict the final state to the same level of accuracy - despite the perfect instantaneous determinism assumed, the last significant figure that we can measure remains as unpredictable as ever. [2]
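
A toy numerical illustration of the difference (mine, with made-up numbers rather than actual snooker physics): apply a convergent rule and a divergent rule to a "true" state and to a slightly mis-measured copy of it, and watch the gap between them.

    # "True" state vs. measured state with a small initial error, pushed through
    # a convergent rule (neighbouring trajectories approach each other) and a
    # divergent rule (neighbouring trajectories separate).
    def error_after(rule, x0, initial_error, steps):
        a, b = x0, x0 + initial_error
        for _ in range(steps):
            a, b = rule(a), rule(b)
        return abs(a - b)

    convergent = lambda x: 0.5 * x
    divergent = lambda x: 2.0 * x

    print(error_after(convergent, 1.0, 1e-6, 30))  # ~1e-15: the error fades away
    print(error_after(divergent, 1.0, 1e-6, 30))   # ~1e+3: the error swamps the answer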

One possible objection that might be raised to this conclusion is that with "perfect knowledge" of the initial state, we can predict any subsequent state perfectly. This is philosophically contentious - rather analogous to arguments about what happens when an irresistible force meets an immovable object. For example, philosophers who believe in "operational definitions" may doubt whether there is any operation that could be performed to obtain "the exact initial conditions". I prefer to follow the mathematical convention that says that exact, perfect, or infinite entities are properly understood as the limiting cases of more mundane entities. On this convention, if the last significant figure of the most accurate measure we can make of an outcome remains unpredictable for any finite degree of accuracy, then we must say that the same is true for "infinite accuracy".

 

The conclusion that there is always something unknown about the predicted outcome places a "qualitative upper limit", so to speak, on the strength of predictability in this case, but we must also recognize a "qualitative lower limit" that is just as important, since in the snooker impact example, whatever accuracy of prediction is desired after whatever time period, we can always calculate an accuracy of initial measurement that would enable it. (However, as we shall shortly see [3], this does not apply in every case.) The combination of predictability in principle to any degree, with necessary unpredictability to the precision of the best available measurement, might be termed "truncated predictability".

 

4. More general cases

The two elementary cases considered so far illustrate the importance of distinguishing convergent from divergent behaviour, and so provide a useful paradigm to be kept in mind, but of course, most real cases are more complicated than this.

To take some examples, a system can have both divergent parts and convergent parts at any instant - such as different balls on the same snooker table; an element whose trajectory is behaving divergently at one instant may behave convergently at another instant; convergent movement along one axis may be accompanied by divergent movement relative to another; and, significantly, divergent behaviour at one scale may be accompanied by convergent behaviour at a different scale. Zoom out from that snooker table, round positions to the nearest metre or so, and the trajectories of all the balls follow that of the adjacent surface of the earth.

There is also the possibility that a system can be potentially divergent at all times and places. A famous case of such behaviour is the chaotic behaviour of the atmosphere, first clearly understood by Edward Lorenz in 1961. This story comes in two parts, the second apparently much less well known than the first.

 

5. Chaotic case: discrete

The equations normally used to describe the physical behaviour of the atmosphere formally describe a continuum, an infinitely divisible fluid. As there is no algebraic “solution” to these equations, approximate solutions have to be found numerically, which in turn require the equations to be “discretised”, that is adapted to describe the behaviour at, or averaged around, a suitably large number of discrete points. 

 

The well-known part of Lorenz’s work [4] arose from an accidental observation, that a very small change in the rounding of the values at the start of a numerical simulation led in due course to an entirely different “forecast”. Thus this is a case of divergent trajectories from any starting point, or “sensitivity to initial conditions” as it has come to be known.

 

The part of "chaos theory" that grew out of this initial insight describes the divergent trajectories from any starting point: they diverge exponentially, with a time constant for the particular problem case known as the Kolmogorov constant [5] (more commonly called the Lyapunov time). Thus we can still say, as we said for the snooker ball, that whatever accuracy of prediction is desired after whatever time period, we can always calculate an accuracy of initial measurement that would enable it.
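
As a hedged illustration of this exponential divergence (a crude forward-Euler integration of Lorenz's 1963 equations, written by me for this purpose and not taken from any of the sources cited): two runs started a hair apart separate by many orders of magnitude within a few tens of time units, and the logarithm of their separation grows roughly linearly until it saturates.

    import math

    def step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz 1963 system."""
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.0 + 1e-9)   # identical except in the ninth decimal place
    for i in range(1, 40001):
        a, b = step(a), step(b)
        if i % 8000 == 0:
            print(f"t = {i * 0.001:4.0f}   separation = {math.dist(a, b):.3e}")
    # The separation climbs from ~1e-9 towards the size of the attractor (~10),
    # growing by a roughly constant factor per unit time while it is still small.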

 

6. Chaotic case: continuum

Other researchers might have dismissed the initial discovery of sensitivity to initial conditions as an artefact of the computation, but Lorenz realised that even if the computation had been perfect, exactly the same consequences would flow from disturbances in the fluid in the gaps between the discrete points of the numerical model.  This is often called the “Butterfly Effect” because of a conference editor's colourful summary that “the beating of a butterfly’s wings in Brazil could cause a tornado in Texas”.

 

It is important to note that the Butterfly Effect is not strictly the same as “Sensitivity to Initial Conditions” as is often reported [6], although they are closely related. Sensitivity to Initial Conditions is an attribute of some discretised numerical models. The Butterfly Effect describes an attribute of the equations describing a continuous fluid, so is better described as “sensitivity to disturbances of minimal extent”, or in practice, sensitivity to what falls between the discrete points modelled.

 

Since, as noted above, there is no algebraic solution to the continuous equations, the only way to establish the divergent characteristics of the equations themselves is to repeatedly reduce the scale of discretisation (the typical distance between the points on the grid of measurements) and observe the trend. In fact, this was done for a very practical reason: to find out how much benefit would be obtained, in terms of the durability of the forecast [7], by providing more weather stations. The result was highly significant: each doubling of the number of stations increased the durability of the forecast by a smaller amount, so that (by extrapolation) as the number of imaginary weather stations was increased without limit, the forecast durability of the model converged to a finite value[8]. Thus, beyond this time limit, the equations that we use to describe the atmosphere give indeterminate results, however much detail we have about the initial conditions. [9]
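
The diminishing-returns behaviour can be mimicked with a deliberately toy model (the numbers below are invented for illustration and are not the New Scientist figures): if each halving of the grid spacing buys only half as much extra forecast durability as the previous one, the total converges to a finite ceiling however fine the grid becomes.

    durability = 3.0   # days of useful forecast with the coarsest grid (made up)
    gain = 2.0         # extra days bought by the first halving (made up)
    for halvings in range(1, 11):
        durability += gain
        gain /= 2.0
        print(f"after {halvings:2d} halvings of the grid spacing: {durability:.3f} days")
    # durability -> 3 + 2 + 1 + 0.5 + ... = 7 days: a finite limit, never exceeded.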

 

Readers will doubtless have noticed that this result does not strictly apply to the earth’s atmosphere, because that is not the infinitely divisible fluid that the equations assumed (and a butterfly is likewise finitely divisible). Nevertheless, the fact that there are perfectly well-formed, familiar equations which by their nature have unpredictable outcomes after a finite time interval vividly exposes the difference between determinism and predetermination.

 

With hindsight, the diminishing returns in forecast durability from refining the scale of discretisation is not too surprising: it is much quicker for a disturbance on a 1 km scale to have effects on a 2 km scale than for a disturbance on a 100 km scale to have effects on a 200 km scale.

 

7. Consequences of quantum uncertainty

It is often claimed that the Uncertainty Principle of quantum mechanics [10] makes the future unpredictable [11], but in the terms of the above analysis this is far from the whole story.

 

The effect of quantum mechanics is that at the scale of fundamental particles [12] the laws of physical causality are probabilistic. As a consequence, there is certainly no basis, for example, to predict whether an unstable nucleus will disintegrate before or after the expiry of its half-life.
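
For a single nucleus the most the theory offers is a probability. A small sketch of the standard half-life relation (mine, added for concreteness): with half-life T, the chance that the nucleus has decayed by time t is P(t) = 1 - 2^(-t/T), which says nothing about whether any particular nucleus goes early or late.

    def p_decayed(t, half_life):
        """Probability that a single nucleus has decayed by time t."""
        return 1.0 - 2.0 ** (-t / half_life)

    for t in (0.5, 1.0, 2.0, 5.0):
        print(f"after {t} half-lives: P(decayed) = {p_decayed(t, 1.0):.3f}")
    # 0.293, 0.500, 0.750, 0.969 - a distribution, not a prediction for this nucleus.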

 

However, in the case of a convergent process at ordinary scales, the unpredictability at quantum scale is immaterial, and at the scale of interest predictability continues to hold sway. The snooker ball finishes up at the bottom of the pocket whatever the energy levels of its constituent electrons. [13]

 

It is in the case of divergent processes that quantum effects can make for unpredictability at large scales. In the case of the atmosphere, for example, the source of that tornado in Texas could be a cosmic ray in Colombia, and cosmic radiation is strictly non-deterministic. The atmosphere may not be the infinitely divisible fluid considered by Lorenz, but a molecular fluid subject to random quantum processes has just the same lack of predictability.

 

[EDIT] How does this look in terms of the LW-preferred Many Worlds interpretation of quantum mechanics?[14] In this framework, exact "objective prediction" is possible in principle but the prediction is of an ever-growing array of equally real states. We can speak of the "probability" of a particular outcome in the sense of the probability of that outcome being present in any state chosen at random from the set. In a convergent process the cases become so similar that there appears to be only one outcome at the macro scale (despite continued differences on the micro scale); whereas in a divergent process the "density of probability" (in the above sense) becomes so vanishingly small for some states that at a macro scale the outcomes appear to split into separate branches. (They have become decoherent.) Any one such branch appears to an observer within that branch to be the only outcome, and so such an observer could not have known what to "expect" - only the probability distribution of what to expect. This can be described as a condition of subjective unpredictability, in the sense that there is no subjective expectation that can be formed before the divergent process which can reliably be expected to coincide with an observation made after the process. [END of EDIT]

 

8. Conclusions

What has emerged from this review of different cases, it seems to me, is that it is the convergent/divergent dichotomy that has the greatest effect on the predictability of a system’s behaviour, not the deterministic/quantised dichotomy at subatomic scales.

 

More particularly, in short-hand:-

Convergent + deterministic => full predictability

Convergent + quantised => predictability at all super-atomic scales

Divergent + deterministic + discrete => “truncated predictability”

Divergent + deterministic + continuous => unpredictability

[EDIT] Divergent + quantised => objective predictability of the multiverse but subjective unpredictability

 

Footnotes

1. The “state” may already include time derivatives of course, and in the case of a continuum, the state includes spatial gradients of all relevant properties.

1.1 For simplicity I have ignored the case between the two where neighbouring trajectories are parallel. It should be obvious how the argument applies to this case. Convergence/divergence is clearly related to (in)stability, and less directly to other properties such as (non)-linearity and (a)periodicity, but as convergence defines the characteristic that matters in the present context it seems better to focus on that.

2. In referring to a “significant figure” I am of course assuming that decimal notation is used, and that the initial error has diverged by at least a factor of 10.

3. In section 6.

4. For example, see Gleick, “Chaos”, "The Butterfly Effect" chapter.

5. My source for this statement is a contribution by Eric Kvaalen to the New Scientist comment pages.

6. E.g. by Gleick or Wikipedia.

7. By durability I mean the period over which the required degree of accuracy is maintained.

8. This account is based on my recollection, and notes made at the time, of an article in New Scientist, volume 42, p290. If anybody has access to this or knows of an equivalent source available on-line, I would be interested to hear!

9. I am referring to predictions of the conditions at particular locations and times. It is, of course, possible to predict average conditions over an area on a probabilistic basis, whether based on seasonal data, or the position of the jetstream etc. These are further examples of how divergence at one scale can be accompanied by something nearer to convergence on another scale.

10. I am using “quantum mechanics” as a generic term to include its later derivatives such as quantum chromodynamics. As far as I understand it these later developments do not affect the points made here. However, this is certainly well outside my professional expertise in aspects of Newtonian mechanics, so I will gladly stand corrected by more specialist contributors!

11. E.g. by Karl Popper in an appendix to The Poverty of Historicism.

12. To be pedantic, I’m aware that this also applies to greater scales, but to a vanishingly small extent.

13. In such cases we could perhaps say that predictability is effectively an “emergent property” that is not present in the reductionist laws of the ultimate ingredients but only appears in the solution space of large scale aggregates. 

14. Thanks to the contributors of the comments below as at 30 July 2013 which I have tried to take into account. The online preview of "The Emergent Multiverse: Quantum Theory According to the Everett Interpretation" by David Wallace has also been helpful to understanding the implications of Many Worlds.

Comment author: JonahSinick 13 June 2013 04:11:08PM 3 points [-]

In hindsight, my presentation in this article was suboptimal. I clarify in a number of comments on this thread.

The common thread that ties together the quantitative majors example and the Penrose example is "rather than dismissing arguments that appear to break down upon examination, one should recognize that such arguments often have a nontrivial chance of succeeding owing to model uncertainty, and one should count such arguments as evidence."

In the case of the quantitative majors example, the point is that you can amass a large number of such arguments to reach a confident conclusion. In the Penrose example, the point is that one should hedge rather than concluding that Penrose is virtually certain to be wrong.

I can give more examples of the use of MWAs to reach a confident conclusion. They're not sufficiently polished to post, so if you're interested in hearing them, shoot me an email at jsinick@gmail.com.

Comment author: RogerS 19 June 2013 06:14:08PM 0 points [-]

Perhaps "hedging" is another term that also needs expanding here. One can reasonably assume that Penrose's analysis has some definite flaws in it, given the number of probable flaws identified, while still suspecting (for the reasons you've explained) that it contains insights that may one day contribute to sounder analysis. Perhaps the main implication of your argument is that we need to keep arguments in our mind in more categories then just a spectrum from "strong" to "weak". Some apparently weak arguments may be worth periodic re-examination, whereas many probably aren't.

In response to Reductionism
Comment author: RogerS 23 April 2013 04:24:57PM *  0 points [-]

"having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory

Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that

my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Let’s apply that test. It isn’t only predictions that apply at different levels; so do the results. We can have right or wrong models at quark level, atom level, crystal level, and engineering component level. At each level, the fact that one model is right and another wrong is a fact about reality: it is Talking about Territory. When we say a 747 wing is really there, we mean that (for example) visualising it as a saucepan produces expectations that the results will not fulfil, whereas visualising it as a wing produces expectations that they will. Indeed, we can have many different models of the wing, all equally correct - since they all result in predictions that conform to the same observations. The choice of correct model is what is in our head. The fact that it has to be (equivalent to) a model of a wing to be correct is in the Territory. In short, when Talking about Territory we can describe things at as many levels (of aggregation) as yield descriptions that can be tested against observation.

at different levels

What exactly is meant by “levels” here? The Naval Gunner is arguing about levels of approximation. The discussion of Boeing 747 wings is an argument about levels of aggregation. They are not the same thing. Treating the forces on an aircraft wing at the aggregate level is leaving out internal details that per se do not affect the result. There will certainly be approximations involved in practice, of course, but they don’t stem from the actual process of aggregation, which is essentially a matter of combining all the relevant force equations algebraically, eliminating internal forces, before solving them, rather than combining the calculated forces numerically.

...the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces

The way that reality works, as far as we can tell, is that there are basic ingredients, with their properties, which in any given system at any given instant exist in a particular configuration. Now reality is not just the ingredients but also the configuration - a wrong model of the configuration will give wrong predictions just as a wrong model of the ingredients will. The possible configurations include known stable structures. These structures are likewise real, because any model of a configuration which cannot be transformed into a model which includes the identified structure in question is in conflict with reality. Physics as I understand it comprises (a) laws that are common to different configurations of the ingredients, and (b) laws that are common to different configurations of the known stable structures. Physicalism implies the belief that laws (b) are always consistent with laws (a) when both are sufficiently accurate.

...The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings

True, but the key word here is “additional”. Newton’s laws were undoubtedly laws of physics, and in my school physics lessons they were expressed in terms of forces on bodies, rather than on their constituent particles. The laws for forces on constituent particles were then derived from Newton’s laws by a thought experiment in which a body is divided up. In higher education today the reverse process is the norm, but reality is indifferent to which equivalent formulation we use: both give identical predictions. [Original wording edited]

General Relativity contains the additional causal entity known as space-time curvature, which is an aggregate effect of all the massive particles in the universe given their configuration, and so is not a natural fit in the Procrustean bed of reductionism. [Postscript] Interestingly, I've read that Newton was never happy with his idea of gravitation as a force of attraction between two things, because it implied a property shared between the two things concerned and therefore intrinsic to neither - but he failed to find a better formulation.

The critical words are really and see

Indeed, but when you see a wing it is not just in the mind, it is also evidence of how reality is configured. It is the result of the experiment you perform by looking.

.. the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought

What the gunner really thought is pure speculation of course, but this assumption by EY raises an important point about meta-models.

In thought experiments the outcome is determined by the applicable universal laws – that’s meta-model (A). In any real-world case you need a model of the application as well as models of universal laws. That’s meta-model (B). An actual artillery shell will be affected by things like air resistance, so the greater accuracy of Einstein’s laws in textbook cases is no guarantee of it giving more accurate results in this case. EY obviously knew this, but his meta-model excluded it from consideration here. Treating the actual application as a case governed only by Newton’s or Einstein’s laws is itself a case of “Mind Projection Fallacy” – projecting meta-model (A) onto a real-world application. So it’s not a case of the gunner mistaking a model for reality, but of mistaking the criteria for choosing between one imperfect model and another. I imagine gunners are generally practical men, and in the field of the applied sciences it is very common for competing theories to have their own fields of application where they are more accurate than the alternatives – so although he was clearly misinformed, at least his meta-model was the right one.

[Postscript] An arguable version of reductionism is the belief that laws about the ingredients of reality are in some sense "more fundamental" than laws about stable structures of the ingredients. This cannot be an empirical truth, since both laws give the same predictions where they overlap so cannot be empirically distinguished. Neither is any logical contradiction implied by its negation. It can only be a metaphysical truth, whatever that is. Doesn't it come down to believing Einstein's essentialist concept of science against Bohr's instrumentalist version? That science doesn't just describe, but also tells? So pick Bohr as an opponent if you must, not some anonymous gunner.

Comment author: PrawnOfFate 20 April 2013 12:39:30AM -1 points [-]

I don't think so, since the information I would be comparing in this case (the "file contents") would be just a reduction of the information in two regions of space-time.

And under determinism, all the information in any spatial slice will be reproduced throughout time. Hence the false positives.

Comment author: RogerS 20 April 2013 02:07:30PM 0 points [-]

I'm not clear what you are meaning by "spatial slice". That sounds like all of space at a particular moment in time. In speaking of a space-time region I am speaking of a small amount of space (e.g. that occupied by one file on a hard drive) at a particular moment in time.

Comment author: PrawnOfFate 18 April 2013 01:37:27AM 0 points [-]

OK, not strictly "conserved", except that I understand quantum mechanics requires that the information in the universe must be conserved

..absent collapse..

But what I meant is that if you download a file to a different medium and then delete the original, the information is still the same although the descriptions at quark level are utterly different.

But a 4D description of all the changes involved in the copy-and-delete process would be sufficient to show that the information in the first medium is equivalent to the information in the second. In fact, your problem would be false positives, since determinism will always show that a subsequent state contains the same information as a previous one.

Comment author: RogerS 19 April 2013 09:26:34PM 0 points [-]

..absent collapse..

Ah, is that so.

But a 4D description of all the changes involved in the copy-and-delete process would be sufficient..

Yes, I can see that that's one way of looking at it.

In fact, your problem would be false positives

I don't think so, since the information I would be comparing in this case (the "file contents") would be just a reduction of the information in two regions of space-time.
