So, briefly, these are the important things to consider:
Even the best possible measurement at time T1 may not give good results about time T2. Therefore exactly determining the future is not possible (unless we have some subsystem where the errors don't grow).
Yes, that looks like a good summary of my conclusions, provided it is understood that "subsystems" in this context can be of a much larger scale than the subsystems within them which diverge. (Rivers converge while eddies diverge).
It is often claimed that the Uncertainty Principle of quantum mechanics [10] makes the future unpredictable [11], but in terms of the above analysis this is far from the whole story.
The predominant interpretation of quantum dynamics on LessWrong seems to be Many-Worldism. I think it would make sense to address it briefly.
I agree, I had thought of mentioning this but it's tricky. As I understand it, living in one of Many Worlds feels exactly like living in a single "Copenhagen Interpretation" world, and the argument is really over which is "simpler" and generally Occam-friendly - do you accept an incredibly large number of extra worlds, or an incredibly large number of reasons why those other worlds don't exist and ours does? So if both interpretations give rise to the same experience, I think I'm at liberty to adopt the Vicar of Bray strategy and align myself with whichever interpretation suits any particular context. It's easier to think about unpredictability without picturing Many Worlds - e.g. do we say "don't worry about driving too fast because there will be plenty of worlds where we don't kill anybody?" But if anybody can offer a Many Worlds version of the points I have made, I'd be most interested!
At the risk of starting yet another quantum interpretation debate, the argument that 'Copenhagen is simpler than MWI' does not hold up well. For instance, a quantum computer with 500 qubits will, at some point in its processing, be in a superposition of 2^500 states at once. According to Copenhagen, one of these is randomly chosen at measurement. But to know the probability, you still have to keep track of 2^500 states. It's not simple at all. If you could get by with many fewer states (say, a polynomial function of the number of qubits), it would be possible to efficiently simulate a quantum computer on a classical one. While the impossibility of this hasn't been proven, the consensus opinion seems to be that it is unlikely.
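(Not part of the original comment, but a minimal sketch of the state-counting point: classically simulating n qubits means tracking 2^n complex amplitudes, so the cost explodes long before 500 qubits. The helper `apply_hadamard` is purely illustrative.)

```python
import numpy as np

def apply_hadamard(state, qubit, n):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    # View the 2**n amplitudes as a tensor with one axis per qubit,
    # apply the 2x2 gate along the chosen axis, then flatten again.
    t = state.reshape([2] * n)
    t = np.tensordot(h, t, axes=([1], [qubit]))
    t = np.moveaxis(t, 0, qubit)
    return t.reshape(-1)

n = 20                             # already ~1e6 amplitudes; 500 qubits would need 2**500
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                     # start in |00...0>
for q in range(n):
    state = apply_hadamard(state, q, n)

probs = np.abs(state) ** 2         # measurement probabilities need *all* 2**n amplitudes
print(len(probs), probs[0])        # 1048576, each probability ~ 1/2**20
```

Even at n = 20 the state vector has about a million entries; at n = 500 it would dwarf any classical memory.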
MWI simply acknowledges this inherent complexity in quantum mechanics, and tries to deal with it directly instead of avoiding it. If it makes you more comfortable, you can consider the whole universe as a big quantum computer, and you're living in it. That's MWI.
And another attractive aspect of MWI is that it is entirely deterministic. While each individual universe may appear random, the multiverse as a whole evolves deterministically.
you can consider the whole universe as a big quantum computer, and you're living in it
I recall hearing it argued somewhere that it's not so much "a computer" as "the universal computer", in the sense that it is impossible in principle for there to be another computer performing the calculations from the same initial conditions (and, for example, getting to a particular state sooner). I like that if it's true. The calculations can be performed, but only by existing.
the multiverse as a whole evolves deterministically
So to get back to my question of what predictability means in a QM universe under MW, the significant point seems to be that prediction is possible starting from the initial conditions of the Big Bang, but not from a later point in a particular universe (without complete information about all the other universes that have evolved from the Big Bang)?
Rather, the significant point is that you can predict the future with arbitrary precision, but the prediction will say things like "you are superpositioned into these three states". That result is deterministic, but it doesn't help you predict your future subjective experience when you're facing down the branch point.
(You know that there will be three yous, each thinking "Huh, I was number 1/2/3" -- but until you check, you won't know which you you are.)
So, to get this clear (being well outside my comfort zone here), once a split into two branches has occurred, they no longer influence each other? The integration over all possibilities is something that happens in only one of the many worlds? (My recent understanding is based on "Everything that can happen does happen" by Cox & Forshaw).
There are no discrete "worlds" and "branches" in quantum physics as such. Once two regions in state space are sufficiently separated to no longer significantly influence each other they might be considered split, which makes the answer to your question "yes" by definition.
There are no discrete "worlds" and "branches" in quantum physics as such.
This seems to conflict with references to "many worlds" and "branch points" in other comments, or is the key word "discrete"? In other words, the states are a continuum with markedly varying density so that if you zoom out there is the appearance of branches? I could understand that except for cases like Schroedinger's cat, where there seems to be a pretty clear branch (at the point where the box is opened, i.e. from the point of view of a particular state, if that is the right terminology).
Once two regions in state space are sufficiently separated to no longer significantly influence each other...
From the big bang there are an unimaginably large number of regions in state space each having an unimaginably small influence. It's not obvious, but I can perfectly well believe that the net effect is dominated by the smallness of influence, so I'll take your word for it.
In other words, the states are a continuum with markedly varying density so that if you zoom out there is the appearance of branches?
Yes, but it's still continuous. There's always some influence, it can just get arbitrarily small. I'm unsure if this hypothetically allows MWI to be experimentally confirmed.
(The thesis of mangled-worlds seems to be that, in fact, in some cases that doesn't happen - that is, world A's influence on world B stays large.)
Schroedinger's cat
If it helps, think of half-silvered mirrors. Those are actually symmetric, letting through half the light either way; the trick is that the ambient lighting on the "reflective" side is orders of magnitude brighter, so the light shining through from the dark side is simply washed out.
To apply that to quantum mechanics, consider that the two branches - cat dead and not-dead - can still affect each other, but as if through a 99.9-whatever number of nines-silvered mirror. By the time a divergence gets to human scale, it'll be very, very close to an absolute separation.
Thanks, so to get back to the original question of how to describe the different effects of divergence and convergence in the context of MW, here's how it's seeming to me. (The terminology is probably in need of refinement).
Considering this in terms of the LW-preferred Many Worlds interpretation of quantum mechanics, exact "prediction" is possible in principle but the prediction is of the indexical uncertainty of an array of outcomes. (The indexical uncertainty governs the probability of a particular outcome if one is considered at random.) Whether a process is convergent or divergent on a macro scale makes no difference to the number of states that formally need to be included in the distribution of possible outcomes. However, in the convergent process the cases become so similar that there appears to be only one outcome at the macro scale; whereas in a divergent process the "density of probability" (in the above sense) becomes so vanishingly small for some states that at a macro scale the outcomes appear to split into separate branches. (They have become decoherent.) Any one such branch appears to an observer within that branch to be the only outcome, and so such an observer could not have known what to "expect" - only the probability distribution of what to expect. This can be described as a condition of subjective unpredictability, in the sense that there is no subjective expectation that can be formed before the divergent process which can be reliably expected to coincide with observation after the process.
With the caveat that I'm not a physicist, and don't understand much of the math involved - yes, this seems to be correct.
Though note that quantum physics operates on phase space; if two outcomes are the same in every respect, then they really are the same outcome.
I observe the usual "Well, both explanations offer the exact same experimental outcomes, therefore I can choose what is true as I feel".
Furthermore, thinking in the Copenhagen way will constantly force you to remember to re-include in your calculations the worlds which you thought had 'collapsed', when they come to interfere with your world. It's easier (and a heck of a lot more parsimonious, but for that argument see the QM Sequence) to have your thoughts track reality if you think Many-Worlds is true.
Well, I didn't quite say "choose what is true". What truth means in this context is much debated and is another question. The present question is to understand what is and isn't predictable, and for this purpose I am suggesting that if the experimental outcomes are the same, I won't get the wrong answer by imagining CI to be true, however unparsimonious. If something depends on whether an unstable nucleus decays earlier or later than its half-life, I don't see how the inhabitants of the world where it has decayed early and triggered a tornado (so to speak) will benefit much by being confident of the existence of a world where it decayed late. Or isn't that the point?
You didn't quite say 'choose what is true', I was just pointing out how closely what you wrote matched certain anti-epistemologies :-)
I'm also saying that if you think the other worlds 'collapse' then your intuitions will collide with reality when you have to account for one of those other worlds decohering something you were otherwise expecting not to decohere. But this is relatively minor in this context.
Also, unless I misunderstood you, your last point is not relevant to the truth-value of the claim, which is what we're discussing here, not its social benefit (or whatever).
the truth-value of the claim, which is what we're discussing here
More precisely, it's what you're discussing. (Perhaps you mean I should be!) In the OP I discussed the implications of an infinitely divisible system for heuristic purposes without claiming such a system exists in our universe. Professionally, I use Newtonian mechanics to get the answers I need without believing Einstein was wrong. In other words, I believe true insights can be gained from imperfect accounts of the world (which is just as well, since we may well never have a perfect account). But that doesn't mean I deny the value of worrying away at the known imperfections.
It's easier to think about unpredictability without picturing Many Worlds - e.g. do we say "don't worry about driving too fast because there will be plenty of worlds where we don't kill anybody?"
Yes, the problem is that it is easy to imagine Many Worlds... incorrectly.
We care about the ratio of branches where we survive, and yet, starting with Big Bang, the ratio of branches where we ever existed is almost zero. So, uhm, why exactly should we be okay about this almost zero, but be very careful about not making it even smaller? But this is what we do (before we start imagining Many Worlds).
So for proper thinking perhaps it is better to go with the collapse interpretation. (Until someone starts making incorrect conclusions about mysterious properties of randomness, in which case it is better to think about Many Worlds for a moment.)
Perhaps instead of immediately giving up and concluding that it's impossible to reason correctly with MWI, it would be better to take the Born rule at face value as a predictor of subjective probability.
If someone is able to understand 10% as 10%, then this works. But most people don't. This is why CFAR uses the calibration game.
People buy lottery tickets with chances of winning smaller than one in a million, and invest a lot of emotion in them. Imagine that instead you have a quantum event that happens with probability one in a million. Would the same people feel correctly about it?
In situations like these, I find Many Worlds useful for correcting my intuitions (even if in the specific situation the analogy is incorrect, because the source of randomness is not quantum, etc.). For example, if I had the lottery ticket, I could imagine million tiny slices of my future, and would notice that in the overwhelming majority of them nothing special happened; so I shouldn't waste my time obsessing about the invisible.
Similarly, if the probability of succeeding at something is 10%, a person can just wave their hands and say "whatever, I feel lucky"... or imagine 10 possible futures, with labels: success, failure, failure, failure, failure, failure, failure, failure, failure, failure. (There is no "lucky" in Many Worlds; there are just multiple outcomes.)
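(My own illustrative sketch, not the commenter's: the same "10 labelled futures" picture, done by sampling. The 10% figure is just the example probability from the comment above.)

```python
import random

random.seed(0)

def imagined_futures(p_success, n=10_000):
    """Label each imagined future 'success' or 'failure' with probability p_success."""
    return ["success" if random.random() < p_success else "failure" for _ in range(n)]

futures = imagined_futures(0.10)
print(futures[:10])                               # mostly 'failure', the odd 'success'
print(futures.count("success") / len(futures))    # close to 0.10 -- no "lucky" involved
```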
Specifically, for quantum suicide, imagine a planet-sized graveyard, cities and continents filled with graves, and then zoom in to one continent, one country, one city, one street, and among the graves find a sole survivor with a gigantic heap of gold saying proudly: "We, the inhabitants of this planet, are so incredibly smart and rich! I am sure all people from other planets envy our huge per capita wealth!" Suddenly it does not feel like a good idea when someone proposes that your planet should do the same thing.
even if in the specific situation the analogy is incorrect, because the source of randomness is not quantum, etc.
This seems a rather significant qualification. Why can't we say that the MW interpretation is something that can be applied to any process which we are not in a position to predict? Why is it only properly a description of quantum uncertainty? I suspect many people will answer in terms of the subjective/objective split, but that's tricky terrain.
If it is about quantum uncertainty then, assuming our knowledge of quantum physics is correct, the calculated probabilities will be correct. And there will be no hidden variables, etc.
If instead I just say "the probability of rain tomorrow is 50%", then I may be (1) wrong about the probability, and (2) my model does not include the fact that I or someone else could somehow influence the weather. Therefore modelling subjective probabilities as Many Worlds would provide an unwarranted feeling of reliability.
Having said this, we can use something similar to Many Worlds by describing an 80% probability by saying -- in 10 situations like this, I will on average be right in 8 of them and wrong in 2 of them.
There is just the small difference that it is about "situations like this", not this specific situation. For example, the specific situation may be manipulated. Let's say I feel 80% certainty and someone wants to bet money against me. I may think outside of the box and realise: wait a moment, people usually don't offer me bets, so what is different about this specific situation that this person decided to make a bet? Maybe they have some insider information that I am missing. And by reflecting on this I reduce my certainty. -- While in a quantum physics situation, if my model says that with 80% probability something will happen, and someone offers to make bets, I would say: yes, sure.
Thanks, I think I understand that, though I would put it slightly differently, as follows...
I normally say that probability is not a fact about an event, but a fact about a model of an event, or about our knowledge of an event, because there needs to be an implied population, which depends on a model. When speaking of "situations like this" you are modelling the situation as belonging to a particular class of situations whereas in reality (unlike in models) every situation is unique. For example, I may decide the probability of rain tomorrow is 50% because that is the historic probability for rain where I live in late July. But if I know the current value of the North Atlantic temperature anomaly, I might say that reduces it to 40% - the same event, but additional knowledge about the event and hence a different choice of model with a smaller population (of rainfall data at that place & season with that anomaly) and hence a greater range of uncertainty. Further information could lead to further adjustments until I have a population of 0 previous events "like this" to extrapolate from!
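(An illustrative sketch of this reference-class point, using made-up numbers: the "probability of rain" changes as extra knowledge shrinks the population of comparable past events. The records and the tolerance are purely hypothetical.)

```python
# Hypothetical late-July records: (north_atlantic_anomaly_degC, rained_next_day)
records = [(0.5, False), (0.6, False), (0.4, True), (0.3, False), (0.45, True),
           (-0.1, True), (0.0, True), (-0.3, True), (0.1, False), (-0.2, False)]

def rain_probability(records, anomaly=None, tolerance=0.25):
    """Estimate P(rain) from whichever reference class the model defines."""
    if anomaly is None:
        relevant = records                      # broad class: all late-July days
    else:
        relevant = [r for r in records          # narrower class: similar anomaly only
                    if abs(r[0] - anomaly) <= tolerance]
    if not relevant:
        return None                             # no previous events "like this" left
    return sum(rained for _, rained in relevant) / len(relevant)

print(rain_probability(records))                # 0.5 -- the historic base rate
print(rain_probability(records, anomaly=0.5))   # 0.4 -- conditioned on the anomaly
```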
Now I think what you are saying is that subject to the hypothesis that our knowledge of quantum physics is correct, and in the thought experiment where we are calculating from all the available knowledge about the initial conditions, that is the unique case where there is nothing more to know and no other possible correct model - so in that case the probability is a fact about the event as well. The many worlds provide the population, and the probability is that of the event being present in one of those worlds taken at random.
Incidentally, I'm not sure where my picture of probability fits in the subjective/objective classification. Probabilities of models are objective facts about those models, probabilities of events that involve "bets" about missing facts are subjective, while what I describe is dependent on the subject's knowledge of circumstantial data but free of bets, so I'll call it semi-subjective until somebody tells me otherwise!
Yeah, that's it. In case of quantum event, the probability (or indexical uncertainty) is in the territory; but in both quantum and non-quantum events, there is a probability in the map, just for different reasons.
In both cases we can use Many Worlds as a tool to visualize what those probabilities in the map mean. But in the case of non-quantum events we need to remember that there can be a better map with different probabilities.
In replying initially, I assumed that "indexical uncertainty" was a technical term for a variable that plays the role of probability given that in fact "everything happens" in MW and therefore everything strictly has a probability of 1. However, now I have looked up "indexical uncertainty" and find that it means an observer's uncertainty as to which branch they are in (or more generally, uncertainty about one's position in relation to something even though one has certain knowledge of that something). That being so, I can't see how you can describe it as being in the territory.
Incidentally, I have now added an edit to the quantum section of the OP.
I can't see how you can describe it as being in the territory.
I probably meant that the fact that indexical uncertainty is unavoidable is part of the territory.
You can't make a prediction about what exactly will happen to you, because different things will happen to different versions of you (thus, if you make any prediction of a specific outcome now, some future you will observe it was wrong). This inability to predict a specific outcome feels like probability; it feels like a situation where you don't have perfect knowledge.
So it would be proper to say that "unpredictability of a specific outcome is part of the territory" -- the difference is that one model of quantum physics holds that there is intrinsic randomness involved, while the other model holds that in fact multiple specific outcomes happen (in different branches).
1. Scope
There are two arm-waving views often expressed about the relationship between “determinism/causality” on the one hand and “predetermination/predictability in principle” on the other. The first treats them as essentially interchangeable: what is causally determined from instant to instant is thereby predetermined over any period - the Laplacian view. The second view is that this is a confusion, and they are two quite distinct concepts. What I have never seen thoroughly explored (and therefore propose to make a start on here) is the range of different cases which give rise to different relationships between determinism and predetermination. I will attempt to illustrate that, indeed, determinism is neither a necessary nor a sufficient condition for predetermination in the most general case.
To make the main argument clear, I will relegate various pedantic qualifications, clarifications and comments to [footnotes].
Most of the argument relates to cases of a physically classical, pre-quantum world (which is not as straightforward as often assumed, and certainly not without relevance to the world we experience). The difference that quantum uncertainty makes will be considered briefly at the end.
2. Instantaneous determinism
To start with it is useful to define what exactly we mean by an (instantaneously) deterministic system. In simple terms this means that how the system changes at any instant is fully determined by the state of the system at that instant [1]. This is how physical laws work in a Newtonian universe. The arm-waving argument says that if this is the case, we can derive the state of the system at any future instant by advancing through an infinite number of infinitesimal steps. Since each step is fully determined, the outcome must be as well. However, as it stands this is a mathematical over-simplification. It is well known that an infinite number of infinitesimals is indeterminate as such, and so we have to look at this process more carefully - and this is where there turn out to be significant differences between different cases.
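(A minimal numerical sketch, under the usual idealisations, of what instantaneous determinism means: the rate of change is a function of the current state alone, and the future is obtained by advancing through many small steps. The free-fall example and the crude Euler stepping are just for illustration.)

```python
def euler_step(state, dt, f):
    """One small step: how the system changes depends only on its current state."""
    return [x + dt * dx for x, dx in zip(state, f(state))]

def free_fall(state):
    """State = (height, velocity); its instantaneous rate of change = (velocity, -g)."""
    h, v = state
    return [v, -9.81]

state = [10.0, 0.0]                  # drop a ball from 10 m, at rest
dt = 0.001
for _ in range(1000):                # advance one second in a thousand small steps
    state = euler_step(state, dt, free_fall)
print(state)                         # roughly [5.1, -9.81], i.e. h = 10 - g*t**2/2
```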
3. Convergent and divergent behaviour
To illustrate the first difference that needs to be recognized, consider two simple cases - a snooker ball just about to collide with another snooker ball, and a snooker ball heading towards a pocket. In the first case, a small change in the starting position of the ball (assuming the direction of travel is unchanged) results in a steadily increasing change in the positions at successive instants after impact - that is, neighbouring trajectories diverge. In the second case, a small change in the starting position has no effect on the final position hanging in the pocket: neighbouring trajectories converge. So we can call these “convergent” and “divergent” cases respectively. [1.1]
Now consider what happens if we try to predict the state of some system (e.g. the position of the ball) after a finite time interval. Any attempt to find the starting position will involve a small error. The effect on the accuracy of prediction differs markedly in the two cases. In the convergent case, small initial errors will fade away with time. In the divergent case, by contrast, the error will grow and grow. Of course, if better instruments were available we could reduce the initial error and improve the prediction - but that would also increase the accuracy with which we could check the final error! So the notable fact about this case is that no matter how accurately we know the initial state, we can never predict the final state to the same level of accuracy - despite the perfect instantaneous determinism assumed, the last significant figure that we can measure remains as unpredictable as ever. [2]
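(A toy sketch of the two cases, with invented dynamics: a small measurement error of 0.0001 in the starting position either dies away or grows, depending only on whether neighbouring trajectories converge or diverge.)

```python
def evolve(x, rate, steps=50, dt=0.1):
    """Toy dynamics dx/dt = rate * (x - 1): trajectories converge on the fixed point
    x = 1 when rate < 0, and diverge from it when rate > 0."""
    for _ in range(steps):
        x += dt * rate * (x - 1.0)
    return x

for rate, label in [(-1.0, "convergent"), (+1.0, "divergent")]:
    true_final = evolve(1.2345, rate)        # the "real" trajectory
    meas_final = evolve(1.2346, rate)        # starting from a slightly wrong measurement
    print(label, abs(true_final - meas_final))
# convergent: error shrinks to ~5e-7; divergent: error grows to ~1e-2
```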
One possible objection that might be raised to this conclusion is that with “perfect knowledge” of the initial state, we can predict any subsequent state perfectly. This is philosophically contentious - rather analogous to arguments about what happens when an irresistible force meets an immovable object. For example, philosophers who believe in “operational definitions” may doubt whether there is any operation that could be performed to obtain “the exact initial conditions”. I prefer to follow the mathematical convention that says that exact, perfect, or infinite entities are properly understood as the limiting cases of more mundane entities. On this convention, if the last significant figure of the most accurate measure we can make of an outcome remains unpredictable for any finite degree of accuracy, then we must say that the same is true for “infinite accuracy”.
The conclusion that there is always something unknown about the predicted outcome places a “qualitative upper limit”, so to speak, on the strength of predictability in this case, but we must also recognize a “qualitative lower limit” that is just as important, since in the snooker impact example whatever the accuracy of prediction that is desired after whatever time period, we can always calculate an accuracy of initial measurement that would enable it. (However, as we shall shortly see [3], this does not apply in every case.) The combination of predictability in principle to any degree, with necessary unpredictability to the precision of the best available measurement, might be termed “truncated predictability”.
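(A sketch of the “qualitative lower limit”: if errors grow exponentially with some e-folding time, we can always back-calculate the initial accuracy needed for a given final accuracy and time horizon. The numbers and the exponential-growth assumption are illustrative only.)

```python
import math

def required_initial_accuracy(target_error, horizon, e_folding_time):
    """Initial measurement error that will have grown to target_error after 'horizon',
    assuming error(t) = error(0) * exp(t / e_folding_time)."""
    return target_error * math.exp(-horizon / e_folding_time)

# To predict a position to 1 mm, 10 s ahead, with errors e-folding every 2 s:
print(required_initial_accuracy(1e-3, 10.0, 2.0))   # ~6.7e-6 m: demanding, but finite
```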
4. More general cases
The two elementary cases considered so far illustrate the importance of distinguishing convergent from divergent behaviour, and so provide a useful paradigm to be kept in mind, but of course most real cases are more complicated than this.
To take some examples, a system can have both divergent parts and convergent parts at any instant - such as different balls on the same snooker table; an element whose trajectory is behaving divergently at one instant may behave convergently at another instant; convergent movement along one axis may be accompanied by divergent movement relative to another; and, significantly, divergent behaviour at one scale may be accompanied by convergent behaviour at a different scale. Zoom out from that snooker table, round positions to the nearest metre or so, and the trajectories of all the balls follow that of the adjacent surface of the earth.
There is also the possibility that a system can be potentially divergent at all times and places. A famous case of such behaviour is the chaotic behaviour of the atmosphere, first clearly understood by Edward Lorenz in 1961. This story comes in two parts, the second apparently much less well known than the first.
5. Chaotic case: discrete
The equations normally used to describe the physical behaviour of the atmosphere formally describe a continuum, an infinitely divisible fluid. As there is no algebraic “solution” to these equations, approximate solutions have to be found numerically, which in turn require the equations to be “discretised”, that is adapted to describe the behaviour at, or averaged around, a suitably large number of discrete points.
The well-known part of Lorenz’s work [4] arose from an accidental observation, that a very small change in the rounding of the values at the start of a numerical simulation led in due course to an entirely different “forecast”. Thus this is a case of divergent trajectories from any starting point, or “sensitivity to initial conditions” as it has come to be known.
The part of “chaos theory” that grew out of this initial insight quantifies how trajectories diverge from any starting point: they diverge exponentially, with a time constant for the particular case known as the Kolmogorov constant [5] (more commonly referred to today as the Lyapunov time). Thus we can still say, as we said for the snooker ball, that whatever the accuracy of prediction that is desired after whatever time period, we can always calculate an accuracy of initial measurement that would enable it.
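(A sketch of sensitivity to initial conditions using the Lorenz ’63 model, which is a drastically simplified convection model rather than the full atmospheric equations; the parameters are the standard published ones and the crude Euler integration is only for illustration. The growth of the separation between the two runs exhibits roughly the exponential divergence, and hence the e-folding time, referred to above.)

```python
import math

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz '63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-9, 1.0, 20.0)            # identical except in the ninth decimal place
for step in range(20000):              # integrate 20 time units
    a, b = lorenz_step(a), lorenz_step(b)
    if (step + 1) % 5000 == 0:
        # separation between the two trajectories grows roughly exponentially
        print((step + 1) * 0.001, math.dist(a, b))
```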
6. Chaotic case: continuum
Other researchers might have dismissed the initial discovery of sensitivity to initial conditions as an artefact of the computation, but Lorenz realised that even if the computation had been perfect, exactly the same consequences would flow from disturbances in the fluid in the gaps between the discrete points of the numerical model. This is often called the “Butterfly Effect” because of a conference editor's colourful summary that “the beating of a butterfly’s wings in Brazil could cause a tornado in Texas”.
It is important to note that the Butterfly Effect is not strictly the same as “Sensitivity to Initial Conditions” as is often reported [6], although they are closely related. Sensitivity to Initial Conditions is an attribute of some discretised numerical models. The Butterfly Effect describes an attribute of the equations describing a continuous fluid, so is better described as “sensitivity to disturbances of minimal extent”, or in practice, sensitivity to what falls between the discrete points modelled.
Since, as noted above, there is no algebraic solution to the continuous equations, the only way to establish the divergent characteristics of the equations themselves is to repeatedly reduce the scale of discretisation (the typical distance between the points on the grid of measurements) and observe the trend. In fact, this was done for a very practical reason: to find out how much benefit would be obtained, in terms of the durability of the forecast [7], by providing more weather stations. The result was highly significant: each doubling of the number of stations increased the durability of the forecast by a smaller amount, so that (by extrapolation) as the number of imaginary weather stations was increased without limit, the forecast durability of the model converged to a finite value[8]. Thus, beyond this time limit, the equations that we use to describe the atmosphere give indeterminate results, however much detail we have about the initial conditions. [9]
Readers will doubtless have noticed that this result does not strictly apply to the earth’s atmosphere, because that is not the infinitely divisible fluid that the equations assumed (and a butterfly is likewise finitely divisible). Nevertheless, the fact that there are perfectly well-formed, familiar equations which by their nature have unpredictable outcomes after a finite time interval vividly exposes the difference between determinism and predetermination.
With hindsight, the diminishing returns in forecast durability from refining the scale of discretisation is not too surprising: it is much quicker for a disturbance on a 1 km scale to have effects on a 2 km scale than for a disturbance on a 100 km scale to have effects on a 200 km scale.
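(A toy sketch of why the returns diminish: suppose, purely for illustration, that an error at scale l takes c * l**q hours to contaminate scale 2l, with q > 0. Summing over successive doublings, resolving ever smaller scales adds ever smaller increments to the forecast durability, which therefore converges to a finite limit. The exponent and constants here are invented, not Lorenz’s actual figures.)

```python
def forecast_durability(smallest_km, largest_km=1000.0, q=2.0 / 3.0, c=1.0):
    """Total time for an error at the smallest resolved scale to cascade upward,
    doubling in scale at each step, until it contaminates the largest scale."""
    total, scale = 0.0, largest_km / 2.0
    while scale >= smallest_km:
        total += c * scale ** q        # hypothetical time for scale -> 2 * scale
        scale /= 2.0
    return total

for smallest in [100.0, 10.0, 1.0, 0.001]:
    print(smallest, round(forecast_durability(smallest), 1))
# ~128, ~160, ~168, ~170: each refinement of the grid buys less and less extra durability
```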
7. Consequences of quantum uncertainty
It is often claimed that the Uncertainty Principle of quantum mechanics [10] makes the future unpredictable [11], but in terms of the above analysis this is far from the whole story.
The effect of quantum mechanics is that at the scale of fundamental particles [12] the laws of physical causality are probabilistic. As a consequence, there is certainly no basis, for example, to predict whether an unstable nucleus will disintegrate before or after the expiry of its half-life.
However, in the case of a convergent process at ordinary scales, the unpredictability at quantum scale is immaterial, and at the scale of interest predictability continues to hold sway. The snooker ball finishes up at the bottom of the pocket whatever the energy levels of its constituent electrons. [13]
It is in the case of divergent processes that quantum effects can make for unpredictability at large scales. In the case of the atmosphere, for example, the source of that tornado in Texas could be a cosmic ray in Colombia, and cosmic radiation is strictly non-deterministic. The atmosphere may not be the infinitely divisible fluid considered by Lorenz, but a molecular fluid subject to random quantum processes has just the same lack of predictability.
[EDIT] How does this look in terms of the LW-preferred Many Worlds interpretation of quantum mechanics?[14] In this framework, exact "objective prediction" is possible in principle but the prediction is of an ever-growing array of equally real states. We can speak of the "probability" of a particular outcome in the sense of the probability of that outcome being present in any state chosen at random from the set. In a convergent process the cases become so similar that there appears to be only one outcome at the macro scale (despite continued differences on the micro scale); whereas in a divergent process the "density of probability" (in the above sense) becomes so vanishingly small for some states that at a macro scale the outcomes appear to split into separate branches. (They have become decoherent.) Any one such branch appears to an observer within that branch to be the only outcome, and so such an observer could not have known what to "expect" - only the probability distribution of what to expect. This can be described as a condition of subjective unpredictability, in the sense that there is no subjective expectation that can be formed before the divergent process which can be reliably expected to coincide with an observation made after the process. [END of EDIT]
8. Conclusions
What has emerged from this review of different cases, it seems to me, is that it is the convergent/divergent dichotomy that has the greatest effect on the predictability of a system’s behaviour, not the deterministic/quantised dichotomy at subatomic scales.
More particularly, in short-hand:-
Convergent + deterministic => full predictability
Convergent + quantised => predictability at all super-atomic scales
Divergent + deterministic + discrete => “truncated predictability”
Divergent + deterministic + continuous => unpredictability
[EDIT] Divergent + quantised => objective predictability of the multiverse but subjective unpredictability
Footnotes
1. The “state” may already include time derivatives of course, and in the case of a continuum, the state includes spatial gradients of all relevant properties.
1.1 For simplicity I have ignored the case between the two where neighbouring trajectories are parallel. It should be obvious how the argument applies to this case. Convergence/divergence is clearly related to (in)stability, and less directly to other properties such as (non)-linearity and (a)periodicity, but as convergence defines the characteristic that matters in the present context it seems better to focus on that.
2. In referring to a “significant figure” I am of course assuming that decimal notation is used, and that the initial error has diverged by at least a factor of 10.
3. In section 6.
4. For example, see Gleick, “Chaos”, "The Butterfly Effect" chapter.
5. My source for this statement is a contribution by Eric Kvaalen to the New Scientist comment pages.
6. E.g. by Gleick or Wikipedia.
7. By durability I mean the period over which the required degree of accuracy is maintained.
8. This account is based on my recollection, and notes made at the time, of an article in New Scientist, volume 42, p290. If anybody has access to this or knows of an equivalent source available on-line, I would be interested to hear!
9. I am referring to predictions of the conditions at particular locations and times. It is, of course, possible to predict average conditions over an area on a probabilistic basis, whether based on seasonal data, or the position of the jetstream etc. These are further examples of how divergence at one scale can be accompanied by something nearer to convergence on another scale.
10. I am using “quantum mechanics” as a generic term to include its later derivatives such as quantum chromodynamics. As far as I understand it these later developments do not affect the points made here. However, this is certainly well outside my professional expertise in aspects of Newtonian mechanics, so I will gladly stand corrected by more specialist contributors!
11. E.g. by Karl Popper in an appendix to The Poverty of Historicism.
12. To be pedantic, I’m aware that this also applies to greater scales, but to a vanishingly small extent.
13. In such cases we could perhaps say that predictability is effectively an “emergent property” that is not present in the reductionist laws of the ultimate ingredients but only appears in the solution space of large scale aggregates.
14. Thanks to the contributors of the comments below as at 30 July 2013 which I have tried to take into account. The online preview of "The Emergent Multiverse: Quantum Theory According to the Everett Interpretation" by David Wallace has also been helpful to understanding the implications of Many Worlds.