
However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all.

The words "explanatory theory" seem to me to have a lot of fuzziness hiding behind them. But to the extent that "the sun is powered by nuclear fusion" is an explanatory theory I would say that the proposition ~T is just the union of many explanatory theories: "the sun is powered by oxidisation", "the sun is powered by gravitational collapse", and so on for all explanatory theories except "nuclear fusion".

Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’. If T had an amount q of that, then ~T would have none at all, not 1-q as the probability calculus would require if q were a probability.

There are lots of negative facts that are worth knowing and that scientists did good work to discover. When Michelson and Morley discovered that light did not travel through a luminiferous aether, that was a fact worth knowing, and it led to the discovery of special relativity. So even if you don't call ~T an explanatory theory, it seems like it still has a lot of "the property that science strives to maximise".

Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.

A Bayesian might instead define theories T₁' = "quantum theory leads to approximately correct results in the following circumstances ..." and T₂' = "relativity leads to approximately correct results in the following circumstances ...". Then T₁' and T₂' would both have a high probability and be worth knowing, and so would their conjunction. The original conjunction, T₁ & T₂, would mean "both quantum theory and relativity are exactly true". This, of course, is provably false, and so has probability 0.
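One way to make this reply's arithmetic concrete is the Fréchet inequality, which guarantees that the conjunction of two individually probable statements is itself probable. A minimal sketch; the credences below are illustrative assumptions, not values from the discussion:

```python
# Frechet lower bound: P(A & B) >= P(A) + P(B) - 1, for any A and B.
# Illustrative credences for T1' and T2' (assumed, not from the text).
p1 = 0.95  # credence in T1': quantum theory approximately correct in ...
p2 = 0.90  # credence in T2': relativity approximately correct in ...

# However the two statements are correlated, the probability of their
# conjunction cannot fall below this bound.
conjunction_lower_bound = max(0.0, p1 + p2 - 1.0)
assert abs(conjunction_lower_bound - 0.85) < 1e-9
```

So unlike T₁ & T₂, the conjunction T₁' & T₂' is not forced to probability 0; with high individual credences it stays high.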

Furthermore if we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics.

Right, right. The statement T₁ is false; but the statement T₁' is true.

What science really seeks to ‘maximise’ (or rather, create) is explanatory power.

Does Deutsch write anywhere about what a precise definition of "explanation" would be?

Does Deutsch write anywhere about what a precise definition of "explanation" would be?

Yes, in BoI. http://beginningofinfinity.com/books

In short, explanations typically talk about why/how/because.

The words "explanatory theory" seem to me to have a lot of fuzziness hiding behind them. But to the extent that "the sun is powered by nuclear fusion" is an explanatory theory I would say that the proposition ~T is just the union of many explanatory theories: "the sun is powered by oxidisation", "the sun is powered by gravitational collapse", and so on for all explanatory theories except "nuclear fusion".

Unless you're claiming that non-explanatory theories don't exist at all, ~T includes both explanations and non-explanations. It doesn't consist of a union of explanations alone.

A Bayesian might instead define theories T₁' = "quantum theory leads to approximately correct results in the following circumstances ..."

You've changed it to an instrumentalist theory, which focuses on prediction instead of explanation. Deutsch refutes instrumentalism in his first book, FoR, also at the link above.

You've changed it to an instrumentalist theory, which focuses on prediction instead of explanation.

How so? I think it's still an explanatory theory, it just explains 99% of something instead of 100%.

Where's the explanation? What do you think an explanation is? You said the theory gets "approximately correct results" in some circumstances – doesn't that mean making approximately correct predictions?

(Epistemic status: sufficiently abstract that I can't be very confident without more familiarity with the topic)

(1) the objective of science is, or should be, to increase our ‘credence’ for true theories

I would suggest that it should also decrease our credence in false theories, and allow us to correctly estimate the likelihood of conjectures not yet proven or disproved.

However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all.

Well, no - it's a set of explanations. A very large set, consisting of every explanation other than ‘the sun is powered by nuclear fusion’, but smaller than T | ~T, and therefore somewhat useful, however slightly.

Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’.

Per the first line, we are supposing this property to be 'our credence in true theories'.

If T had an amount q of that, then ~T would have none at all, not 1-q as the probability calculus would require if q were a probability.

All else being equal, if we come to believe T, our credence in true theories will be higher by 1 - p, where p is our previous credence in T. If we come to believe ~T, our credence in true theories will be lower by p than if we had remained uncertain.

I'm not sure that it makes sense in this context to assign a value of ‘the property that science strives to maximise’ to a statement. It's not a property of statements alone but of our belief in them.

If you want to assign a value of q to near-absolute confidence in T, I would say that it's 1 - ϵ. Thus ~T has near-zero value as far as the objective of science is concerned, namely 1 - q = ϵ, just as the laws of probability demand.
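The bookkeeping in the last two paragraphs can be sketched numerically; the prior p below is an assumed illustration, not a value from the thread:

```python
# Suppose T is in fact true and our prior credence in T is p.
p = 0.7  # assumed prior credence in T (illustrative)

# Coming to fully believe T raises our credence in the true theory
# from p to 1 -- a gain of 1 - p.
gain_if_we_believe_T = 1.0 - p

# Coming to believe ~T instead drops it from p to 0 -- a loss of p
# relative to staying uncertain.
loss_if_we_believe_not_T = p

# Near-absolute confidence q = 1 - eps in T leaves exactly
# eps = 1 - q for ~T, as the probability calculus requires.
eps = 1e-9
q = 1.0 - eps
assert abs((1.0 - q) - eps) < 1e-15
```

On this reading, 'the property science strives to maximise' attaches to our credences in statements, not to the statements themselves, which is the point made above.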

Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.

(Assuming for the sake of example that quantum theory and relativity are mutually inconsistent, but both likely,) T₁ & T₂ is provably false, and indeed the idea that quantum theory and relativity are both true is nonsense. T₁ | T₂, on the other hand, embodies some understanding of the world and is definitely better than nothing.

Well, no - it's a set of explanations. A very large set, consisting of every explanation other than ‘the sun is powered by nuclear fusion’, but smaller than T | ~T, and therefore somewhat useful, however slightly.

Infinity minus one isn't smaller than infinity. That's not useful in that way.

It may be useful in some way. But just ruling a single thing out, when dealing with infinity, isn't a road to progress.

indeed the idea that quantum theory and relativity are both true is nonsense

He's saying we use them both, and that has value, even though we know there must be some mistake somewhere. Saying "or" misrepresents the current situation. Both of them seem to be partly right. The situation (our current understanding which has value) looks nothing like we'll end up keeping one and rejecting the other.

Infinity minus one isn't smaller than infinity. That's not useful in that way.

The thing being added or subtracted is not the mere number of hypotheses, but a measure of the likelihood of those hypotheses. We might suppose an infinitude of mutually exclusive theories of the world, but most of them are extremely unlikely - for any degree of unlikeliness, there are an infinity of theories less likely than that! A randomly-chosen theory is so unlikely to be true, that if you add up the likelihoods of every single theory, they add up to a number less than infinity.

It is for this reason that it is important that we divide our hypotheses between something likely and everything else. "Everything else" contains infinite possibilities, but only finite likelihood.
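The convergence claim here is just a convergent series; a toy sketch, with an assumed prior of 2^-(n+1) on the n-th theory:

```python
# Infinitely many mutually exclusive theories, the n-th assigned prior
# probability 2**-(n+1). The count is infinite; the total mass is not.
def prior(n):
    return 2.0 ** -(n + 1)

partial_mass = sum(prior(n) for n in range(50))
assert abs(partial_mass - 1.0) < 1e-12  # converges to 1, not infinity

# For any likelihood threshold, infinitely many theories lie below it,
# yet ruling out a single likely theory still removes real mass:
mass_removed = prior(0)
assert mass_removed == 0.5
```

Any prior with a convergent tail makes the same point; the geometric one is just the simplest to check.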

Well, no - it's a set of explanations. A very large set, consisting of every explanation other than ‘the sun is powered by nuclear fusion’, but smaller than T | ~T, and therefore somewhat useful, however slightly.

This was talking about set sizes, which is what I replied about.

You can't quantify your fallibility in the sense of knowing how likely you are to be mistaken in an unexpected way. That's not possible.

He's saying we use them both, and that has value, even though we know there must be some mistake somewhere. Saying "or" misrepresents the current situation. Both of them seem to be partly right. The situation (our current understanding which has value) looks nothing like we'll end up keeping one and rejecting the other.

I haven't much knowledge of physics, and thought that he was discussing the idea of two mutually exclusive theories which we use both of. From what you're saying, it sounds more like the crucial point is that they are presumably false, but still useful. Is that a good description of the situation?

As far as partly right theories that have value: if we know quantum theory is not completely right, then we've ruled out the hypothesis 'quantum mechanics' and are now dealing with the hypothesis space of theories relevantly similar to quantum theory. So I agree that a theory known to be inaccurate in some cases can be useful, but by treating it as a piece of evidence towards the truth, which is rather different from how we treated it when we thought it could be true in its own right.

Relativity and QM contradict but we don't know which is mistaken or why. Either one, individually, could be true in its own right.

Relativity and QM contradict but we don't know which is mistaken or why. Either one, individually, could be true in its own right.

The situation (our current understanding which has value) looks nothing like we'll end up keeping one and rejecting the other.

I don't see how these two statements can be consistent. If either one, individually, could be true in its own right, then why wouldn't we end up keeping one? If they contradict, then why wouldn't we reject the other?

I expect we'll keep parts of both.

As far as partly right theories that have value: if we know quantum theory is not completely right, then we've ruled out the hypothesis 'quantum theory' and are now dealing with the hypothesis space of theories that share some parts with quantum theory.

T in this case is not atomic; it is itself a conjunction of a lot of statements. So I agree that a theory known to be inaccurate in some cases can be useful, in that it may contain some true components as well as some untrue ones. But this is rather different from how we treated it when we thought it could be true in its own right.

In general, I agree that there are certain ideas in science that aren't propositions in the Bayesian sense, and that treating them as if they were is a serious mistake. I don't think that this means that there's something wrong with the probability calculus, however.

As far as partly right theories that have value: if we know quantum theory is not completely right, then

But, again, we don't know that. QM could be right.

QM could be right.

Relativity and QM contradict

The situation (our current understanding which has value) looks nothing like we'll end up keeping one and rejecting the other.

I don't see how these statements can be consistent.

...if relativity and QM contradict, and QM turns out to be right, I'd expect us to reject relativity. Do you agree?

But just ruling a single thing out, when dealing with infinity, isn't a road to progress.

We don't deal with infinities. When asked "what is the sun powered by?", humans formulate a finite, typically small, set of hypotheses, e.g.

  • By nuclear fusion
  • By a burning woodpile
  • By elven magic
  • By something else

Ruling even a single thing out from this small set is quite useful.

If you manage to rule out everything but something else, that's the most exciting time in science because you're now in uncharted territory (where every true scientist wants to be) and might be on a verge of a major breakthrough.
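The renormalization implied by "ruling one out" can be sketched directly; the priors below are made-up numbers for illustration:

```python
# A small hypothesis set with a catch-all, as in the comment above.
priors = {
    "nuclear fusion": 0.90,
    "burning woodpile": 0.04,
    "elven magic": 0.01,
    "something else": 0.05,
}

def rule_out(dist, hypothesis):
    """Condition on a hypothesis being false: drop it, renormalize."""
    rest = {h: p for h, p in dist.items() if h != hypothesis}
    total = sum(rest.values())
    return {h: p / total for h, p in rest.items()}

# Ruling out even one option shifts real probability mass to the rest.
updated = rule_out(priors, "burning woodpile")
assert abs(sum(updated.values()) - 1.0) < 1e-12

# Rule out everything but "something else": all mass lands on
# uncharted territory -- the "most exciting time in science".
only_else = priors
for h in ("nuclear fusion", "burning woodpile", "elven magic"):
    only_else = rule_out(only_else, h)
assert abs(only_else["something else"] - 1.0) < 1e-12
```

This is just conditioning on the negation of one hypothesis: the eliminated option's mass is redistributed in proportion to the survivors' priors.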

DD:

However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all.

Ideas don't negate to all the alternatives humans are currently interested in. That isn't how logic works.

It is not an explanation, but it is a (potentially) useful statement which leads you closer to an explanation. And I don't see any logical problems here (notice the something else alternative).

In any case, the underlying issue is hypothesis generation, and any purely Bayesian view of science is necessarily incomplete because St. Bayes says absolutely nothing about how to generate hypotheses.

I agree that ruling statements like you talk about out is useful – I just don't think it's useful in the Bayesian model. The use is due to the Critical Rationalist approach.

By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories [...]

Phew, I thought for a moment he was about to refute the actual Bayesian philosophy of science...

Snark aside, as others have noticed, point 1 is highly problematic. From a broader perspective, if Bayesian probability has to inform the practice of science, then a scientist should be wary of the concept of truth. Once a model has reached probability 1, it becomes an unwieldy object: it cannot be swayed by further, contrary evidence, and if we ever encounter an impossible piece of data (impossible for that model), the whole system breaks down. It is then considered good practice to always hedge models with a small probability for 'unknown unknowns', even with our most certain beliefs. After all, humans are finite and the universe is much, much bigger.

On the other hand, I don't think it's fair to say that the objective of science is either to "just explain" or "just predict". Both views are unified and expanded by the Bayesian perspective: "explanation", as far as the concept can be modelled mathematically, is fitness to data and low complexity. On the other hand, predictive power is fitness to future data, which can only be checked once the future data had been acquired. What is one man's prediction can be another man's explanation.

"explanation", as far as the concept can be modelled mathematically, is fitness to data and low complexity

Nope. To explain, e.g. to describe "why" something happened, is to talk about causes and effects. At least that's the way people use that word in practice.

Prediction and explanation are very very different.

To explain, e.g. to describe "why" something happened, is to talk about causes and effects.

I would still say that cause and effect is a subset of the kind of models that are used in statistics. A case in point is Bayesian networks, which can accommodate both probabilistic and causal relations.
I'm aware that Judea Pearl and probably others reverse the picture, and think that C&E are the real relations, which are only approximated in our mind as probabilistic relations. On that, I would say that quantum mechanics seems to point out that there is something fundamentally undetermined about our relations with cause and effect. Also, causal relations are very useful in physics, but one may want to use other models where physics is not especially relevant.
From what one may call an "instrumentalist" point of view, time is a dimension so universal that any model can compress information by incorporating it, but it is not necessarily so, as relativity shows us: indeed, general relativity shows us you can compress a lot of information by not explicitly talking about time, and thus by sidestepping clean causal relations (what is a cause in one reference frame is an effect in another).

Prediction and explanation are very very different.

I'm not aware of a theory or a model that uses vastly different entities to explain and to predict. The typical case of a physical law posits an ontology governed by a stable relation, thus using the very same pieces to explain the past and predict the future. Besides, such a model would be very difficult to tune: any set of data can be partitioned in any way you like between training and test, and it seems odd that a model should be so dependent on the experimenter's intent.

I would still say that cause and effect is a subset of the kind of models that are used in statistics.

You would be wrong, then. The subset relation is the other way around. Bayesian networks are not causal models, they are statistical independence models.

Compressing information has nothing to do with causality. No experimental scientist talks about causality like that, in any field. There is a big literature on something called "compressed sensing," for example, but that literature (correctly) does not generally make claims about causality.

I'm not aware of a theory or a model that uses vastly different entities to explain and to predict.

I am.

You can't tune (e.g. trade off bias/variance properly) causal models in any kind of straightforward way, because the parameter of interest is never observed, unlike in standard regression models. Causal inference is a type of unsupervised problem, unless you have experimental data.

Rather than arguing with me about this, I suggest a more productive use of your time would be to just read some stuff on causal inference. You are implicitly smuggling in some definition you like that nobody uses.

(1) the objective of science is, or should be, to increase our ‘credence’ for true theories

Well, no. Theories are maps, and are by necessity simpler than the territory (the universe is its own best model). There is no such thing as a "true" theory. There are only theories which predict a larger or smaller subset of future states better or worse than others.

I think this neglects the idea of "physical law," which says that theories can be good when they capture the dynamics and building-blocks of the world simply, even if they are quite ignorant about the complex initial conditions of the world.

Sure. This is true of all maps and models. As simple as possible, but no simpler.

That simplicity ALWAYS comes with a loss of fidelity to the actual state of the universe.

I disagree with viewing theories as predictive. Deutsch calls that instrumentalism and refutes it in his book, The Fabric of Reality, in chapter 1. The basic problem is that predictions aren't explanations about what's going on (the causality behind the prediction) or why.

Yet some philosophers — and even some scientists — disparage the role of explanation in science. To them, the basic purpose of a scientific theory is not to explain anything, but to predict the outcomes of experiments: its entire content lies in its predictive formulae. They consider that any consistent explanation that a theory may give for its predictions is as good as any other — or as good as no explanation at all — so long as the predictions are true. This view is called instrumentalism (because it says that a theory is no more than an ‘instrument’ for making predictions). To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations is a fallacy and a conceit. They do not see how anything a scientific theory may say beyond predicting the outcomes of experiments can be more than empty words. Explanations, in particular, they regard as mere psychological props: a sort of fiction which we incorporate in theories to make them more easily remembered and entertaining.

(Deutsch goes on at too much length to paste.)

To instrumentalists, the idea that science can enable us to understand the underlying reality that accounts for our observations is a fallacy and a conceit

"understand" is doing a lot of work in this. What does it mean beyond "ability to make predictions conditional on future actions"?

Teaching you things like what "understand" means is a large task. Are you willing to put in effort by e.g. reading a book chapter, and answering questions to identify what you do and don't already understand about the matter?

Almost certainly not. I take this as confirmation that “understand” is the key misleadingly-simple word in your quote.

Not at all. It means the ability to explain, not just say what will happen.

When you say "ability to explain", I hear "communicate a model that says what will happen (under some set of future conditions/actions)".

There is no such thing as "why" in the actual sequence of states of matter in the universe. It just is. Any causality is in the models we use to predict future states. Which is really useful but not "truth".

I hear "communicate a model that says what will happen (under some set of future conditions/actions)".

You're hearing wrong.

It's not, and I don't know why you're making a stink about it. I think you just wanted indirect evidence to convince yourself to stop conversing and be able to blame me in your head.