I think that the idea of ‘adding up to normality’ is incoherent, but maybe I don’t understand it. There is a rule of thumb that, in general, a theory or explanation should ‘save the phenomena’ as much as possible. But Egan’s law is presented in the sequences as something stricter than a rule of thumb that admits exceptions. I’m going to try to explain and formalize Egan’s law as I understand it, so that once it’s been made clear, we can talk about how we would argue for it.
If a theory adds up to normality in the strict sense, then there are no true sentences in normal language which lack true counterparts in the theory. Thus, if it is true to say that the apple is green, a theory which adds up to normality will contain a sentence which describes the same phenomenon as the normal-language sentence and is true just when the normal-language sentence is true (and false when it is false). For example: if an apple is green, then light of such and such wavelength is predominantly reflected from its surface while other visible wavelengths are predominantly absorbed. Let’s call this the Egan property of a theory. A theory would fail to add up to normality either if it denied the truth of true sentences in normal language (e.g. ‘the apple isn’t really green’) or if it could make nothing of the phenomena of normal language at all (e.g. if it implied that nothing really has color).
Et ≡ (∀a ∈ n)(∃α ∈ t)(a ↔ α)
Here t is a theoretical language and ‘α’ is a sentence within it; n is the normal language and ‘a’ is a sentence within it. E is the Egan property. Now that we’ve defined the Egan property of a theory, we can move on to Egan’s law.
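As a toy sketch of the Egan property (my own illustration, not part of the formalism above: it assumes we can model each language as a finite map from sentences to truth values, plus a counterpart mapping between the languages):

```python
# Toy sketch: model each language as a dict from sentences to truth values,
# and the pairing between languages as a counterpart mapping.
# The sentences and mappings here are illustrative assumptions only.

def has_egan_property(normal, theory, counterpart):
    """Et: every normal-language sentence a has a counterpart alpha in the
    theory such that a and alpha share a truth value (a <-> alpha)."""
    return all(
        counterpart.get(a) in theory and normal[a] == theory[counterpart.get(a)]
        for a in normal
    )

normal = {"the apple is green": True}
theory = {"light of ~550nm is predominantly reflected from the apple": True}
counterpart = {"the apple is green":
               "light of ~550nm is predominantly reflected from the apple"}

print(has_egan_property(normal, theory, counterpart))  # True

# A theory that denies the normal-language truth lacks the property:
denying = {"light of ~550nm is predominantly reflected from the apple": False}
print(has_egan_property(normal, denying, counterpart))  # False
```

The failing case corresponds to a theory that says ‘the apple isn’t really green’; a theory with no counterpart sentence at all would fail the membership check instead, matching the two failure modes described above.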
The way Egan’s law is articulated in the sequences, it seems to be an a priori necessary but insufficient condition on the truth of a theory. So it is necessary that, if a theory is true, it has the Egan property.
If α1, α2, α3, ..., then Et (i.e. if all of t’s sentences are true, then t has the Egan property).
Or alternatively: If t is true, then Et.
That’s Egan’s law, so far as I understand it. Now, how do we argue for it? There’s an inviting, but I think troublesome, Tarskian way to argue for Egan’s law. Tarski’s semantic definition of truth is such that some sentence β is true in language L if and only if b, where b is a sentence in a metalanguage. Following this, we could say that for any theory t to be true, all its sentences α must be true, and what it means for any α to be true is that a, where a is a sentence in the metalanguage we call normal language. But this would mean that a and α are strictly translations of one another in two different languages. If a theory is going to be explanatory of phenomena, then sentences like “light of such and such wavelength is predominantly reflected from the apple’s surface while other visible wavelengths are predominantly absorbed” have to have more content than “the apple is green”. If they mean the same thing, as the paired sentences in Tarski’s definition of truth must, then theories can’t do any explaining.
So how else can we argue for Egan’s law?
I don't follow why that distinction is important in this case: Egan's law (combined with some observation) can act both as a falsification criterion for some theories and as a way to adjust the probability of others.
Take three theories regarding your stick:
A: There is no real stick, only the illusion of a stick. (Roughly, external-world skepticism)
B: There is a real stick, and it did not bend but appeared to bend. (Physical realism regarding the properties of sticks and optics)
C: You never observed a stick bending.
C is falsified, given Egan's law and the observation. A is not falsified, but A doesn't predict that you'd see a stick bend in water. B does predict this, since B incorporates optics and physical theories of light. If A violated Egan's law, it would violate the "necessary condition" and be falsified. It doesn't; A is just rendered less probable while B is made more probable.
Remember that 'falsify' really only means 'renders highly improbable.' So there is no contradiction in saying that Egan's law will sometimes render something highly improbable (reducing its probability by several sigma) and will sometimes only render something slightly less probable. Even the probability that a claim A is true given that it is a logical contradiction B is, by Bayes' theorem, the prior probability of A, times the probability of B given A, divided by the probability of B. That P(B|A) is very near zero in this case makes P(A|B) vanishingly small, but it does not make it exactly zero.
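The Bayes arithmetic behind 'falsify means renders highly improbable' can be made concrete with a quick sketch (the specific numbers below are made up purely for illustration):

```python
# A = "the claim is true"; B = "the claim is a logical contradiction".
# All three probabilities below are illustrative assumptions.
p_A = 0.5            # prior probability that the claim is true
p_B_given_A = 1e-9   # chance a true claim is nonetheless judged a contradiction
p_B = 0.01           # marginal probability of being a contradiction

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(p_A_given_B)  # tiny, but strictly greater than zero
```

On these numbers the posterior is driven very close to zero but never exactly to zero, which is all that 'falsified' amounts to here.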
I like your analysis here, thanks. I remain unsure about what it means to combine Egan's law with some observation, as opposed to just testing a theory against an observation. Does Egan's law mean nothing more than 'theories ought to be tested against past, as well as future, observations'? I admit I find this hard to disagree with, but I'm not sure what it has to do with adding up to normality. Again, thanks for the excellent explanation.