I am looking for examples of theories that we now know to be correct, but that would have been unfalsifiable in a slightly different context --- e.g., in the past, or in hypothetical scenarios. (Unsurprisingly, this is motivated by the unfalsifiability of some claims around AI X-risk. For more context, see my sequence on Formalising Catastrophic Goodhart's Law.)

My best example so far is Newton's theory of gravity and the hypothetical scenario where we live in an underground bunker with no knowledge of the outside world: We would probably first come up with the theory that "things just fall down". If we look around, no objects seem to be attracting each other, as Newton would have us believe. Moreover, Newton's theory is arguably weirder and more complex. And Newton's theory doesn't make any experimental predictions that we could realistically verify from inside the bunker.
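
A quick back-of-the-envelope calculation makes the bunker intuition concrete (my own illustrative sketch; the 10 kg masses and half-metre separation are arbitrary choices, not anything from the post):

```python
# Sketch: how does the Newtonian attraction between two everyday objects
# compare to the "things just fall down" force a bunker-dweller observes?
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
g = 9.81        # downward acceleration observed in the bunker, m/s^2

m1 = m2 = 10.0  # two 10 kg objects (arbitrary)
r = 0.5         # half a metre apart (arbitrary)

F_between = G * m1 * m2 / r**2  # mutual attraction, ~2.7e-8 N
F_down = m1 * g                 # weight of one object, ~98 N

print(f"attraction between the objects: {F_between:.1e} N")
print(f"weight of one object:           {F_down:.1e} N")
print(f"ratio: {F_down / F_between:.1e}")
# ~4e9: the mutual attraction is billions of times weaker than the familiar downward pull.
```

So without Cavendish-style equipment, the attraction between objects is far below anything the bunker-dwellers would notice in everyday life.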

Specifically, I am looking for examples of phenomena with the following properties (examples in footnotes):

  1. The phenomenon is unambiguous and, in the present day, virtually nobody[1] has any doubt about it being true.[2] Bonus points if the phenomenon is something that happens very robustly, rather than merely something that can happen.[3]
  2. There is some historical or hypothetical scenario S such that the phenomenon obviously never occurs in S or its past. Bonus points for plausibility.[4]
  3. In the scenario S, it is, obviously, practically impossible to exhibit the phenomenon empirically.[5]
  4. In the scenario S, it is, obviously, practically impossible to gain evidence on the phenomenon through formal analysis (which includes mathematical modelling and the use of computers). Bonus points if the reason for this is that we know some "first principles" from which the phenomenon might be derived, but doing the actual derivation is obviously too complex (as opposed to requiring a clever idea).[6]
  1. ^

    Sure, there are always crazy people, creationists, the Lizardman constant, etc. But hopefully the examples make it clear enough what I am after.

  2. ^

    Examples of "unambiguous and widely agreed-upon" phenomena are: "The Earth orbits the sun", "physics and chemistry can give rise to complex life", or "eating lots of sweets is not good for your health". But not "communism is bad", which is too vague, or "faster-than-light travel is impossible", which is not obvious to everybody.

  3. ^

    Examples of phenomena that happen very robustly are "sufficiently dense things form black holes", "stars go out", and "the law of large numbers". In contrast, things that merely can happen are "physics and chemistry giving rise to complex life", "a solar eclipse", and "twin primes".

  4. ^

    Examples of phenomena that would obviously not happen in particular scenarios or their history are: "Eating lots of sweets is not good for your health" before 1000 BCE, "sufficiently powerful AI would cause human extinction" today, or "any two particles attract each other via gravity" if you live in a bunker and don't know about the outside world. But not "heavier-than-air flight is impossible" in 1000 BCE, because birds can fly. And not "eating lots of sweets is not good for your health" in 1000 CE, because it's not obvious enough that there weren't problems with sugar before then.

  5. ^

    Examples of phenomena that are, obviously, practically impossible to observe experimentally are: "Humans can harness nuclear energy" in 1700, or "physics and chemistry can give rise to complex life" if you can't rely on materials from Earth. But not "eating lots of sweets is not good for your health" once you have sugar, or "smoking isn't healthy" anytime; at least not unless you ban unethical experiments.

  6. ^

    Examples of phenomena on which it is, obviously, practically impossible to gain evidence by formal analysis are: "Physics and chemistry can give rise to complex life" or "eating lots of sweets is not good for your health"; both of these get the bonus points. "Riemann hypothesis" and "P vs NP" are debatable, but definitely don't get the bonus points. Phenomena like "if you don't eat, you will die", "if you aim a rocket straight at the Moon, it will fail to land there" if you only know high-school math, "CO2 causes global warming", and "nukes could cause nuclear winter" do not count, since we can demonstrate them in simplified models that many people would agree are at least somewhat accurate and informative about the real thing.

3 Answers

Hastings

Phenomenon: The cosmological principle
Situation where it seems false and unfalsifiable: The distant future, after galaxies outside the Local Group have receded beyond the cosmic event horizon
 

According to a widely held understanding of the far future (~100 billion years from now), the distant galaxies will fade completely from view and the Local Group will likely have merged into one galaxy. For civilizations that arise in this future, orbiting trillion-year-old red dwarfs, the hypothesis that there are billions of galaxies just like their own will be unfalsifiable. The evidence will point to all mass in the universe living in one lump with a reachable center.
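
For a rough sense of scale (standard textbook numbers, not something from this answer): in a dark-energy-dominated future the universe expands roughly exponentially, and the cosmic event horizon ends up at about c/H, so anything not gravitationally bound to the Local Group eventually recedes beyond it.

```python
# Ballpark sketch of the event-horizon scale in a (roughly) de Sitter future.
c = 299_792.458  # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s per Mpc (approximate present-day value)

horizon_mpc = c / H0                       # ~4300 Mpc
horizon_gly = horizon_mpc * 3.262e6 / 1e9  # convert Mpc -> billions of light years

print(f"event-horizon scale: ~{horizon_mpc:.0f} Mpc (~{horizon_gly:.0f} Gly)")
# Only the gravitationally bound Local Group stays inside this horizon;
# everything else redshifts away, which is why far-future astronomers would
# see a single island of mass.
```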

This isn't my example; it's more or less the canonical scenario to use as a metaphor for how inflation-based multiverse theories could be true yet undetectable. For example, see the afterword to "A Universe from Nothing": https://www.google.com/books/edition/A_Universe_from_Nothing/TGpbASdsIW4C?hl=en&gbpv=1&dq=A%20universe%20from%20nothing%20dawkins&pg=PA187&printsec=frontcover

cubefox

In an interview, Elon Musk said that if gravity on Earth had been only slightly stronger, it would have been impossible to build orbital rockets. If this is true, presumably certain astronomical observations that are filtered out by the atmosphere or Earth's magnetic field couldn't have been made, though I don't know any specifics.
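
One way to see why "slightly stronger gravity" bites so hard (my own sketch using the Tsiolkovsky rocket equation; the exhaust velocity and loss figures are rough assumptions, and the planetary radius is held fixed at Earth's): the speed needed for low orbit scales like sqrt(g·R), while the propellant mass ratio a rocket needs grows exponentially in that speed.

```python
# Hedged illustration: required propellant mass ratio vs. surface gravity,
# via the Tsiolkovsky rocket equation (single stage, very rough numbers).
import math

R = 6.371e6      # planetary radius, m (Earth's, held fixed)
v_ex = 3500.0    # effective exhaust velocity of a good chemical engine, m/s (rough)
losses = 1500.0  # extra delta-v for gravity and drag losses, m/s (very rough)

for factor in (1.0, 1.5, 2.0):
    g = 9.81 * factor
    v_orbit = math.sqrt(g * R)        # circular low-orbit speed
    dv = v_orbit + losses             # crude total delta-v requirement
    mass_ratio = math.exp(dv / v_ex)  # Tsiolkovsky: initial mass / final mass
    print(f"{factor:.1f}x Earth gravity: delta-v ~ {dv/1000:.1f} km/s, "
          f"mass ratio ~ {mass_ratio:.0f}")
# ~15 at Earth gravity, ~24 at 1.5x, ~37 at 2x -- and every stage pays this
# exponential penalty, which is the intuition behind the claim.
```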

VojtaKovarik

Some partial examples I have so far:
 

Phenomenon: For virtually any goal specification, if you pursue it sufficiently hard, you are guaranteed to get human extinction.[1]
Situation where it seems false and unfalsifiable: The present world.
Problems with the example: (i) We don't know whether it is true. (ii) Not obvious enough that it is unfalsifiable.


Phenomenon: Physics and chemistry can give rise to complex life.
Situation where it seems false and unfalsifiable: If Earth didn't exist.
Problems with the example: (i) If Earth didn't exist, there wouldn't be anybody to ask the question, so the scenario is a bit too weird. (ii) The example would be much better if it were the case that, given enough time, any planet will produce life.

Phenomenon: Gravity -- all things with mass attract each other. (As opposed to "things just fall in this one particular direction".)
Situation where it seems false and unfalsifiable: If you lived in a bunker your whole life, with no knowledge of the outside world.[2]
Problems with the example: The example would be even better if we somehow had a formal model that (a) describes how physics works, (b) we are confident is correct, (c) would, if analysed, tell us whether the theory is true or false, (d) but is too complex to actually analyse. (Similar to how chemistry-level simulations are too complex for studying evolution.)

Phenomenon: Eating too much sweet stuff is unhealthy.
Situation where it seems false and unfalsifiable: If you can't get lots of sugar yet, and only rely on fruit etc.
Problems with the example: The scenario is a bit too artificial. You would have to pretend that you can't just go and harvest sugar from sugar cane and have somebody eat lots of it.

  1. ^

    See here for comments on this. Note that this doesn't imply AI X-risk, since "sufficiently hard" might be unrealistic, and also we might choose not to use agentic AI, etc.

  2. ^

    And if you didn't have any special equipment, etc.

8 comments

(Egan's Incandescence is relevant and worth checking out - though it's not exactly thrilling :))

I'm not crazy about the terminology here:

  • Unfalsifiable-in-principle doesn't imply false. It implies that there's a sense in which the claim is empty. This tends to imply [it will not be accepted as science], but not [it is false].
  • Where something is practically unfalsifiable (but falsifiable in principle), that doesn't suggest it's false either. It suggests it's hard to check.
    • It seems to me that the thing you'd want to point to as potentially suspicious is [practically unfalsifiable claim made with high confidence].
  • The fact that it's unusual and inconvenient for a prediction to be practically unfalsifiable does not inherently make such a prediction unsound.
  • I don't think it's foolish to look for analogous examples here, but I guess it'd make more sense to make the case directly:
    • No, a hypothesis does not always need to make advance predictions (though it's convenient when it does!).
      • Claims predicting AI disaster are based on our not understanding how things will work concretely. Being unable to make many good predictions in this context is not strange.
    • Various AI x-risk claims concern patterns with no precedents we'd observe significantly before the end. This, again, is inconvenient - but not strange: they're dangerous in large part because they're patterns without predictable early warning signs.

I agree with all of this. (And good point about the high confidence aspect.)

The only thing that I would frame slightly differently is that:
[X is unfalsifiable] indeed doesn't imply [X is false] in the logical sense. On reflection, I think a better phrasing of the original question would have been something like: 'When is "unfalsifiability of X is evidence against X" incorrect?'. And this amended version often makes sense as a heuristic --- as a defense against motivated reasoning, conspiracy theories, etc. (Unfortunately, many scientists seem to take this too far, and view "unfalsifiable" as a reason to stop paying attention, even though they would grant the general claim that [unfalsifiable] doesn't logically imply [false].)

I don't think it's foolish to look for analogous examples here, but I guess it'd make more sense to make the case directly.

That was my main plan. I was just hoping to accompany that direct case with a class of examples that build intuition and bring the point home to the audience.

When is "unfalsifiability of X is evidence against X" incorrect?'

In some sense this must be at least half the time, because if X is unfalsifiable, then not-X is also unfalsifiable, and it makes little sense to have this rule constitute evidence against X and also evidence against not-X.
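
To make that explicit (my own small formalisation, not part of the original comment): for any observation E,

$$P(X \mid E) < P(X) \iff P(\neg X \mid E) > P(\neg X),$$

since $P(X \mid E) + P(\neg X \mid E) = 1$ and $P(X) + P(\neg X) = 1$. So whatever evidential weight "X is unfalsifiable" carries, it cannot count against X and against not-X at the same time.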

I would generally say that unfalsifiability doesn't imply anything about truth value. It's more like "this is a hypothesis that scientific investigation can't make progress on". Also, it's probably worth tracking the category of "hypotheses that you haven't figured out how to test empirically, but you haven't thought very hard about it yet".

There may be useful heuristics about people who make unfalsifiable claims.  Some of which are probably pretty context-dependent.

Probably not what you want, but there is a risk that people use "unfalsifiable" where a better word would be "illegible" or "unable to verify using today's technology".

For example, the original behaviorists rejected the concept of thoughts and emotions as unscientific. Generally, people in the past assumed that thoughts are immaterial, and therefore not a possible object of science.

This is controversial, but from a certain perspective, the entire concept of probability is... well, can you define probability in a way that is not circular? The Popperian idea of "falsification" seems black and white. If a theory predicts that something happens with probability 90%, how many experiments do you need to do until the theory is "falsified"?
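
A toy illustration of the problem (my own sketch, not from the comment): suppose the theory says each trial succeeds with probability 0.9. No finite run of failures literally contradicts it; all you can compute is how unlikely the observed data are under the theory, and then pick a rejection threshold by convention.

```python
# Toy sketch: how surprising is an observed success count under a theory that
# claims P(success) = 0.9? (Binomial tail probability; my own illustration.)
from math import comb

def tail_prob(n: int, k: int, p: float = 0.9) -> float:
    """P(at most k successes in n independent trials, each succeeding with prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

for n, k in [(10, 5), (100, 70), (1000, 850)]:
    print(f"{k}/{n} successes: probability of a result this low = {tail_prob(n, k):.1e}")
# The numbers get astronomically small but never reach zero, so a strict
# "falsified / not falsified" verdict requires a threshold chosen by convention.
```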

The hypothetical bunker people could easily perform the Cavendish experiment to test Newtonian gravity, there just (apparently) isn't any way they'd arrive at the hypothesis.

Good point. Also, for the purpose of the analogy with AI X-risk, I think we should be willing to grant that the people arrive at the alternative hypothesis through theorising. (Similarly to how we came up with the notion of AI X-risk before having any powerful AIs.) So that does break my example somewhat. (Although in that particular scenario, I imagine that sceptics of Newtonian gravity would come up with alternative explanations for the observation. Not that this seems very relevant.)

Wait, is there some common belief that "unfalsifiable implies false" is correct or incorrect? I think unfalsifiable implies irrelevant (or at least unknowable), neither false nor true.

Also, there's a big difference between "unfalsifiable today" (these propositions can have a truth value, even if it's not knowable by us just yet) and "unfalsifiable even in theory" (for which it's arguable what "truth value" even means). For many of your examples, we'd not usually use the word "unfalsifiable", just "we can't currently test them, but we can figure out how with a bit more effort".

I agree that "we can't test it right now" is more appropriate. And I was looking for examples of things that "you can't test right now even if you try really hard".