Counterfactual resiliency is a plausibility test for predictive models, proposed by Stuart Armstrong, aimed particularly at models that do not provide causal explanations for their predictions. If there are likely possible worlds in which the inputs to the model stay constant but the event the model attempts to predict happens differently, or in which the event happens the same way but the inputs change in a way that would have led the model to make a different prediction, then the model must be flawed.

Armstrong argues that Robin Hanson's model predicting changes in the rate of economic growth and Ray Kurzweil's Law of Accelerating Returns both fail the counterfactual resiliency test, because both rely heavily on trends from the distant past which could easily have gone differently without significantly changing our current and future situation. Conversely, he argues that AI and brain uploading could have been made easier or more difficult without significantly altering the past, so the outcomes these models attempt to predict could change without any change to the models' inputs. Moore's law, however, passes the counterfactual resiliency test in his view, because enough factors contribute to the 18-month doubling time that changing any one of them would have only a limited effect on the overall trend.
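The intuition behind the Moore's law claim can be illustrated with a small numerical sketch in Python. This is not from Armstrong's post; the factor names and annual improvement rates below are made-up assumptions. It shows that when an exponential trend is the combined effect of many independent contributions, counterfactually weakening any single contribution shifts the doubling time only modestly.

import math

# Illustrative sketch only (not Armstrong's method): treat the roughly
# 18-month doubling time as the combined effect of several independent
# improvement factors. All names and rates are made-up assumptions.
factor_growth_rates = {
    "lithography": 0.15,   # fractional improvement per year
    "materials": 0.10,
    "architecture": 0.08,
    "process_yield": 0.07,
    "investment": 0.06,
}

def doubling_time_years(rates):
    # Doubling time implied by summing each factor's log-growth contribution.
    total_log_growth = sum(math.log1p(r) for r in rates.values())
    return math.log(2) / total_log_growth

baseline = doubling_time_years(factor_growth_rates)
print(f"baseline doubling time: {baseline:.2f} years")

# Counterfactually halve one factor at a time: because many factors contribute,
# the overall trend shifts only modestly.
for name, rate in factor_growth_rates.items():
    perturbed = dict(factor_growth_rates, **{name: rate / 2})
    print(f"halving {name}: doubling time {doubling_time_years(perturbed):.2f} years")

A model resting on a single long-run input would not behave this way: perturbing that one input would move the prediction substantially, which is the kind of fragility the test is meant to expose.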
Stuart Armstrong, "Counterfactual resiliency test for non-causal models", LessWrong: http://lesswrong.com/lw/ea8/counterfactual_resiliency_test_for_noncausal/