Many learning-theoretic setups (especially in the Frequentist camp) make an IID assumption: that the data can be split into samples (sometimes called episodes or data-points) which are drawn independently from one identical distribution (hence "independent and identically distributed", or IID).
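To make the assumption slightly more explicit (the notation here is ours, chosen just for illustration): if $x_1, \dots, x_n$ are the samples and $\mathcal{D}$ is the data distribution, IID-ness says the joint distribution factorizes as

$$P(x_1, \dots, x_n) = \prod_{i=1}^{n} \mathcal{D}(x_i),$$

so every sample comes from the same $\mathcal{D}$, and observing some samples tells us nothing further about the rest.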
This assumption sometimes allows us to prove that our methods generalize well; see especially PAC learning. However, in real life, when we say that a model "generalizes well", we really mean that it works well on new data whose distribution realistically differs somewhat from the training distribution. This change is called a distributional shift or a non-stationary environment.
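As one concrete instance (a standard uniform-convergence bound for a finite hypothesis class $\mathcal{H}$, stated here as an illustration rather than quoted from a particular source): with probability at least $1 - \delta$ over an IID sample of size $n$, every hypothesis $h \in \mathcal{H}$ satisfies

$$\mathrm{err}_{\mathcal{D}}(h) \;\le\; \widehat{\mathrm{err}}_n(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}},$$

where $\mathrm{err}_{\mathcal{D}}$ is the true error under the data distribution $\mathcal{D}$ and $\widehat{\mathrm{err}}_n$ is the empirical error on the sample. Note that the guarantee only concerns error under the same $\mathcal{D}$ the samples were drawn from; it says nothing once the deployment distribution differs.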
This framework (in which we initially make an IID assumption, but then model violations of it as "distributional shifts") has been used extensively to discuss robustness issues relating to AI safety -- particularly, inner alignment. We can confidently anticipate that traditional machine learning systems (such as deep neural networks) will perform well on average, so long as the deployment situation is statistically similar to the training data. However, as the deployment distribution moves further from the training distribution, catastrophic behaviors which were very rare on the original inputs can become probable…
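The following toy sketch (the distributions, model, and shift values are arbitrary choices made for illustration, not drawn from any particular experiment) makes the pattern concrete: a classifier fit on data from one distribution keeps its accuracy on statistically similar test data, and degrades steadily as the test distribution moves away from the training one.

```python
# Toy illustration: a model trained under an IID assumption degrades
# as the deployment distribution shifts away from the training one.
# All distributions and parameters below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes in 2D; `shift` translates both class means,
    modeling a covariate shift between training and deployment."""
    y = rng.integers(0, 2, size=n)
    means = np.where(y[:, None] == 1, 1.0, -1.0) + shift
    x = means + rng.normal(size=(n, 2))
    return x, y

# Train on the unshifted ("training-time") distribution.
x_train, y_train = sample(5000, shift=0.0)
clf = LogisticRegression().fit(x_train, y_train)

# Evaluate on increasingly shifted ("deployment-time") distributions.
for shift in [0.0, 0.5, 1.0, 2.0, 4.0]:
    x_test, y_test = sample(5000, shift=shift)
    print(f"shift={shift:.1f}  accuracy={clf.score(x_test, y_test):.3f}")
```

On the training distribution the classifier sits near its best achievable accuracy, but its fixed decision boundary becomes increasingly wrong as the class means drift, and accuracy falls toward chance.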