If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.
The usual plain-vanilla way is out-of-sample testing: check the model on data that neither the model nor the researchers have seen before. It's common to set aside a portion of the data before the modeling process even starts, explicitly so it can serve as a final check once the model is done.
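A minimal sketch of that holdout workflow, assuming a scikit-learn setup (the synthetic data and linear model are placeholders for whatever is actually being fit):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # placeholder features
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=500)

# Set the holdout aside *before* any modeling, and don't touch it until the end.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)

# The final check: performance on data the model has never seen.
print("train R^2:  ", r2_score(y_train, model.predict(X_train)))
print("holdout R^2:", r2_score(y_holdout, model.predict(X_holdout)))
```

The point of splitting first is that any choice made while looking at the holdout data (feature selection, hyperparameters, even deciding which model "looks right") quietly turns it into training data.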
In cases where the stability of the underlying process is in doubt, there may be no good option other than waiting: freeze the model in place and test it on new data as it comes in.
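One way to make that concrete is a small harness that scores the frozen model on each new batch as it arrives. This is only a sketch; `incoming_batches` is a hypothetical stand-in for whatever the real data feed is:

```python
from sklearn.metrics import r2_score

def prospective_check(frozen_model, incoming_batches):
    """Score a model that is fixed in place on data arriving after the fit.

    `incoming_batches` yields (X_new, y_new) pairs as they become available.
    The model is never refit, so each score is a genuine out-of-sample test,
    and a drifting score over time is evidence the underlying process moved.
    """
    for X_new, y_new in incoming_batches:
        score = r2_score(y_new, frozen_model.predict(X_new))
        print(f"batch of {len(y_new)}: R^2 = {score:.3f}")
```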
How well the model fits the data it was built on is not necessarily a good guide to its predictive capabilities. Overfitting is still depressingly common.
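A toy demonstration of the gap (the polynomial degree and noise level here are arbitrary choices for illustration): a high-degree polynomial fits its training points almost perfectly and still generalizes badly.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=40).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.3, size=40)

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.5, random_state=1
)

# A degree-15 polynomial has enough freedom to chase the noise in 20 points.
overfit = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
overfit.fit(x_train, y_train)

print("train R^2:", r2_score(y_train, overfit.predict(x_train)))  # near 1.0
print("test R^2: ", r2_score(y_test, overfit.predict(x_test)))    # much worse
```

Looking at the train score alone, the model appears excellent; only the held-out score exposes the problem.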