it changes the issue to the probability of the model.
To throw out an idea I never followed up on, I think the "probability of a model" is a category error. Most models we deal with, and particularly in the context of assigning probabilities to models, are not propositions that are true or false, but maps that are more or less accurate.
I'm not sure what the implications for model testing and generalization theory would be, but I expect there are some, and it has always irked me to see expressions like P(M1).
I think 4 generalizes better as
Impossible under certain assumptions does not mean impossible.
Remembering Jaynes' "background information I" is often helpful.
Another way to generalize 4 is
Always correct your probability estimates for the possibility that you've made an incorrect assumption.
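That correction is just the law of total probability: mix the model's conditional estimate with a fallback estimate for the case where an assumption fails. A minimal sketch (the 0.5 fallback below is a hypothetical ignorance prior, not anything from the original discussion):

```python
def corrected_probability(p_given_assumptions, p_assumptions_hold,
                          p_given_violated=0.5):
    """Total-probability correction for assumption risk.

    p_given_assumptions: the model's estimate, conditional on its assumptions
    p_assumptions_hold:  your credence that those assumptions actually hold
    p_given_violated:    fallback estimate if an assumption fails
                         (0.5 here is an assumed ignorance prior)
    """
    return (p_given_assumptions * p_assumptions_hold
            + p_given_violated * (1 - p_assumptions_hold))

# A model that calls an event "impossible" (p = 0) under its assumptions
# still yields a nonzero corrected estimate once assumption risk is priced in.
print(corrected_probability(0.0, 0.99))  # 0.005
```

The point of the sketch is the structure, not the numbers: as long as p_assumptions_hold is below 1, no corrected estimate ever reaches 0 or 1.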
I don't think "changes the issue" is the best way to put it, because there is always some probability that your model is wrong, even when it doesn't declare anything impossible.
I don't know about this being a category error though. I think "map 1 is accurate with respect to X" is a valid proposition.
Too many people attempt to reason with logic when they should be reasoning with probabilities; in fact, they often are using probabilities without acknowledging it. Here are some of the major fallacies caused by confusing the two:
Common fallacies in probability