Too many people attempt to use logic when they should be using probabilities; in fact, they often are using probabilities without saying so. Here are some of the major fallacies caused by conflating logic and probabilities this way:
- "It's not certain" does not mean "It's impossible" (and vice versa).
- "We don't know" absolutely does not imply "It's impossible".
- "There is evidence against it" doesn't mean much on its own.
- Being impossible *in a certain model* does not mean being impossible: it shifts the issue to the probability of the model.
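The last point is just the law of total probability over models. A minimal sketch with made-up numbers: an event judged impossible under your favored model still gets nonzero overall probability once model uncertainty is included.

```python
# Hypothetical priors over two candidate models, A and B.
p_model = {"A": 0.99, "B": 0.01}

# The event is impossible under A, but not under B.
p_event_given = {"A": 0.0, "B": 0.5}

# Law of total probability: P(event) = sum over m of P(event | m) * P(m)
p_event = sum(p_event_given[m] * p_model[m] for m in p_model)
print(p_event)  # 0.005 -- small, but not zero
```

"Impossible" was only ever a statement conditional on A; unconditionally, the answer depends on how much you trust A.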
In the Bayesian setting, where probabilities are subjective beliefs, there shouldn't be too many problems with the expression "probability of a model".
There is a related concept, "model error", which is easier to clarify. To give a simple example, imagine you're trying to model a relationship between two variables that is actually well described by a log curve, but you are using linear regression without any transformations. Even as your sample size goes to infinity, your fit will retain a particular error component, which is known as model error.
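This is easy to see numerically. A sketch of the log-curve example (the choice of the interval [1, 10] is arbitrary): fitting a straight line to noiseless data from y = log(x), the mean squared error does not shrink toward zero as the sample grows; it stabilizes at a positive floor, which is the model error.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_fit_mse(n):
    """Fit a straight line to noiseless y = log(x) data and return the MSE."""
    x = rng.uniform(1.0, 10.0, n)
    y = np.log(x)                       # true relationship: a log curve
    slope, intercept = np.polyfit(x, y, 1)
    return np.mean((slope * x + intercept - y) ** 2)

for n in (100, 10_000, 1_000_000):
    print(n, linear_fit_mse(n))  # MSE settles near a positive constant
```

No amount of data fixes this: the error comes from the model family, not from sampling noise.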
What if you define "probability of a model" as 1 - (probability that replacing it with a different model would improve things)? Or, in simpler terms, the probability that the current model is the appropriate one for the task at hand.
In Jaynes' Bayesian setting, a probability is a number you assign to a proposition. Models, as generally used, are not propositions.
Don't like that one. For any model, you can generally conceive of an infinite number of slightly tweaked, slightly better versions, so that for any particular model P(model is the appropriate one) i...