Maybe "Is accurate enough that it doesn't change our answer by an unacceptable amount"? The level of accuracy we want depends on context.
How would you measure the accuracy of a model, other than by its probability of giving accurate answers? "Accurate" depends on what margin of error you accept; alternatively, you can define accuracy with penalties that grow the further the model diverges from reality.
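As a concrete illustration of the "growing penalties" idea, here is a minimal sketch of a quadratic (Brier-style) scoring rule, one common way to score probabilistic predictions. The function name and the example numbers are hypothetical, not taken from the original discussion.

```python
# Minimal sketch (assumption: a quadratic/Brier-style scoring rule is one way
# to penalize a model more the further its probabilities diverge from reality).

def brier_score(predicted_probs, outcomes):
    """Mean squared distance between predicted probabilities and what actually
    happened (1 if the event occurred, 0 if it did not). 0 is perfect; larger
    values mean larger divergence from reality."""
    assert len(predicted_probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(predicted_probs, outcomes)) / len(outcomes)

# Hypothetical example: two models forecast the same three events.
outcomes = [1, 0, 1]                   # what actually happened
calibrated_model = [0.9, 0.2, 0.8]     # close to reality -> small penalty
overconfident_model = [0.1, 0.9, 0.2]  # far from reality -> large penalty

print(brier_score(calibrated_model, outcomes))     # ~0.03
print(brier_score(overconfident_model, outcomes))  # ~0.75
```

Under a rule like this, being badly wrong costs much more than being slightly wrong, which is one way to make "accurate enough for the context" precise.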
Too many people attempt to use logic when they should be using probabilities - or rather, they are in fact using probabilities but don't acknowledge it. Here are some of the major fallacies that come from confusing logic and probability in this way:
Common fallacies in probability