
In a model, people often have the intuition that removing a transformation means 'removing the need to assume a particular value', when in reality it just means imposing a trivial or extreme value for the parameter in question.

Example. Joe: "Let's take the unweighted sum of the next 20 years' expenditures, instead of their present-value discounted sum, so we don't have to decide on a discount rate." In reality, Joe has once again sneaked in a 0% p.a. discount rate, a value neither Joe nor most others would want to defend as a palatable rate. Joe would not argue that not discounting, per se, gives a clearer picture; and we could readily agree with Joe on some substantially non-zero rate as at least more realistic than 0%. Joe really only wants to 'not discount' so as to seemingly 'avoid having to pick a value'. A small sketch of this identity follows below.
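A minimal sketch of the point in Python, with made-up expenditure figures: the 'unweighted sum' is exactly the present-value sum with the discount rate pinned to 0%, so refusing to pick a rate is itself picking one.

```python
# Minimal sketch: an 'unweighted sum' is just present-value discounting
# with the discount rate pinned to 0%. Figures are made up for illustration.

def present_value_sum(expenditures, rate):
    """Discounted sum; expenditures[t] is the amount spent t years from now."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(expenditures))

expenditures = [100.0] * 20  # hypothetical: 100 per year for 20 years

unweighted = sum(expenditures)  # Joe's 'assumption-free' sum
assert unweighted == present_value_sum(expenditures, rate=0.0)  # identical

print(present_value_sum(expenditures, rate=0.0))   # 2000.0 (implicit 0% rate)
print(present_value_sum(expenditures, rate=0.03))  # ~1532.4 (a nonzero rate)
```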

This seems like a common fallacy. I think it might have a name, but I'm no longer sure whether I've actually seen it anywhere, and I don't know how to search for it. Help, anyone?

(Let me know if this is too trivial a question to be worth the forum's time; I'm happy to remove it in that case.)

1 comment:

It mainly looks like simplification, rather than any definite fallacy. The real world is complex, and it is necessary to simplify it for the purpose of deriving a conclusion. A 0% discount rate is much easier to model than any nonzero rate, especially in an informal verbal discussion. Picking any nonzero discount rate means choosing a more complex model, so choosing 0% is defensible on that basis. The question is mainly whether the simplification is known (or should be known) to seriously change the conclusion.

If either you or Joe knows that the outcome of the simpler model will be meaningfully wrong as a result, then it becomes indefensible. Here "meaningfully wrong" means not just different from reality (no model will be perfect), but wrong enough that something meaningful, such as a decision, a more general belief, or a quality of outcome, depends upon that difference. A sketch of such a case follows below.
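A hedged illustration with hypothetical cashflows: here the choice between two projects flips with the discount rate, so modelling at 0% would not merely be imprecise but would change the decision.

```python
# Hypothetical cashflows: the discount rate flips the decision, which is when
# treating 0% as 'no assumption' stops being a harmless simplification.

def npv(cashflows, rate):
    """Net present value; cashflows[t] arrives t years from now."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

project_a = [0, 0, 0, 0, 130]    # pays late: 130 in year 4 (made up)
project_b = [30, 30, 30, 30, 0]  # pays early: 30 per year for 4 years

for rate in (0.0, 0.05):
    a, b = npv(project_a, rate), npv(project_b, rate)
    print(f"rate={rate:.0%}: NPV(A)={a:.1f}, NPV(B)={b:.1f}"
          f" -> choose {'A' if a > b else 'B'}")

# rate=0%: NPV(A)=130.0, NPV(B)=120.0 -> choose A
# rate=5%: NPV(A)=106.9, NPV(B)=111.7 -> choose B
```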

There is a humorous term, "spherical cow", used for physics problems in which one might do something comparable to modelling a cow as a uniform sphere in order to derive some property without accounting for the complexity of an actual cow's shape. In many cases, this will still give good approximations with radically less complex math! In others, it can yield absurd results.

You could consider oversimplifying to be a type of fallacy in cases where the simplification discards information that makes a critical difference to the conclusion. There are quite a lot of fallacies of oversimplification, but discarding a conclusion merely because the model isn't perfect is also a fallacy!