Maybe rationalization is even necessary for dealing rationally with real life (the word kind of gives it away).
Only in the sense that lying can be called "truthization".
I read that. I agree with the argument. But it doesn't really address the intuition behind my argument.
The idea is that you have concurrent processes creating partial models of partial but overlapping aspects of reality. These models a) help make predictions for each aspect (descriptively), b) may help with acting in the context of that aspect (operationally/prescriptively), and c) may be mutually inconsistent at the symbolic layer.
Do you want to throw out all those benefits to gain consistency? It could be that you can't achieve consistency across overlapping models at all without some super all-encompassing model. Or it could be that such a super-model is horribly big and slow.
Another month, another rationality quotes thread. The rules are: