If we’re pretending that belief in free will is both silly and surprising, then why aren’t we more surprised by our stronger biases toward more accurate notions like causality?
If there were no implicit provision like this, there would be no sense in asking a question like “why would brains tend to believe X rather than not-X?” To entertain the question, we first entertain the belief that our brains were “just naïve enough” that finding any sort of cognitive bias could surprise us. Belief in free will indicates a bias; that is the only sense in which I can read the question you asked.
Obviously, it is irrational to believe strongly either way if no evidence is commonly admitted. Various thought experiments suggest that free will is not among the beliefs we hold by evaluating log-likelihoods over hypotheses given evidence. So if belief in “free will” is significantly favored while also being baseless, a cognitive bias remains one of the better candidate explanations for the provisional surprise we claim upon observing belief in free will.
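To make the “log-likelihoods over hypotheses” phrasing concrete, here is a minimal sketch of that kind of evaluation; the two hypotheses, their priors, and the likelihood numbers are invented purely for illustration. When the evidence doesn’t discriminate between hypotheses, the posteriors just sit at the priors, which is why strong belief either way looks unwarranted.

```python
# Minimal sketch: Bayesian comparison of hypotheses in log space.
# All numbers here are hypothetical, chosen only to illustrate the point.
import math

# Hypothetical hypotheses with equal priors.
priors = {"free_will": 0.5, "no_free_will": 0.5}

# Hypothetical likelihoods P(observation | hypothesis) for one observation.
# Identical values stand in for "no evidence is commonly admitted".
likelihoods = {"free_will": 0.5, "no_free_will": 0.5}

# log posterior (up to a constant) = log prior + log likelihood
log_posts = {h: math.log(priors[h]) + math.log(likelihoods[h]) for h in priors}

# Normalize back into probabilities.
z = sum(math.exp(v) for v in log_posts.values())
posteriors = {h: math.exp(v) / z for h, v in log_posts.items()}

print(posteriors)  # indistinguishable evidence => posteriors equal the priors
```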
At least that is how it looks in my general, grossly naïve understanding. And in lieu of a stack trace, I'll say this: cognitive biases seem like heuristic simplifications that introduce systematic errors into inference, favoring improperly scored bets on our expectations in certain contexts. Assuming any reason exists for them, the motivation is most likely the same as with over-fitting in any other model: a sampling bias. And since engineering mistakes into our brains sounds generally harmful, each kind of over-fitting must pay off tremendously in some very narrow scope of high-risk, high-reward opportunities.
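As a rough sketch of the over-fitting analogy, assuming nothing beyond standard numpy and an invented “true” relationship: a flexible rule tuned on a narrow, biased slice of experience scores well inside that slice and badly in general.

```python
# Rough sketch of over-fitting as sampling bias: fit a flexible heuristic on a
# narrow slice of the input space, then compare its error there vs. in general.
# The underlying relationship and all numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    return np.sin(x)  # the hypothetical underlying relationship

# Biased sample: only a narrow, high-payoff slice of the input space.
x_narrow = rng.uniform(0.0, 1.0, size=20)
y_narrow = truth(x_narrow) + rng.normal(0.0, 0.05, size=20)

# A flexible heuristic fit only to that slice (high-degree polynomial).
coeffs = np.polyfit(x_narrow, y_narrow, deg=9)

def mse(x):
    return float(np.mean((np.polyval(coeffs, x) - truth(x)) ** 2))

x_wide = np.linspace(0.0, 6.0, 200)
print("error inside the narrow context:", mse(x_narrow))  # small
print("error in general:", mse(x_wide))                    # typically enormous
```

The heuristic pays off inside the narrow context it was tuned for and fails badly once the context widens, which is the sense in which the bias could be “worth it” only in a narrow scope.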
The need to reason causally isn’t any more self-evident than free will; it just sounds less mysterious because it fits the language of mathematics. Causality and free will are related, but learning causality seems such a necessary objective for a brain that I doubt we’d have acquired so many other biases without causality being secured first. I doubt we’re built without an opinion on either issue.