Half-closing my eyes and looking at the recent topic of morality from a distance, I am struck by the following trend.
In mathematics, there are no substantial controversies. (I am speaking of the present era in mathematics, since around the early 20th century. There were some before then, before it had been clearly worked out what was a proof and what was not.) There are few in physics, chemistry, molecular biology, and astronomy: some, but not the bulk of any of these subjects. Look at biology more generally, at history, psychology, and sociology, and controversy is a larger and larger part of the practice, in proportion to the subject's distance from the possibility of reasonably conclusive experiments. Finally, politics and morality consist of nothing but controversy, and always have done.
Curiously, participants in discussions of all of these subjects seem equally confident, regardless of the field's distance from experimental acquisition of reliable knowledge. What correlates with distance from objective knowledge is not uncertainty but controversy. Across these fields (not necessarily within them), opinions are firmly held independently of how well they can be supported, and are defended and attacked with a vigor in inverse proportion to that support. The less information there is about the actual facts, the more scope there is for continuing the fight instead of changing one's mind. (So much for the Aumann agreement of Bayesian rationalists.)
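For reference, the result being teased there is presumably Aumann's agreement theorem (1976): if two Bayesian agents have a common prior $P$, and their posterior probabilities of some event $E$ given their respective private information $\mathcal{I}_1$ and $\mathcal{I}_2$ are common knowledge between them, then

$$P(E \mid \mathcal{I}_1) = P(E \mid \mathcal{I}_2).$$

Ideally rational Bayesians cannot agree to disagree, which is exactly the behavior the pattern above flouts.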
Perhaps mathematicians and hard scientists are not more rational than others, but merely work in fields where it is easier to be rational. When they turn into crackpots outside their discipline, it is not that they have become less rational: they were that irrational all along, but have now wandered into an area without safety rails.
I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:
(1) Blind luck plus selection based on fitness of some sort.

(2) Reasoning from other concepts, goals, and experience.
I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right include: the fact that causal perception (pdf) and causal agency attributions emerge very early in children; the fact that other mammal species, like rats (pdf), have simple causal concepts related to interventions; and the fact that some forms of causal cognition emerge very, very early even among more distant species, like chickens.
Since causal concepts arise so early in humans and are present in other species, there is current controversy (right in line with the thesis in your OP) as to whether causal concepts are innate. That is one reason why I prefer the Adam thought experiment to examples involving babies: it is unclear whether babies already have the causal concepts or have to learn them.
EDIT: Oops, left out a paper and screwed up some formatting. Some day, I really will master Markdown.
Yes, it's (2) that I'm interested in. Is there some small set of axioms on the basis of which you can set up causal reasoning, as has been done for probability theory, and which can then be used as a gold standard against which to measure the untutored fumblings that result from (1)?
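For concreteness, the probability-theory benchmark here is presumably Kolmogorov's axiomatization, in which the entire calculus follows from three axioms on a measure $P$ over a space of events $\Omega$:

$$P(A) \ge 0 \text{ for every event } A, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \text{ for pairwise disjoint } A_i.$$

The question is whether causal reasoning admits anything comparably small, from which the rest can be derived and against which informal causal judgments can be checked.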