For the question about human attributions, I would expect an evolutionary story: the world has causal structure, and organisms that correctly represent that structure are fitter than those that do not; we were lucky in that somewhere in our evolutionary history, we acquired capacities to observe and/or infer causal relations, just as we are lucky to be able to see colors, smell baking bread, and so on.
This is not an explanation: it is simply saying "evolution did it". An explanation should exhibit the mechanism whereby the concept is acquired.
It's more like Hume's story: imagine Adam, fully formed with excellent intellectual faculties but with neither experience nor a concept of causation. How could such a person come to have a correct concept of causation?
That is one way of presenting the thought experiment.
Since we are now imagining a creature that has different faculties than an ordinary human, it is not clear what the thought experiment tells us about how an ordinary human could come to have the concept.
Another way of presenting the thought experiment is to ask how a baby arrives at the concept. Then we are not imagining a creature that has different faculties than an ordinary human.
Another way is to imagine a robot that we are building. How can the robot make causal inferences? Again, "we design it that way" is no more of an answer than "God made us that way" or "evolution made us that way". Consider the question in the spirit of Jaynes' use of a robot in presenting probability theory. His robot is concerned with making probabilistic inferences but knows nothing of causes; this robot is concerned with inferring causes. How would we design it that way? Pearl's works presuppose an existing knowledge of causation, but do not tell us how to first acquire it.
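To make the robot question concrete — while admitting that this grants it, question-beggingly, the very notion of an intervention that Pearl presupposes — here is a minimal Python sketch of one thing such a robot might do: probe a toy world by intervening on each variable and checking which interventions move which variables. The toy world, the threshold, and all names here are my own illustrative inventions, not anyone's actual proposal:

```python
import random

# Toy world, hidden from the "robot": X causes Y (y = x + noise).
# Passing do_x or do_y overrides the mechanism for that variable,
# i.e. an intervention in the sense Pearl's do-operator formalizes.
def sample_world(do_x=None, do_y=None):
    x = random.gauss(0, 1) if do_x is None else do_x
    y = x + random.gauss(0, 0.1) if do_y is None else do_y
    return x, y

def mean(vals):
    return sum(vals) / len(vals)

def infer_direction(n=2000):
    # Does intervening on X shift the distribution of Y?
    y_lo = mean([sample_world(do_x=0.0)[1] for _ in range(n)])
    y_hi = mean([sample_world(do_x=2.0)[1] for _ in range(n)])
    # Does intervening on Y shift the distribution of X?
    x_lo = mean([sample_world(do_y=0.0)[0] for _ in range(n)])
    x_hi = mean([sample_world(do_y=2.0)[0] for _ in range(n)])
    x_moves_y = abs(y_hi - y_lo) > 0.5  # crude threshold, chosen for the toy
    y_moves_x = abs(x_hi - x_lo) > 0.5
    if x_moves_y and not y_moves_x:
        return "X -> Y"
    if y_moves_x and not x_moves_y:
        return "Y -> X"
    return "undetermined"
```

Note what the sketch does not answer: the robot here already has "intervene and compare" built in by us, which is exactly the move the objection targets. It shows what causal inference looks like once the concept is granted, not how the concept is first acquired.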
I want to know what resources we are giving this imaginary Adam. Adam has no concept of causation and no ability to perceive causal relations directly. Can he perceive spatial relations directly? Temporal relations? Does he represent his own goals? The goals of others? ...
That is part of the question. What resources does such a creature need in order to proceed from ignorance of causation to knowledge of causation?
I definitely agree that evolutionary stories can become non-explanatory just-so stories. The point of my remark was not to give the mechanism in detail, though, but just to distinguish the following two ways of acquiring causal concepts:
(1) Blind luck plus selection based on fitness of some sort. (2) Reasoning from other concepts, goals, and experience.
I do not think that humans or proto-humans ever reasoned their way to causal cognition. Rather, we have causal concepts as part of our evolutionary heritage. Some reasons to think this is right include: the ...
Half-closing my eyes and looking at the recent topic of morality from a distance, I am struck by the following trend.
In mathematics, there are no substantial controversies. (I am speaking of the present era in mathematics, since around the early 20th century. There were some before then, before it had been clearly worked out what was a proof and what was not.) There are few in physics, chemistry, molecular biology, and astronomy: some exist, but they are not the bulk of any of these subjects. Turn to biology more generally, then history, psychology, and sociology, and controversy becomes an ever larger part of the practice, growing with the subject's distance from the possibility of reasonably conclusive experiments. Finally, politics and morality consist of nothing but controversy and always have done.
Curiously, participants in discussions of all of these subjects seem equally confident, regardless of the field's distance from experimental acquisition of reliable knowledge. What correlates with distance from objective knowledge is not uncertainty, but controversy. Across these fields (not necessarily within them), opinions are firmly held independently of how well they can be supported, and defended and attacked the more fiercely, the less support they have. The less information there is about the actual facts, the more scope there is for continuing the fight instead of changing one's mind. (So much for the Aumann agreement of Bayesian rationalists.)
Perhaps mathematicians and hard scientists are not more rational than others, but merely work in fields where it is easier to be rational. When they turn into crackpots outside their discipline, it is not that they have become irrational; they were that irrational all along, but have now wandered into an area without safety rails.