Followup to: The Bedrock of Morality, Abstracted Idealized Dynamics
Tim Tyler comments:
Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.
Boy, you know, when you think about it, Nature turns out to be just full of disagreement.
Rocks, for example, fall down - so they agree with us, who also fall when pushed off a cliff - whereas hot air rises into the air, unlike humans.
I wonder why hot air disagrees with us so dramatically. I wonder what sort of moral justifications it might have for behaving as it does; and how long it will take to argue this out. So far, hot air has not been forthcoming in terms of moral justifications.
Physical systems that behave differently from you usually do not have factual or moral disagreements with you. Only a highly specialized subset of systems, when they do something different from you, should lead you to infer their explicit internal representation of moral arguments that could potentially lead you to change your mind about what you should do.
Attributing moral disagreements to rabbits or foxes is sheer anthropomorphism, in the full technical sense of the term - like supposing that lightning bolts are thrown by thunder gods, or that trees have spirits that can be insulted by human sexual practices and lead them to withhold their fruit.
The rabbit does not think it should be eating grass. If questioned the rabbit will not say, "I enjoy eating grass, and it is good in general for agents to do what they enjoy, therefore I should eat grass." Now you might invent an argument like that; but the rabbit's actual behavior has absolutely no causal connection to any cognitive system that processes such arguments. The fact that the rabbit eats grass, should not lead you to infer the explicit cognitive representation of, nor even infer the probable theoretical existence of, the sort of arguments that humans have over what they should do. The rabbit is just eating grass, like a rock rolls downhill and like hot air rises.
To think that the rabbit contains a little circuit that ponders morality and then finally does what it thinks it should do, and that the rabbit has arrived at the belief that it should eat grass, and that this is the explanation of why the rabbit is eating grass - from which we might infer that, if the rabbit is correct, perhaps humans should do the same thing - this is all as ridiculous as thinking that the rock wants to be at the bottom of the hill, concludes that it can reach the bottom of the hill by rolling, and therefore decides to exert a mysterious motive force on itself. Aristotle thought that, but there is a reason why Aristotelians don't teach modern physics courses.
The fox does not argue that it is smarter than the rabbit and so deserves to live at the rabbit's expense. To think that the fox is moralizing about why it should eat the rabbit, and this is why the fox eats the rabbit - from which we might infer that we as humans, hearing the fox out, would see its arguments as being in direct conflict with those of the rabbit, and we would have to judge between them - this is as ridiculous as thinking (as a modern human being) that lightning bolts are thrown by thunder gods in a state of inferrable anger.
Yes, foxes and rabbits are more complex creatures than rocks and hot air, but they do not process moral arguments. They are not that complex in that particular way.
Foxes try to eat rabbits and rabbits try to escape foxes, and from this there is nothing more to be inferred than from rocks falling and hot air rising, or water quenching fire and fire evaporating water. They are not arguing.
This anthropomorphism of presuming that every system does what it does because of a belief about what it should do, is directly responsible for the belief that Pebblesorters create prime-numbered heaps of pebbles because they think that is what everyone should do. They don't. Systems whose behavior indicates something about what agents should do, are rare, and the Pebblesorters are not such systems. They don't care about sentient life at all. They just sort pebbles into prime-numbered heaps.
I disagree. Rabbits have the "should" in their algorithm: they search for plans that "could" be executed and converge on the plans satisfying their sense of "good", a process similar to the one operating in humans or fruit flies and very unlike the one operating in rocks. The main difference is that it seems difficult to persuade a rabbit of anything, but it is equally difficult to persuade a drunk Vasya from the 6th floor that flooding the neighbors is really bad. Animals (and even fruit flies) can adapt, can change their behavior in response to the same context as a result of being exposed to a training context, can start selecting different plans in the same situations. They don't follow the same ritual as humans do, and they don't exchange moral arguments at the same level of explicitness as humans, but as cognitive algorithms go, they have all the details of "should". Not all humans can be persuaded by valid moral arguments; some would need really deep reconfiguration before such arguments would start working, in ways not yet accessible to modern medicine, and equally unlike the normal rituals of moral persuasion. What would a more intelligent, more informed rabbit want? Would rabbits uploaded into a rabbit-Friendly environment experience moral progress?
Reconstructing the part of the original argument that I think is valid, I agree that rabbits don't possess the facilities for moral argument in the same sense that humans do, but that is a different phenomenon from the fundamental should of cognitive algorithms. Discussing this difference requires understanding the process of human-level moral argument, and not just the process of goal-directed action or the process of moral progress. There are different ways in which behavior changes, and I'm not sure there is a meaningful threshold at which adaptation becomes moral progress; there might be.
This is cogent and forceful, but still wrong, I think. There's something to morality beyond the presence of a planning algorithm. I can't currently imagine what that might be, though, so maybe you're right that the difference is one of degree and not kind.
I think part of the confusion is that Eliezer is distinguishing morality as a particular aspect of human decision-making. A lot of the comments seem to want to include any decision-making criterion as a kind of generalized morality.
Morality may just be a deceptively simple word that covers extremely complex aspects of how humans choose, and justify, their actions.