Rob Bensinger writes that the search for true moral systems is quixotic: ethics is so complicated that we cannot hope to capture it in a neat, rule-based system. Our project, he suggests, ought instead to be to construct, starting from observations about what humans think, do & feel, a conglomerate of systems, or maybe an amalgam, that together capture these observations as well as possible, though necessarily imperfectly.
He writes this in a post about things he would like to change about contemporary philosophy,[1] which seems a little off, in a way, because my impression is that philosophers have been mostly anti-realist during the past century. That is to say, they hold that moral claims (e.g. "it is wrong to murder") are not the kind of thing that can be true or false, or at least that such claims don't even purport to be. Though I suppose it is possible to be both anti-realist & non-syncretic. Anyway, Rob would probably say that it doesn't matter, because even if moral claims can be true, there is no way for us to find out which of them are true – morality is just too complicated. The point is that we should not get tunnel vision seeking the one true moral system, because no single moral system could possibly describe the complex nature of human morality.
This is similar to the sentiment expressed here by Bernard Williams:[2]
I use moral system here to mean a set of procedures & principles that, given a certain act in a certain situation, tells us whether that act is right or wrong. Think of the kind of system somebody has for picking racehorse winners, except applied to ethics.
Golems
Richard McElreath, in his textbook on Bayesian statistics, likens statistical models to golems:[3]
Moral systems are a little bit like statistical models in that they are constructs that take some input, do some computation & produce some output. The input is the proposed action & the situation in which it happens; the output is the rightness or wrongness of that action.
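To make the input-output shape of the analogy concrete, here is a minimal sketch in Python. The two toy systems are crude inventions of mine, there only to illustrate the shape of the construct, not anybody's considered theory:

```python
from typing import Callable

# A "moral system", on this picture, is just a function from an act in a
# situation to a verdict. The situation is represented here, crudely, as a dict.
Verdict = str  # "right" or "wrong"
MoralSystem = Callable[[str, dict], Verdict]

def naive_welfarism(act: str, situation: dict) -> Verdict:
    # Judge the act purely by whether it increases total welfare.
    return "right" if situation.get("welfare_change", 0) >= 0 else "wrong"

def simple_rule_following(act: str, situation: dict) -> Verdict:
    # Judge the act purely by whether it appears on a fixed list of forbidden acts.
    return "wrong" if act in {"murder", "lying", "theft"} else "right"

# Same input, different outputs: each golem answers only within its own small world.
print(naive_welfarism("lying", {"welfare_change": 5}))        # right
print(simple_rule_following("lying", {"welfare_change": 5}))  # wrong
```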
The point is that the golem is dumb. It produces answers that are true only in its own small world. This small world is necessarily limited. It's limited by the assumptions baked into the model, by the data that is fed into it & so on.
The golem will always do what it's told. You can improve it, perhaps, by giving it more detailed instructions. But then you are tethering it to a specific environment. If you transport the golem to a different place, it will probably do worse with the more detailed instructions & better with the simpler instructions, in the same way that more complex statistical models can produce worse predictions when moved to a different context due to overfitting.
You see where I am going with this. I think moral systems can overfit, too, just like statistical models can. Say Eithne has philosophised herself a moral system. Now she comes across an interesting thought experiment. When she applies her moral system to the thought experiment (this is the new context), it produces an outlandish outcome which she cannot possibly accept. Maybe it fails, in the new situation, to judge wanton murder to be wrong. So she adjusts her moral system to account for the new data. It now produces sensible outcomes on both the new example & the old examples that she has tested it against. Neat! But Eithne has also made her system more complex. If she keeps doing that, it may become closely tied to the contexts she has considered & won't transfer well to new contexts.
In other words, there are simpler moral systems & there are more complex ones. The simple systems, like simple statistical golems, have the advantage of generalising better, though the complex ones may do better on the data they were built to fit.
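That trade-off is easy to demonstrate with an ordinary statistical golem. The sketch below uses made-up data & a made-up "new context" of my own: it fits a simple & a complex model to the same noisy observations, then evaluates both on the training data & in a context they have not seen:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training context": noisy observations of a simple underlying relationship.
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.3, x_train.size)

# "New context": the same underlying relationship, observed outside the
# range the golems were built on.
x_new = np.linspace(1, 2, 20)
y_new = 2 * x_new + rng.normal(0, 0.3, x_new.size)

for degree in (1, 6):
    coeffs = np.polyfit(x_train, y_train, degree)  # build a golem of this complexity
    err_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    err_new = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree}: training error {err_train:.2f}, new-context error {err_new:.2f}")

# The degree-6 golem fits the training data a little more closely, but its
# predictions in the new context are typically far worse: it has overfitted.
```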
I take Rob Bensinger to suggest that Eithne ought to keep her moral system simple & that, rather than extend it, she ought to complement it with new, additional systems. She should not search for the one true moral system; she should seek many moral systems that each have a truth in them. Of course these would need to be combined somehow, e.g. with a higher-order system that merges them or selects between them. That might work well because some moral systems may do better in certain contexts, so that, if she can successfully combine or select between them, the conglomerate can do well in many situations without any single member being too closely tied to a specific context. This is ensemble modelling, but with moral systems.
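Here is a minimal sketch of what that could look like, reusing the toy input-output shape from above. The member systems & the majority-vote rule are illustrative assumptions of mine, not anything Bensinger proposes:

```python
from collections import Counter
from typing import Callable

Verdict = str
MoralSystem = Callable[[str, dict], Verdict]

# Three deliberately simple member systems (toy inventions of my own).
def welfare(act: str, situation: dict) -> Verdict:
    return "right" if situation.get("welfare_change", 0) >= 0 else "wrong"

def rules(act: str, situation: dict) -> Verdict:
    return "wrong" if act in {"murder", "lying", "theft"} else "right"

def consent(act: str, situation: dict) -> Verdict:
    return "right" if situation.get("all_parties_consent", False) else "wrong"

def ensemble(systems: list[MoralSystem], act: str, situation: dict) -> Verdict:
    # The higher-order system: merge the members' verdicts by majority vote.
    votes = Counter(system(act, situation) for system in systems)
    return votes.most_common(1)[0][0]

case = {"welfare_change": 5, "all_parties_consent": True}
print(ensemble([welfare, rules, consent], "lying", case))  # "right": two members of three
```

A weighted combination, or a rule that defers to whichever member is most reliable in the situation at hand, would serve the analogy just as well as the majority vote.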
But there's a crucial difference between statistical models & moral systems. Statistical models are used to answer questions about the world as it is, descriptively. But moral systems are meant to answer what we ought to do, normatively. The problem here is that we have no observational data when it comes to morality. Or do we?
Intuition
When constructing & evaluating moral systems, we often use our intuitions about morality. Eithne, for example, adjusted her moral system because it told her that, in some certain situation, wanton murder was not wrong. But she knew that wanton murder was wrong. How did she know it? Intuitively.
I've written before about reconciling intuitive morality with intellectual moral systems. The example I used was that of Leo Tolstoy organising famine relief work in Ryazan despite his philosophical opposition, derived from the moral teachings of Jesus of Nazareth, to private charity. One of my proposed solutions in the Tolstoy post was that we use intuition to adjust & recalibrate our moral system. I wrote:
Could this lead to the kind of piling-up of contingencies that would cause a system to lose its ability to generalise, in other words to overfit? I don't know. Maybe there is so little data involved here that there is no real risk of it. Or maybe, instead of trying to extend & modify the simple moral systems, we ought to keep them simple & acknowledge that they will misfire in some edge cases. We could then identify some of these misses using our intuition.[4]
All this is similar to the double-level theory that Christine Korsgaard proposes for dealing with evil in Kantian ethics.[5] Kant famously wrote that one ought never to lie, not even when lying would very likely save somebody's life.[6] Korsgaard argues (with more subtlety than I can do justice to here) that such maxims are an ideal to live up to in ideal circumstances, but that, when faced with evil, & only when faced with evil, they can become a goal to seek instead.[7] What she is doing, it seems to me, is presenting two moral systems together with a meta-system for deciding which of them to apply in a given situation. That is like the rabbi animating two specialised golems along with a third golem to supervise & coordinate the two.
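In the golem idiom, a sketch of that two-level structure might look like the following. The predicates ("facing evil", "protects a victim") & the verdicts are placeholders of my own, not a faithful rendering of Korsgaard's criteria:

```python
# One system for ideal circumstances, one for dealing with evil, and a third,
# supervising system that decides which of the two applies.
def ideal_system(act: str, situation: dict) -> str:
    return "wrong" if act == "lying" else "right"

def nonideal_system(act: str, situation: dict) -> str:
    # When dealing with evil, lying to protect a would-be victim is permitted.
    if act == "lying" and situation.get("protects_victim", False):
        return "right"
    return ideal_system(act, situation)

def supervising_system(act: str, situation: dict) -> str:
    # The third golem: select the member system suited to the circumstances.
    member = nonideal_system if situation.get("facing_evil", False) else ideal_system
    return member(act, situation)

print(supervising_system("lying", {"facing_evil": True, "protects_victim": True}))  # right
print(supervising_system("lying", {"facing_evil": False}))                          # wrong
```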
If anything matters at all, objectively, then our task must be to find a moral system that maps as closely as possible to what is objectively good. It seems probable that we haven't found such a moral system yet. Maybe such a system is not even possible to express in human language or to understand using the human intellect. If so, the project of ethics is not dissimilar from that of the social sciences, say, in that it tries to produce answers by modelling the essential characteristics of very complicated human structures.
Bensinger, What I'd change about different philosophy fields. ↩︎
Williams, Utilitarianism. ↩︎
McElreath, Statistical Rethinking: A Bayesian Course with Examples in R and Stan. ↩︎
But ... if it's correct to sometimes override our moral system(s), or to combine them in some way, then how do we know that what we're doing is good at all? If a moral system is not correct in all situations, how can we trust its output in any one particular situation? Don't we need some solid fundament on which to stand? Otherwise we are like Baron Münchhausen, pulling ourselves out of the mud by our own hair.
You might be convinced that Kantianism is the good & true moral system. If so, it should be surprising that you are unsettled by some of its more controversial outcomes, like the impermissibility of lying to prevent murder. After all, while those outcomes may seem intuitively bad, if (that form of) Kantianism is correct, then they are in fact good.
This is the problem of moral realism that I referenced early on in this post. ↩︎
Korsgaard, Creating the Kingdom of Ends. ↩︎
Ibid. ↩︎
Ibid. ↩︎