"Optimal" by what value?
Well, I intended it in the minimal sense of "solving an optimization problem", if the moral quandary can be framed that way. I was not asserting that consequentialism is the optimal way to find a solution to a moral problem; I stated that consequentialism seems to me the only way to find an optimal solution to a moral problem that our previous morality cannot cover.
Since we don't have an objective morality here, a person only has their Wants (whether moral or not) to decide what counts as optimal.
But we do have an objective morality (in Eliezer's metaethics): it's morality! As far as I can understand, he states that morality is the common human computation that assigns values to states of the world around us. I believe he asserts these two things, among others:
morality is objective in the sense that it's a common fundamental computation, shared by all humans;
even if we encountered an alien way of assigning value to states of the world (e.g. the pebblesorters), we could not call it morality, because we cannot step outside our own moral system; we would have to call it something else, and it would not be morally comprehensible.
That is: human value computation -> morality; pebblesorter value computation -> primality, which is not moral, fair, just, etc.
One option would be the best from a consequentialist perspective, taking all consequences into account. However, taking this option would make the person taking it not only feel very guilty (for whatever reason; there are plenty of possibilities) but also harm their selfish interests in the long run.
I agree that a direct conflict between a deontological computation and a consequentialist one cannot be solved normatively by metaethics. At least, not by the one exposed here or the one I subscribe to. However, I believe that it doesn't need to be: it's true that morality, when contrasted with truly alien value computations like primality or clipping, looks rather monolithic; zoomed in, however, it can be rather confused.
I would say that in any situation where there is such a conflict, only the individual computation running in the actor's mind can determine the outcome. If you like, computational metaethics is descriptive, and maybe predictive, rather than prescriptive.
My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.
There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Eliezer's Metaethics sequence: as far as I can tell, a deontologist could agree with just about everything in the Sequences.
Said deontologist would argue that, to the extent a universal human morality can exist through generalised moral instincts, those instincts tend to be deontological (as supported by scientific studies: a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, whom they could accuse of wanting a consequentialist system and ignoring the moral instincts at the basis of their own speculations.
I'm not completely sure about this, but if I have indeed misunderstood, it seems an important enough misunderstanding to deserve clearing up.