The word "morality" needs to be made more specific for this discussion. One of the things you seem to be talking about is the mental behavior that produces value judgments or their justifications. It's something human brains do, and in principle we can systematically study this human activity in detail, or abstractly describe humans as brain-activity algorithms and study those algorithms. This characterization doesn't seem particularly interesting: you might describe mathematicians in the same way, but that would be nowhere close to an efficient route to learning about mathematics or to describing what mathematics is.
"Logic" and "mathematics" are also somewhat vague in this context. In one sense, "mathematics" may refer to anything at all, considered in a certain way, which makes the characterization empty of content. In another sense, it's the study of the kinds of objects that mathematicians typically study, but in this sense it probably won't refer to things like the activity of human brains or particular physical universes. "Logic" is more specific: it's a particular way of representing and processing mathematical ideas. It allows describing the things you are talking about and obtaining new information about them that wasn't explicit in the original description.
Morality in the FAI-relevant sense is a specification of what to do with the world, and as such it isn't concerned with human cognition. The question of the nature of morality in this sense is a question about ways of specifying what to do with the world. Such a specification would need to do at least two things: (1) it must be given in much less explicit detail than what can be extracted from it when decisions about novel situations need to be made, which suggests that the study of logic might be relevant; and (2) it must be related to the world, which suggests that the study of physics might be relevant.
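Requirement (1) can be made concrete with a toy sketch. Everything below is hypothetical illustration, not anything proposed in the text: a one-line "specification" that nonetheless yields decisions for arbitrarily many novel situations it never explicitly enumerates.

```python
# Toy illustration (hypothetical): a compact specification of "what to do"
# from which far more decisions can be extracted than are explicitly given.

def preferred_action(candidate_outcomes, score):
    """Pick the outcome the specification scores highest in a situation."""
    return max(candidate_outcomes, key=score)

# The entire "specification": prefer outcomes with larger total welfare.
# (A deliberately crude stand-in; real moral content would be far richer.)
score = sum

# A novel situation the specification was never explicitly written for:
novel_situation = [(1, 2, 0), (3, 3, 3), (0, 0, 10)]
print(preferred_action(novel_situation, score))  # -> (0, 0, 10)
```

The point is only structural: the rule is short, yet decisions about unforeseen situations are derivable from it, which is the kind of compression-plus-extraction that makes logic look relevant.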
This question about the nature of morality is separate from the question of how to pinpoint the right specification of morality to use in a FAI, out of all possible specifications. The difficulty of finding the right morality seems mostly unrelated to describing what kind of thing morality is. If I put a note with a number written on it in a box, it might be perfectly accurate to say that the box contains a number, even though it might be impossible to say what that number is, precisely, and even if people aren't able to construct any interesting models of the unknown number.
What do I mean by "morality isn't logical"? I mean it in the same sense in which mathematics is logical but literary criticism isn't: the "reasoning" we use to think about morality doesn't resemble logical reasoning. All systems of logic that I'm aware of have a concept of proof and a method of verifying, with a high degree of certainty, whether an argument constitutes a proof. As long as the logic is consistent (and we have good reason to think that many of them are), once we verify a proof we can accept its conclusion without worrying that there may be another proof establishing the opposite conclusion. With morality, though, we have no such method, and people constantly make moral arguments that can be reversed or called into question by other moral arguments. (Edit: For an example of this, see these posts.)
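The claim that proof verification is a mechanical method can be made vivid with a toy sketch. This is my own minimal illustration, not any proof system discussed in the text: each step of a purported proof must either be a premise or follow from earlier steps by modus ponens, and checking this is purely mechanical.

```python
# Toy illustration (hypothetical): verifying a proof is a mechanical check.
# An implication is encoded as a tuple ('->', antecedent, consequent).

def verify_proof(premises, steps):
    """Check that every step is a premise or follows from earlier
    steps by modus ponens: from A and ('->', A, B), conclude B."""
    derived = set(premises)
    for step in steps:
        if step in derived:
            continue  # step is a premise or was already derived
        follows = any(
            earlier[0] == '->' and earlier[1] in derived and earlier[2] == step
            for earlier in derived
            if isinstance(earlier, tuple)
        )
        if not follows:
            return False  # this step is not justified
        derived.add(step)
    return True

premises = {'A', ('->', 'A', 'B'), ('->', 'B', 'C')}
print(verify_proof(premises, ['A', 'B', 'C']))  # valid: each step checks out
print(verify_proof(premises, ['C']))            # invalid: C isn't derivable yet
```

Nothing like this checker exists for moral arguments, which is exactly the asymmetry the paragraph above points at.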
Not being a system of logic, moral philosophical reasoning likely (or at least plausibly) lacks the nice properties that a well-constructed system of logic would have: consistency, validity, soundness, or even the more basic property that considering arguments in a different order, or in a different mood, won't cause a person to accept an entirely different set of conclusions. For all we know, somebody trying to reason about a moral concept like "fairness" may just be taking a random walk as they move from one conclusion to another, based on the moral arguments they encounter or think up.
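The order-dependence worry can be illustrated with a toy model of my own (purely hypothetical, not a model from the text): an agent that adopts each argument's conclusion only when it is close enough to its current position. The same arguments, considered in a different order, leave the agent somewhere entirely different, whereas what's derivable in a consistent logic doesn't depend on the order in which proofs are found.

```python
# Toy illustration (hypothetical model): an agent that accepts a moral
# argument only if its conclusion is within `tolerance` of the agent's
# current position. The endpoint depends on the order of the arguments.

def final_position(arguments, start=0.0, tolerance=1.0):
    position = start
    for pull in arguments:                   # each argument "pulls" toward a value
        if abs(pull - position) <= tolerance:
            position = pull                  # argument accepted; position moves
    return position

args = [0.5, 1.2, 2.0, -0.8]
print(final_position(args))          # one order of the same arguments...
print(final_position(sorted(args)))  # ...another order, a different endpoint
```

In this crude model the first order walks the agent to 2.0 while the sorted order strands it at -0.8; the agent's "conclusions" are an artifact of presentation order, which is the failure mode the paragraph describes.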
In a recent post, Eliezer said "morality is logic", by which he seems to mean... well, I'm still not exactly sure what, but one interpretation is that a person's cognition about morality can be described as an algorithm, and that algorithm can be studied using logical reasoning. (Which is of course true, but in that sense math and literary criticism, as well as every other subject of human study, would also be logic.) In any case, I don't think Eliezer is explicitly claiming that an algorithm-for-thinking-about-morality constitutes an algorithm-for-doing-logic, but I worry that the characterization "morality is logic" may cause some connotations of "logic" to be inappropriately sneaked into "morality". For example, Eliezer seems to (at least at one point) assume that considering moral arguments in a different order won't cause a human to accept an entirely different set of conclusions, and maybe this is why. To fight this potential sneaking of connotations, I suggest that when you see the phrase "morality is logic", you remind yourself that morality isn't logical.