(Edited to add: See also this addendum.)
I commented on Facebook that I think our ethics is three-tiered: there are the things we imagine we consider right, the things we consider right, and the things we actually do. I was then asked to elaborate on the difference between the first two.
For the first one, I was primarily thinking of people who follow idealized, formal ethical theories - people who consider themselves act utilitarians, for instance. Yet when presented with real-life situations, they may often reply that the right course of action is different from what a purely act utilitarian framework would imply, taking into account things such as keeping promises and so on. Of course, a rule utilitarian would avoid that particular trap, but in general nobody is a pure follower of any formal ethical theory.
Now, people who don't even try to follow any formal ethical system probably have a closer match between their first and second categories. But I recently came to view our moral intuitions as a function that takes the circumstances of a situation as input and gives a moral judgement as output. We do not have access to the inner workings of that function, though we can and do try to build models that attempt to capture them. Still, as our understanding of the function is incomplete, our models are bound to sometimes produce mistaken predictions.
Based on our model, we imagine (if we don't think about the situations too closely) that in certain kinds of situations we would arrive at a specific judgement, but a closer examination reveals that the function outputs the opposite value. For instance, we might think that maximizing total welfare is always for the best, but then realize that we don't actually want to maximize total welfare if the people we consider our friends would be hurt. This can happen even if we weren't explicitly following any formal theory of ethics. And if *actually* faced with that situation, we might end up acting selfishly instead.
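To make the analogy concrete, here is a toy sketch (my own illustration, not anything from the original argument): the intuition is a black box we can only query, the ethical theory is a simplified model of it, and the welfare/friends case above is exactly where the model mispredicts. All names and the `situation` fields are invented for the example.

```python
def moral_intuition(situation):
    """The 'real' black-box function: we only observe its outputs."""
    if situation["friends_hurt"]:
        return "wrong"  # the intuition vetoes welfare maximization here
    return "right" if situation["welfare_gain"] > 0 else "wrong"

def act_utilitarian_model(situation):
    """A simplified model of the intuition: judge purely by total welfare."""
    return "right" if situation["welfare_gain"] > 0 else "wrong"

# The model matches intuition in ordinary cases...
ordinary = {"welfare_gain": 10, "friends_hurt": False}
assert act_utilitarian_model(ordinary) == moral_intuition(ordinary)

# ...but mispredicts in the case where welfare maximization hurts our friends.
edge_case = {"welfare_gain": 10, "friends_hurt": True}
print(act_utilitarian_model(edge_case))  # "right"
print(moral_intuition(edge_case))        # "wrong"
```

The point of the sketch is only that a model can fit most observed data while still getting some inputs wrong, which is all the analogy requires.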
This implies that people pick the moral frameworks which are best at justifying the ethical intuitions they already have. Of course, we knew that much already (even if we sometimes fail to apply it - I was previously puzzled over why so many smart people reject all forms of utilitarianism, since ultimately everyone has to perform some sort of expected utility calculation in order to make moral decisions at all, but then realized that their rejection had little to do with utilitarianism's merits as such). Some of us attempt to reprogram our moral intuitions by taking those models and following them even when they fail to predict the moral function's actual response. With enough practice, our intuitions may shift towards the consciously held stance, which may be a good or a bad thing.
Addendum: a response to a person who asked what, in this theory, makes ethics different from any other kind of preference.
I consider ideologies to be belief structures that lie somewhere between ethics and empirical beliefs, heavily blending parts of both. In an ideology, empirical beliefs are promoted to a level where they gain moral worth in themselves.
To answer your actual point, I would say that ethics really is just a special case of ordinary preferences. Normatively, there's no intrinsic reason why a preference for not killing would be more important than a preference for a hamburger. Of course, ethics-related preferences tend to be much stronger than others, giving them extra weight.
What makes ethics special is its functional role for the organism. (From now on, I'll refer to the original moral intuitions as "morals", and to the theoretical structure an organism builds to explain them as "ethics".) Morals tend to be strongly experienced preferences that drive behavior quite forcefully. In order to plan for the future, an organism needs to know how it will react in different situations, so over time it observes its moral reactions in a variety of circumstances and builds an ethical model that best fits the data. (This is basically a variant of the "the self is a self-model" idea from philosophy of mind, applied to ethics: see e.g. http://xuenay.livejournal.com/318670.html )
Of course, we humans tend to confuse models for the real thing. "I experience moral repugnance at this situation, which could be explained if my moral intuitions held that killing was wrong" becomes "killing is objectively wrong". Eventually we forget that the model was a model at all, and it becomes an ideology - a system where empirical beliefs about the nature of our morals have taken on moral value by themselves. Our morals aren't entirely untouchable black boxes, of course, and this kind of confusion may actually shift our morals in the direction of the theory. And I'm not saying that the models must be mistaken - they may very well be correct.
How is it possible to discuss ethics in such a scenario? Well, it should be noted that there's an additional reason why ethics is likely to have evolved. Building ethical models that predict moral behavior is useful not only for predicting your own behavior, but also that of others. I suspect that part of the instinctive dislike many people feel towards hypocrites comes from the fact that inconsistencies between theory and behavior make the hypocrites' behavior harder to predict, thus making alliances with them less safe. This drives people towards adopting ethical theories which are more internally consistent, or which at least appear so to others. (This theory is closely related to Robin Hanson's theory of identity: http://www.overcomingbias.com/2009/08/a-theory-of-identity.html ) And since ethical theories also take on moral worth for the individuals themselves, this provides another way in which discussion can actually modify our ethical systems.