Oh, no, I think you misunderstand what parts of the question-problem I was talking about. To better characterize the bogus example, let's flesh it out a bit:
Q: Which is healthier?
( ) Bleggs
( ) Rubes
( ) Both
( ) Neither
Now obviously, the first four chains of reasoning go as follows:
In most specific cases, when presented with an arbitrary thought-experiment-style choice of being handed a Blegg or a Rube, neither is good. Owning either a Blegg or a Rube will make you less physically healthy. So among the four options, "Neither" is clearly better. This is pretty certain, though some crack scientists do claim conclusive evidence that owning both at the same time can be healthy. But I don't put much faith in their suspicious results.
"But!", screams the more logically-minded, "the question isn't about which of the four choices presented is better - it's clear that the fourth option is intended to mean 'neither Bleggs nor Rubes are healthier', not that you should pick neither. So the implied thought experiment means you have to pick one of the two, and in that case Bleggs are clearly marginally better!" Okay, fine. So Bleggs are most likely healthier if you have to choose one of the two - they're unlikely to be equally unhealthy or healthy, after all.
But let's take a step back for a moment. If you look at the grand scheme of things, at a macro scale, Rubes do reduce the total number of Bleggs and Rubes, because each Rube will destroy at least five Bleggs. So in the grand scheme of things, having Rubes is healthier than having Bleggs, if we can't attack the source! Clearly, both of the previous chains of reasoning are too narrow-minded and don't think of the big picture. On a large scale, the Rubes are indeed healthier-per-unit than the Bleggs. Probably.
Ah, but what if it is implied that this is an all-or-nothing paradigm, and what if others interpret it this way? Then, obviously, the complete absence of both Bleggs and Rubes would be a Very Bad Thing™, since we require Bleggs and Rubes to produce Tormogluts, a necessary component of modern human prosperity! Thus, both are (probably) healthier than only having one or the other (and obviously better than neither).
...
On the other hand, Bleggs and Rubes are unnatural, unsustainable in the long term, and we will soon need to research new ways to produce Tormogluts. Most people who see you advocating for them will automatically pattern-match you as The Enemy, so you should pick "Neither", even though that's not what the question implies. But this is a shitty situation, and if someone reading my answer to this question interprets it this way, I don't care to befriend them anyway. So I reject this answer.
And let's not even think of what the Kurgle fanatics have to say about this question. The horror.
Assuming all of the above went through your mind in a few seconds when you first read the question... what answer do you choose? Do you also put a preference filter on other people's answers? Just choosing the highest or most confident probability from the above isn't going to cut it if this question matters to you a lot.
I used to pick the “least bad” answer in such cases, but then I decided to clear all my previous answers, and now when I see a question to which the answer I wish I could give is “Mu” or “ADBOC” or “Taboo $word” or “Avada Ked--[oh right, new censorship policy, sorry]”, I just skip it.
(This is a semi-serious introduction to the metaethics sequence. You may find it useful, but don't take it too seriously.)
Meditate on this: A wizard has turned you into a whale. Is this awesome?
"Maybe? I guess it would be pretty cool to be a whale for a day. But only if I can turn back, and if I stay human inside and so on. Also, that's not a whale.
"Actually, a whale seems kind of specific, and I'd be surprised if that was the best thing the wizard could do. Can I have something else? Eternal happiness maybe?"
Meditate on this: A wizard has turned you into orgasmium, doomed to spend the rest of eternity experiencing pure happiness. Is this awesome?
...
"Kind of... That's pretty lame, actually. On second thought I'd rather be the whale; at least that way I could explore the ocean for a while.
"Let's try again. Wizard: maximize awesomeness."
Meditate on this: A wizard has turned himself into a superintelligent god, and is squeezing as much awesomeness out of the universe as it could possibly support. This may include whales and starships and parties and jupiter brains and friendship, but only if they are awesome enough. Is this awesome?
...
"Well, yes, that is awesome."
What we just did there is called Applied Ethics. Applied ethics is about what is awesome and what is not. Parties with all your friends inside superintelligent starship-whales are awesome. ~666 children dying of hunger every hour is not.
(There is also normative ethics, which is about how to decide if something is awesome, and metaethics, which is about something or other that I can't quite figure out. I'll tell you right now that those terms are not on the exam.)
"Wait a minute!" you cry, "What is this awesomeness stuff? I thought ethics was about what is good and right."
I'm glad you asked. I think "awesomeness" is what we should be talking about when we talk about morality. Why do I think this?
"Awesome" is not a philosophical landmine. If someone encounters the word "right", all sorts of bad philosophy and connotations send them spinning off into the void. "Awesome", on the other hand, has no philosophical respectability, hence no philosophical baggage.
"Awesome" is vague enough to capture all your moral intuition by the well-known mechanisms behind fake utility functions, and meaningless enough that this is no problem. If you think "happiness" is the stuff, you might get confused and try to maximize actual happiness. If you think awesomeness is the stuff, it is much harder to screw it up.
If you do manage to actually implement "awesomeness" as a maximization criterion, the results will be actually good. That is, "awesome" already refers to the same things "good" is supposed to refer to.
"Awesome" does not refer to anything else. You think you can just redefine words, but you can't, and this causes all sorts of trouble for people who overload "happiness", "utility", etc.
You already know that you know how to compute "awesomeness", and it doesn't feel like it has a mysterious essence that you need to study to discover. Instead it brings to mind concrete things like starship-whale math-parties and not-starving children, which is what we want anyway. You are already enabled to take joy in the merely awesome.
"Awesome" is implicitly consequentialist. "Is this awesome?" engages you to think of the value of a possible world, as opposed to "Is this right?" which engages you to think of virtues and rules. (Those things can be awesome sometimes, though.)
I find that the above is true about me, and is nearly all I need to know about morality. It handily inoculates against the usual confusions, and sets me in the right direction to make my life and the world more awesome. It may work for you too.
I would append the additional facts that if you wrote it out, the dynamic procedure to compute awesomeness would be hellishly complex, and that right now, it is only implicitly encoded in human brains, and nowhere else. Also, if the great procedure to compute awesomeness is not preserved, the future will not be awesome. Period.
Also, it's important to note that what you think of as awesome can be changed by considering things from different angles and being exposed to different arguments. That is, the procedure to compute awesomeness is dynamic and created already in motion.
If we still insist on being confused, or if we're just curious, or if we need to actually build a wizard to turn the universe into an awesome place (though we can leave that to the experts), then we can see the metaethics sequence for the full argument, details, and finer points. I think the best post (and the one to read if only one) is Joy in the Merely Good.