"Failing to reward deliberate ignorance" doesn't equal "Punishing deliberate ignorance." The issue here is not the ignorance, the issue is in making ignorance a superior moral state to knowledge.
Take ethics out of it: suppose you were the server admin for the Universe Server Company, where all existing universes are simulated for profit. Suppose that happy universes cost more resources to run than unhappy ones, and cost our imaginary company more money than they make, while "lukewarm" universes, which are neither happy nor unhappy, make just as much money as unhappy ones. If the USC were legally required to make Happy any universe it discovered to be less than Happy, what do you suppose company policy would be on investigating the happiness level of simulated universes?
How do you suppose people who feel obligations to those worse off than they are cope with this sense of obligation?
"Failing to reward deliberate ignorance" doesn't equal "Punishing deliberate ignorance."
The practical effect of this system amounts to punishing ignorance. Someone who remains ignorant risks being unknowingly immoral, and therefore punished; he can only alleviate that risk by becoming less ignorant.
In your analogy, we would "fail to reward deliberate ignorance" by requiring the Universe Server Company to make all its universes Happy whether or not it had investigated their happiness. That would indeed imp...
[CW: This post talks about personal experience of moral dilemmas. I can see how some people might be distressed by thinking about this.]
Have you ever had to decide between pushing a fat person onto some train tracks and letting five other people get hit by a train? Maybe you have a more exciting commute than I do, but for me it's just never come up.
In spite of this, I'm unusually well prepared for a trolley problem, in a way I'm not prepared for, say, being offered a high-paying job at an unquantifiably evil company. Similarly, if a friend asked me to lie to another friend about something important to them, I probably wouldn't carry out a utilitarian cost-benefit analysis. It seems that I'm happy to adopt consequentialist policies, but when it comes to personal quandaries where I have to decide for myself, I start asking what sort of person the decision makes me. What's more, I'm not sure this is necessarily a bad heuristic in a social context.
It's also noteworthy (to me, at least) that I rarely experience moral dilemmas. They just don't happen all that often. I like to think I have a reasonably coherent moral framework, but do I really need one? Do I just lead a very morally-inert life? Or have abstruse thought experiments in moral philosophy equipped me with broader principles under which would-be moral dilemmas are resolved before they reach my conscious deliberation?
To make sure I'm not giving too much weight to my own experiences, I thought I'd put a few questions to a wider audience:
- What kind of moral dilemmas do you actually encounter?
- Do you have any thoughts on how much moral judgement you have to exercise in your daily life? Do you think this is a typical amount?
- Do you have any examples of pedestrian moral dilemmas to which you've applied abstract moral reasoning? How did that work out?
- Do you have any examples of personal moral dilemmas on the scale of the Trolley Problem that nonetheless actually happened?
The Username/password anonymous account is, as always, available.