Consequentialism traditionally doesn't distinguish between acts of commission and acts of omission. Not flipping the lever to the left is equivalent to flipping it to the right.
But there seems to be one clear case where the distinction is important. Consider a moral learning agent. It must act in accordance with human morality and desires, about which it is currently uncertain.
For example, it may consider whether to forcibly wirehead everyone. If it does so, then everyone will agree, for the rest of their existence, that the wireheading was the right thing to do. Therefore, across the whole future span of human preferences, humans agree that wireheading was correct, apart from a very brief period of objection in the immediate future. Given that human preferences are known to be inconsistent, there is no principled way to privilege that brief current objection over the endless future endorsement, so this seems to imply that forcible wireheading is the right thing to do (if you happen to personally approve of forcible wireheading, replace that example with some other forcible rewriting of human preferences).
What went wrong there? Well, this doesn't respect "conservation of moral evidence": the AI got the moral values it wanted, but only through the actions it took. This is very close to the omission/commission distinction. We'd want the AI not to take actions (commission) that determine the (expectation of the) moral evidence it gets. Instead, we'd want the moral evidence to accrue "naturally", without interference or manipulation from the AI (omission).
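One rough way to make this precise (a sketch only, with the notation $p$, $e$, and $v$ introduced here for illustration): let $p(v)$ be the agent's current credence that $v$ is the correct human value function, and let $e$ be the moral evidence it will later observe, whose distribution depends on the action $a$ it takes now. "Not determining the expected moral evidence" would then be the requirement that, for any two available actions $a$ and $a'$,

$$\mathbb{E}_{e \sim P(e \mid a)}\big[\,p(v \mid e)\,\big] \;=\; \mathbb{E}_{e \sim P(e \mid a')}\big[\,p(v \mid e)\,\big] \qquad \text{for every candidate } v.$$

The agent may still update on whatever evidence actually arrives, but its choice of action must not shift the expected update. Forcible wireheading violates this badly: conditional on that act, the posterior is guaranteed to concentrate on "wireheading was correct".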
I'm not sure the commission/omission distinction is really the key here. This becomes clearer by inverting the situation a bit:
Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?
That's interesting - basically, we're trying to educate an AI into human values, but those values are about to be swiftly changed into something different (and bad from our perspective).
I think there's no magical solution: either we build an FAI properly (which is very, very hard), in which case it would stop the third party, or we have an AI that we value-load while trying to prevent our values from changing as that happens.
The omission/commission point applies to value-loading AIs, not to a traditional FAI. But I admit it's not the best analogy.