Consequentialism traditionally doesn't distinguish between acts of commission and acts of omission. Not flipping the lever to the left is equivalent to flipping it to the right.
But there seems to be one clear case where the distinction is important. Consider a moral learning agent: it must act in accordance with human morality and desires, about which it is currently uncertain.
For example, it may consider whether to forcibly wirehead everyone. If it does so, then everyone will agree, for the rest of their existence, that the wireheading was the right thing to do. So across the whole future span of human preferences, humans agree that wireheading was correct, apart from a very brief period of objection in the immediate future. Since human preferences are known to be inconsistent, the agent has to aggregate verdicts across time somehow, and the near-unanimous future approval swamps the brief initial objection: this seems to imply that forcible wireheading is the right thing to do (if you happen to personally approve of forcible wireheading, replace that example with some other forcible rewriting of human preferences).
What went wrong there? Well, this doesn't respect "conservation of moral evidence": the AI got the moral values it wanted, but only through the actions it took. This is very close to the omission/commission distinction. We'd want the AI not to take actions (commission) that determine the (expectation of the) moral evidence it gets. Instead, we'd want the moral evidence to accrue "naturally", without interference and manipulation from the AI (omission).
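One way to make this precise (my own sketch of the condition, not a standard formalization): write $E$ for the moral evidence the AI will receive and $a$ for its candidate actions. "Conservation of moral evidence" would then require something like

$$\mathbb{E}[E \mid \mathrm{do}(a)] = \mathbb{E}[E \mid \mathrm{do}(a')] \quad \text{for all actions } a, a',$$

so that no choice of action shifts the evidence the AI expects to see. Forcible wireheading violates this spectacularly: conditioning on that action moves the expected evidence all the way to "everyone approves, forever".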
That doesn't work, because not wireheading humanity is not the same as doing it and has different implications; and, as you say, whether it was right or wrong won't matter once it's done. Whereas if you could stop someone from doing that, NOT stopping them is (ostensibly) morally worse than agreeing with it. Inaction is itself an action: if a wrong is occurring, deciding not to stop it is just as bad as doing it. However, choosing not to do something bad is not the same as doing it.
I'm arguing against some poorly-thought-out motivations, e.g. "don't do anything that most people would disagree with most of the time". This falls apart if you can act to change what "people would disagree with" through some means other than preference satisfaction.
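As a toy illustration of that failure mode (the setup and names here are purely illustrative, not from anything above): score each candidate action by the fraction of people who endorse it *after* it has been carried out, and an action that forcibly rewrites preferences scores perfectly while doing nothing scores worst.

```python
from dataclasses import dataclass

@dataclass
class Person:
    endorses_wireheading: bool

def wirehead(population):
    """Forcibly rewrite preferences: afterwards, everyone endorses the act."""
    return [Person(endorses_wireheading=True) for _ in population]

def do_nothing(population):
    """Leave preferences as they are."""
    return list(population)

def naive_approval(action, population):
    """Score an action by the fraction who endorse it *after* it has run."""
    after = action(population)
    return sum(p.endorses_wireheading for p in after) / len(after)

people = [Person(endorses_wireheading=False) for _ in range(100)]
print(naive_approval(wirehead, people))     # 1.0 -- the metric is maxed out
print(naive_approval(do_nothing, people))   # 0.0 -- inaction scores worst
```

Any metric evaluated on post-action preferences is gameable in this way; the moral evidence has to accrue without the AI's interference, as above.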