There's a more practical dilemma, or at least one that doesn't require a diabolical figure to deliberately set it up, that's quite similar to the trolley problem. Details are here: http://www.friesian.com/valley/dilemmas.htm, but briefly, and edited to be more clear-cut:
An underwater tunnel is being constructed despite an almost certain loss of several lives. At a critical moment when a fitting must be lowered into place, a workman is trapped in a section of the partly laid tunnel. If it is lowered, it will surely crush the trapped workman to death. Yet if it is not, and a time-consuming rescue of the workman is attempted, the tunnel will have to be abandoned and the whole project begun anew. Ten workmen have already died in the project as a result of anticipated and unavoidable conditions in the building of the tunnel. What should be done? Was it a mistake to begin the tunnel in the first place? But don't we take such risks all the time?
The strong temptation here is to say 'we shouldn't build the tunnel', but I don't think that's a practical response.
I don't see how a tunnel would have such a critical piece that failing to lower it at the critical moment would force the whole project to start anew. Such circumstances are actively avoided in engineering.
It is really interesting that people who try to make up such 'kill to save a life' scenarios invariably end up making some major error about how something works, which they then try to disguise as a trivial low-level detail they urge us to ignore. Normally, if you aren't trying to trick someone into a fallacy, it is quite easy to come up with a...
I was discussing utilitarianism, charitable giving, and similar ideas with someone today, and I came up with a hybrid of the trolley problem (particularly the fat man variation) and the article by Scott Alexander/Yvain about using dead children as a unit of currency. It's not extremely original, and I'd be surprised if no one on LW had thought of it before.
You are offered a magical box. If you press the button on the box, one person somewhere in the world will die, you get $6,000, and $4,000 is donated to one of the top-rated charities on GiveWell.org. According to the $800-per-life-saved figure, this gift would save five lives, for a net gain of four lives plus $6,000 to you. Is it moral to press the button?
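The arithmetic behind the claim can be sketched quickly (the dollar figures are the ones assumed in the thought experiment, not real GiveWell estimates):

```python
# Figures assumed by the thought experiment (hypothetical).
cost_per_life_saved = 800   # dollars per life, the figure used above
donation = 4_000            # dollars routed to the charity
lives_lost = 1              # the one person the box kills

lives_saved = donation // cost_per_life_saved  # 4000 / 800 = 5
net_lives = lives_saved - lives_lost           # 5 - 1 = 4

print(lives_saved, net_lives)  # 5 4
```

Note that if the cost-per-life figure rises (see the edit below about $2,000), the donation amount just has to scale with it to keep the same net gain.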
All of the usual responses to the trolley problem apply. To wit: it's good to have heuristics like "don't kill." There are arguments about establishing Schelling points around not killing people. (The Schelling point argument doesn't work as well in a case like this, with the anonymity, privacy, and randomization of the person who gets killed.) Eliezer argued that a human actually facing a trolley problem is extraordinarily unlikely, and that while he would acknowledge that killing the fat man could be the appropriate action for an AI in that situation, it would not be for a human.
There are also lots of arguments against giving to charity. See here for some discussion of this on LessWrong.
I feel that the advantage of my dilemma is that the original argument for extreme altruism faces a whole lot of motivated cognition, because it implies that you should be giving much of your income to charity. In this dilemma, you want the $6,000, and so are inclined to be less skeptical of the charity's effectiveness.
Possible use: present this first, then argue for extreme altruism. This would annoy people, but as far as I can tell, pretty much everyone gets defensive and comes up with a rationalization for their selfishness when you bring up altruism anyway.
What would you people do?
EDIT: The $800 figure is probably out of date; $2,000 is probably more accurate. However, it's easy to simply increase the amount of money at stake in the thought experiment.
Edit 2: I fixed some swapped-around values, as kindly pointed out by Vaniver.