I don't see how exactly a tunnel would have such a critical piece that failure to place it at the critical time would require the project to start anew. Such circumstances are actively avoided in engineering.
It is really interesting that people who try to make up such 'kill to save a life' scenarios invariably end up making some major error about how something works, which they then try to disguise as a trivial low-level detail we are urged to ignore. Normally, if you aren't trying to trick someone into a fallacy, it is quite easy to come up with a thought experiment that does not involve tunnels built card-house style, which must be abandoned and started afresh because one piece wasn't lowered in time.
Here's a very simple scenario for you to ponder: you have $100,000; you could donate $10,000 to charity without a noticeable dip in your quality of life, and that could easily save someone's life for a significant span of time. Very realistic, happens all the time, and is likely happening to you personally right now.
You don't donate.
Nonetheless, you spend inordinate amounts of time conjuring scenarios where it would be moral to kill someone, instead of, say, spending that same time working a job, making money, and donating it to charity.
Ponder this for a while, and do some introspection about your own actions. Are you a moral being who can be trusted to choose the path of action that's best for the common good? Hell no, and neither am I. Are you even trying to do the moral thing correctly? No evidence of that, either. If you ask me to explain this kill-one-to-save-N scenario-inventing behaviour, I'd say that some routine deep inside is probably interested in coming up with an advance rationalization for homicide for money, or the like, to broaden one's, hmm, let's say, opportunities. For this reason, rather than coming up with realistic scenarios, people come up with faulty models where killing is justified, because deep down they are working toward justifying a killing by means of a faulty model.
I was discussing utilitarianism, charitable giving, and similar ideas with someone today, and I came up with this hybrid of the trolley problem (particularly the fat man variation) and the article by Scott Alexander/Yvain about using dead children as a unit of currency. It's not terribly original, and I'd be surprised if no one on LW had thought of it before.
You are offered a magical box. If you press the button on the box, one person somewhere in the world will die, you get $6,000, and $4,000 is donated to one of the top-rated charities on GiveWell.org. According to the $800-per-life-saved figure, this donation would save five lives, for a net gain of four lives and $6,000 to you. Is it moral to press the button?
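For concreteness, here is the back-of-the-envelope arithmetic behind that claim, as a minimal sketch (the per-life cost is simply the figure assumed above, not a current GiveWell estimate):

```python
# Back-of-the-envelope arithmetic for the magical-box dilemma (illustrative values from the post).
donation = 4_000       # dollars routed to the GiveWell charity per button press
cost_per_life = 800    # assumed dollars needed to save one life
deaths_per_press = 1   # one person somewhere in the world dies

lives_saved = donation / cost_per_life      # 5.0
net_lives = lives_saved - deaths_per_press  # 4.0 lives gained, plus $6,000 to you
print(lives_saved, net_lives)
```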
All of the usual responses to the trolley problem apply. To wit: it's good to have heuristics like "don't kill." There are arguments about establishing Schelling points around not killing people. (The Schelling point argument doesn't work as well in a case like this, with the anonymity, privacy, and randomization of the person who gets killed.) Eliezer argued that, for a human, actually finding yourself in the trolley problem is extraordinarily unlikely, and that while he would be willing to acknowledge that pushing the fat man would be appropriate for an AI in that situation, it would not be for a human.
There are also plenty of arguments against giving to charity. See here for some discussion of this on LessWrong.
I feel that the advantage of my dilemma is that, in the original, extreme altruism faces a whole lot of motivated cognition against it, because it implies that you should give away much of your income to charity. In this dilemma, you want the $6,000, and so you are inclined to be less skeptical of the charity's effectiveness.
Possible use: Present this first, then argue for extreme altruism. This would annoy people, but as far as I can tell, pretty much everyone gets defensive and comes up with a rationalization for their selfishness when you bring up altruism anyway.
What would you people do?
EDIT: The $800 figure is probably out of date; $2,000 is probably more accurate. However, it's easy to simply increase the amount of money at stake in the thought experiment.
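Re-running the same sketch with the updated figure shows why the stakes need to scale (the $10,000 value is just one illustrative choice, not part of the original setup):

```python
# With the updated $2,000-per-life estimate, a $4,000 donation nets only one life...
print(4_000 / 2_000 - 1)    # 1.0 net lives gained -- the dilemma loses most of its force

# ...so raising the donation (e.g. to $10,000) restores the original five-saved-for-one ratio.
print(10_000 / 2_000 - 1)   # 4.0 net lives gained, as in the original setup
```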
Edit 2: I fixed some swapped-around values, as kindly pointed out by Vaniver.