As with most trolley problems, I have to separate out "what's the right thing to do?" from "what would I do?"
In the situation as you described it, with the added proviso that there is no other unstated-but-relevant gotcha, the right thing to do is press the button. (By "gotcha" here I mean, for example, that it doesn't turn out that by pressing the button I also cause millions of other people to suffer, and you just didn't mention that part.)
Were I in that situation, of course, I would be solving an entirely different problem, roughly statable as "I am given a box with a button on it and am told, by a not-particularly-reliable source, that these various conditions apply, yadda yadda." I would probably conclude that I'm in some kind of an ethical research lab, and try to decide how likely it was that the researchers would actually kill someone, and how likely it was that they would actually give me money, and how likely it was that video of me pressing the "kill a random person" button would appear on YouTube, etc. Not really sure what I would ultimately conclude.
If I were in the situation and somehow convinced the situation was as you described it and no gotchas existed, I probably wouldn't press the button (despite believing that pressing the button was the right thing to do) because I'd fear the discomfort caused by my irrational conscience, especially if the person who died turned out to be someone I cared about. But it's hard to say; my actual response to emotional blackmail of this sort is historically very unpredictable, and I might just say "fuck it" and press the button and take the money.
"the right thing to do is press the button."
Why? Do we really need more people on this planet? I would be more likely to press the button in a net-neutral case (one saved, one dies, more money for me), provided your other conditions (not a research study, not a joke, full anonymity, etc.) hold.
I was discussing utilitarianism, charitable giving, and similar ideas with someone today, and I came up with a hybrid of the trolley problem (particularly the fat man variation) and the article by Scott Alexander/Yvain about using dead children as a unit of currency. It's not extremely original, and I'd be surprised if no one on LW had thought of it before.
You are offered a magical box. If you press the button on the box, one person somewhere in the world will die, you get $6,000, and $4,000 is donated to one of the top-rated charities on GiveWell.org. According to the $800-per-life-saved figure, this donation would save five lives, which is a net gain of four lives and $6,000 to you. Is it moral to press the button?
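The arithmetic behind the net-gain claim can be checked with a short sketch; the $800-per-life figure is the post's own assumption, not an established number:

```python
# Check the thought experiment's arithmetic, using the
# post's assumed $800-per-life cost-effectiveness figure.
COST_PER_LIFE = 800   # dollars per life saved (post's assumption)
DONATION = 4_000      # dollars donated to the GiveWell charity
PAYOUT = 6_000        # dollars paid to the button-presser
DEATHS = 1            # one random person dies per press

lives_saved = DONATION // COST_PER_LIFE   # 4000 / 800 = 5
net_lives = lives_saved - DEATHS          # 5 - 1 = 4

print(f"Lives saved: {lives_saved}, net gain: {net_lives} lives and ${PAYOUT}")
```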
All of the usual responses to the trolley problem apply. To wit: it's good to have heuristics like "don't kill." There are arguments about establishing Schelling points with regard to not killing people. (The Schelling point argument doesn't work as well in a case like this, with anonymity, privacy, and randomization of the person who gets killed.) Eliezer argued that a human finding themselves in an actual trolley problem is extraordinarily unlikely, so while he would acknowledge that pushing the fat man would be appropriate for an AI in that situation, it would not be for a human.
There are also plenty of arguments against giving to charity. See here for some discussion of this on LessWrong.
I feel the advantage of my dilemma is this: in the original, extreme altruism faces a great deal of motivated cognition against it, because it implies that you should be giving much of your income to charity. In this dilemma, you want the $6,000, and so are inclined to be less skeptical of the charity's effectiveness.
Possible use: Present this first, then argue for extreme altruism. This would annoy people, but as far as I can tell, pretty much everyone gets defensive and comes up with a rationalization for their selfishness when you bring up altruism anyway.
What would you people do?
EDIT: The $800-per-life figure is probably out of date; $2,000 is probably more accurate. However, it's easy to simply scale up the amounts of money at stake in the thought experiment.
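Scaling the dollar amounts to match the updated figure is straightforward; a minimal sketch, assuming the edit's $2,000-per-life estimate and keeping the same five-lives-saved structure:

```python
# Rescale the thought experiment for a higher cost-per-life estimate.
# With $2,000 per life, preserving the 5-saved / net-4 structure
# just requires a larger donation amount.
NEW_COST_PER_LIFE = 2_000   # dollars per life saved (edit's updated figure)
TARGET_LIVES_SAVED = 5      # same structure as the original dilemma

required_donation = NEW_COST_PER_LIFE * TARGET_LIVES_SAVED  # $10,000
print(f"Donation needed to save {TARGET_LIVES_SAVED} lives: ${required_donation}")
```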
Edit 2: I fixed some swapped-around values, as kindly pointed out by Vaniver.