"Nine innings and three outs" works much better to elicit "baseball".
Let me restate: Two boxes appear. If you touch box A, the contents of box B are vaporized. If you attempt to open box B, box A and its contents are vaporized. Contents as previously specified. We could probably build these now.
Experimentally, how do we distinguish this from the description in the main thread? Why do we take Omega seriously when, if the discussion dealt with the number of angels dancing on the head of a pin, the derision would be palpable? The experimental data point to taking box B. Even if Omega is observed delivering the boxes and making the specified claims about their contents, why are those claims taken on faith as an accurate description of the problem?
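To put a number on "the experimental data point to taking box B", here is a minimal sketch assuming the standard Newcomb amounts ($1,000 in box A, $1,000,000 in box B) and an observed predictor accuracy p; both figures are my assumptions for illustration, not anything stated in the thread.

```python
# Expected value of one-boxing vs. two-boxing, given an observed
# predictor accuracy p. Assumed Newcomb amounts: box A holds $1,000;
# box B holds $1,000,000 iff Omega predicted one-boxing.

def ev_one_box(p: float) -> float:
    # Omega predicted one-boxing correctly with probability p -> B is full.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # You always get A; B is full only if Omega wrongly predicted one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.51, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing wins whenever p > 0.5005, so even weak experimental evidence
# of Omega's accuracy settles the bet.
```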
"... whenever a tester finds a user input that crashes your program, it is always bad - it reveals a flaw in the code - even if it's not a user input that would plausibly occur; you're still supposed to fix it. "Would you kill Santa Claus or the Easter Bunny?" is an important question if and only if you have trouble deciding. I'd definitely kill the Easter Bunny, by the way, so I don't think it's an important question."
I write code for a living; I do not claim that the question crashes the program. Rather, the answer is irrelevant: I don't think the question is important or insightful regarding our moral judgements, since it lacks physical plausibility. BTW, since one can think of God as "Santa Claus for grown-ups", the Easter Bunny lives.
Why is this a serious question? Given the physical unreality of the situation - the putative existence of 3^^^3 humans and the ability to actually create the option in the physical universe - why is this question taken seriously while something like "is it better to kill Santa Claus or the Easter Bunny?" is considered silly?
Encouraging your children to believe in Santa Claus teaches them that you will lie to them because you think it's cute. I promised my daughter that I would never lie to her -- I might refuse to answer, but never lie.
Rather than "I don't know", I like to use either "no data" or "insufficient data". I am enough of a geek that it is considered - for me - a "reasonable utterance", and it's easier to qualify a quantitative answer if I'm later pressed. And BTW, not having seen that tree, zero is much better lower bound.
A problem in moving from game-theoretic models to the "real world" is that in the latter we don't always know the other decision maker's payoff matrix; we only know - at best! - his possible strategies. We can only guess at the other's payoffs, albeit fairly well in a social context. We are more likely to make a mistake because we have the wrong model of the opponent's payoffs than because we make poor strategic decisions.
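A toy sketch of that failure mode, with invented numbers (both opponent matrices below are illustrative assumptions, not anything from the thread): my own payoffs are fixed and stag-hunt-like, and the only thing that varies is my model of the opponent's payoff matrix.

```python
# Toy 2x2 game. Strategies: 0 = cooperate, 1 = defect.
# my_payoff[me][them] is my payoff; it has no dominant strategy for me,
# so my best move depends entirely on what I predict the opponent does.

my_payoff = [[4, 0],   # I cooperate: 4 if they cooperate, 0 if they defect
             [3, 1]]   # I defect:    3 if they cooperate, 1 if they defect

# Two candidate models of the opponent; opp[them][me] is their payoff.
# (Both toy matrices happen to give the opponent a dominant strategy.)
models = {
    "classic PD opponent": [[3, 0], [5, 1]],  # defecting dominates for them
    "cooperation-loving":  [[5, 2], [1, 0]],  # cooperating dominates for them
}

for name, opp in models.items():
    # Opponent's dominant strategy under this model.
    them = 0 if all(opp[0][me] >= opp[1][me] for me in (0, 1)) else 1
    # My best response to that predicted move.
    me = max((0, 1), key=lambda s: my_payoff[s][them])
    print(f"{name}: predict they play {them}, so I play {me}, "
          f"getting {my_payoff[me][them]}")

# If I adopt the "cooperation-loving" model but they are really the PD
# opponent, I cooperate, they defect, and I get 0 -- the modeling error
# costs more than any strategic slip made within a correct model.
```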
Suppose we change this game so that the payoff matrix for the paperclips is chosen from a suitably defined random distribution. How will that change your decision whether to "cooperate" or to "defect"?
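One way to make that concrete, stressing that the uniform distribution, the paperclipper's decision rule, and my own payoff matrix below are all my assumptions rather than the commenter's:

```python
import random

# Assumed setup: my payoffs are a standard PD for me, the paperclipper's
# four payoffs are drawn uniformly from [0, 1], and it plays whichever
# move maximizes its expected payoff against a 50/50 prior over my moves.

my_payoff = [[3, 0],   # I cooperate   (indices: [me][them], 0=C, 1=D)
             [5, 1]]   # I defect

def clippy_move(opp):
    # opp[them][me]: its expected payoff for each move vs. a uniform me.
    ev = [sum(opp[t]) / 2 for t in (0, 1)]
    return 0 if ev[0] >= ev[1] else 1

trials = 100_000
totals = [0.0, 0.0]            # my cumulative payoff if I play C / D
for _ in range(trials):
    opp = [[random.random(), random.random()] for _ in (0, 1)]
    t = clippy_move(opp)
    totals[0] += my_payoff[0][t]
    totals[1] += my_payoff[1][t]

print("EV(cooperate) =", totals[0] / trials)
print("EV(defect)    =", totals[1] / trials)
# With symmetric random payoffs, Clippy's move carries no information about
# mine, so defecting still dominates for me here. Randomizing the paperclip
# payoffs only changes the answer if my choice can influence, or be
# correlated with, its move -- which is the crux of the original problem.
```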