I am Andrew Hyer, currently living in New Jersey and working in New York (in the finance industry).
The dungeon is laid out as depicted; Room 3 does not border Room 4, but does border Room 6. You don't, however, know exactly what the adventurers are going to do in your dungeon, or in what order they'll take on its encounters. Perhaps you could figure that out from the dataset.
(I've edited the doc to make this clearer).
I think you may have mixed up the ordering halfway through the example: in the first and third tables 'Emma and you' is $90 while 'Emma and Liam' is $30, but in the second it's the other way around, and some of the charts seem odd as a result?
I don't think you should feel bad about that! This scenario was pretty complicated and difficult, and even if you didn't solve it I think "tried to solve it but didn't quite manage it" is more impressive than "didn't try at all"!
There is a problem I want solved.
No-one, anywhere in the world, has solved it for me.
Therefore, Silicon Valley specifically is bad.
Were whichever markets you're looking at open at this time? Most stuff doesn't trade that much out of hours.
I think this is just an unavoidable consequence of the bonus objective being outside-the-box in some sense: any remotely real world is much more complicated than the dataset can ever be.
If you were making this decision at a D&D table, you might want to ask the GM:
I can't realistically explain all of these up front in the scenario! And those are just the questions I can think of - in my last scenario (the linked comment contains spoilers for it, if you haven't played it yet) the players came up with a zany scheme I hadn't considered myself.
Overall, I think if you realized that the +4 Boots in your inventory came from the Elf Ninja you can count yourself as having accomplished the Bonus Objective regardless of what you decided to do with them. (You can imagine that you discussed the matter with the GM and your companions, asked all the questions above, and made a sensible decision based on the answers).
ETA: I have finally tracked down the trivial coding error that ended up distorting my model: I accidentally used kRace in a few places where I should have used kClass while calculating Simon's values for Speed and Strength.
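(For the curious, the mistake was of roughly this shape. The actual code isn't shown here, so apart from the kRace/kClass names everything below (column names, bonus table, values) is a hypothetical reconstruction:)

```python
kRace, kClass = "Race", "Class"   # column-name constants

# Speed bonuses keyed by *class* (made-up values for illustration)
SPEED_BONUS_BY_CLASS = {"Fighter": 0, "Ninja": 3, "Wizard": 1}

def speed(row, base=10):
    # The bug: indexing a class-keyed table with the race column, so an
    # Elf Ninja looked up SPEED_BONUS_BY_CLASS["Elf"] instead of ["Ninja"].
    # In the real code this evidently produced wrong-but-plausible values
    # rather than an error, quietly distorting the model.
    return base + SPEED_BONUS_BY_CLASS[row[kRace]]   # should be row[kClass]
```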
Thanks for looking into that: I spent most of the week being very confused about what was happening there but not able to say anything.
Yeah, my recent experience with trying out LLMs has not filled me with confidence.
In my case the correct solution to my problem (how to use Kerberos credentials to authenticate a database connection using a certain library) was literally 'do nothing, the library will find a correctly-initialized krb file on its own as long as you don't tell it to use a different authentication approach'. Sadly, the AI's advice kept inventing ways for me to pass in the path of the krb file, none of which worked.
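(The library in question isn't named above, so purely as an illustrative sketch of what the 'do nothing' fix looks like, here it is with pyodbc against SQL Server, where Kerberos/integrated auth is requested by simply not supplying any credentials; the server and database names are placeholders:)

```python
import pyodbc

# Working approach: ask for integrated authentication and let the driver
# find the already-initialized Kerberos credential cache on its own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dbhost.example.com;"    # placeholder host
    "DATABASE=mydb;"                # placeholder database
    "Trusted_Connection=yes"        # note: no username, password, or krb path
)

# The failure mode described above: suggestions that invent a parameter for
# passing the krb file's path explicitly, which the driver neither needs
# nor accepts.
```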
I'm hopeful that they'll get better going forward, but right now they are a substantial drawback rather than a useful tool.
Ah, sorry to hear that. You can still look for a solution even if you aren't in time to make it on the leaderboard!
Also, if you are interested in these scenarios in general, you can subscribe to the D&D. Sci tag (click the 'Subscribe' button on that page) and you'll get notifications whenever a new one is posted.
I was suspicious of the methodology here: the difference between 'when the home team loses, violence goes up by 9%, but only where gambling is legalized' and 'when the home team loses, violence goes up by 10% where gambling is not legalized and by 10.9% where it is' is not one I trust sociology to track honestly.
I went to take a look at the paper, and do not think it really supports the argument at all.
The relevant charts I believe are on p26 here. The first one shows how intimate partner violence (IPV) varies with 'expected outcome of game' and 'actual outcome of game':
Note that 'expected outcome of game' is the thing that actually seems predictive, not 'actual outcome of game'. When the home team is expected to lose, domestic violence is high even if they win. When the home team is expected to win, domestic violence is low even if they lose (though even lower if they win).
This looks to me like a study that's been massively confounded by other effects. Perhaps good sports teams tend to be favored to win, and also to be in wealthy regions with little domestic violence? Regardless of the reason, though, this makes me very suspicious of anything this study claims to show.
The second chart shows how IPV varies with the outcome of the game based on whether sports betting is legal:
This does, indeed, show that areas with legalized sports betting had higher rates of domestic violence when the home team lost (~0.45 vs ~0.43). However, it also shows that they had lower rates of domestic violence when the home team won, and by a larger margin (~0.38 vs ~0.42). If we assume that half of games are wins and half are losses (seems... pretty reasonable?), I believe this chart depicts legalized sports betting lowering domestic violence (though again I don't know if I believe that either, given how obviously confounded this data is).
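Running that average explicitly, with my eyeballed readings off the chart (so treat the numbers as approximate):

$$\text{betting legal: } \frac{0.45 + 0.38}{2} = 0.415, \qquad \text{not legal: } \frac{0.43 + 0.42}{2} = 0.425.$$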
Somehow we seem to have gone from "a clearly confounded paper that (if you believe it) shows sports betting on average lowering domestic violence" to "there is strong evidence of sports betting increasing domestic violence".
I find this somewhat depressing.