Meta musing:
It looks like the optimal allocation is borderline fraudulent. When I think of in-universe reasons for the TAE to set up Cockatrice Eye rebates the way they did, my best guess is "there's a bounty on these monsters in particular, and the taxmen figure someone showing up with n Cockatrice Eyes will have killed ceil(n/2) of them". This makes splitting our four eyes (presumably collected from two monsters) four ways deceptive; my only consolation is that the apparently-standard divide-the-loot-as-evenly-as-possible thing most other adventuring teams seem to be doing also frequently ends up taking advantage of this incentive structure.
framing contradictory evidence as biased or manipulated
Most contradictory evidence is, to some extent (regardless of what it's contradicting).
dismissing critics as [...] deluded, or self-interested
Most critics are, to some extent (regardless of what they're criticizing).
Assuming I didn't make any mistakes in my deductions or decisions, optimal plan goes like this:
Give everyone a Cockatrice Eye (to get the most out of the associated rebate) and a Dragon Head (to dodge the taxing-you-twice-on-every-Head-after-the-first thing).
Give the mage and the rogue a Unicorn Horn and a Zombie Hand each, and give the cleric four Zombie Hands; this should get them all as close to the 30sp threshold as possible without wrecking anything else.
Give literally everything else to the fighter, allowing them to bear the entire 212sp cost; if they get mad about it, analogize it to being a meatshield in the financial world as well as the physical.
Thanks for your reply, and (re-)welcome to LW!
My conclusion is that I'm pretty sure you're wrong in ways that are fun and useful to discuss!
I hope so! Let's discuss.
(Jsyk you can spoiler possible spoilers on Desktop using ">!" at the start of paragraphs, in case you want to make sure no LWers are spoiled on the contents of a most-of-a-century-old play.)
Regarding the witnesses:
I agree - emphatically! - that eyewitness testimony is a lot less reliable than most people believe. I mostly only brought the witnesses up in my discussion because I thought the jury dismissed them for bad reasons, instead of a general policy of "eyewitnesses are unreliable". (In retrospect, I could have been a lot clearer on this point.)
Regarding the knife:
I agree that the knife being unique would have made things a lot more clear-cut, but disagree about the implications.
If no-one is deliberately trying to frame the accused, the odds of the real killer happening to use the same brand of knife as the one he favors are very low. (What fraction of knives* available to potential suspects is of that exact type? One in a hundred, maybe? If we assume no frame-up or suicide and start with your prior probability of 10%, then a naive Bayesian update with a factor of 100 moves that to >90% even without other evidence**.)
If he is actively being framed . . . that's not overwhelmingly implausible, since it's not a secret what kind of knife he uses, and the real killer would be highly motivated to shift blame. However, the idea that he'd have lost his knife, by coincidence, at the same time that someone was using an exact duplicate to frame him (and then couldn't find it afterwards, even though it would be decisive for his defense) . . . strains credulity. I'm less sure about how to quantify the possibility a real killer took his knife without him knowing, got into the victim's apartment, and performed the kill all while the accused was out at the movies; but I feel pretty confident the accused's knife was the murder weapon.
*I'm ignoring the effects of the murder weapon being a knife at all because they're surprisingly weak. The accused owns a knife and favors using it, but so would many alternative suspects; and the accused cohabiting with the victim implies he also has easy access to many alternative methods - poison, arranging an accident - that Hypothetical Killer X wouldn't.
**Full disclosure, I didn't actually perform the calculation until I started writing this post; I admit to being surprised by how little a factor of ~100 changes a ~10% prior probability, though I still feel it's a stronger effect than you're accounting for, and for that matter think your base rates are too low to start with (the fight wasn't just a fight, it was the culmination of years of persistent abuse).
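For anyone who wants to check the footnoted arithmetic, here's the odds-form update in a few lines of Python (the 10% prior and the factor-of-~100 likelihood ratio are the toy numbers from above, not claims about the actual case):

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
prior = 0.10                # toy prior probability of guilt
lr = 100                    # toy likelihood ratio from the knife match

prior_odds = prior / (1 - prior)          # 1/9
posterior_odds = prior_odds * lr          # ~11.1
posterior = posterior_odds / (1 + posterior_odds)

print(f"{posterior:.3f}")   # ~0.917, i.e. just over 90%
```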
Regarding my conspiracy theories:
I agree that the protagonist having ideological or personal reasons to make the case turn out this way is much more likely than him having been successfully bribed or threatened; aside from anything else, the accused doesn't seem terribly wealthy or well-connected.
I also agree with your analysis of the racist juror's emotional state as presented, though I continue to think it's slightly suspicious that things happened to break that conveniently (the Doylist explanation is of course that the director wanted the bigot to come off as weak and/or needed things to wrap up satisfyingly inside a two-hour runtime, but I'm an incorrigible Watsonian.)
One last, even more speculative thought:
Literally everything the racist juror does in the back half of the movie is weird and suspicious. It's strange that he expects people to be convinced by his bigoted tirade; it's also strangely convenient that he's willing to vote not guilty by the end even though he A) hasn't changed his mind and B) knows a hung jury would probably eventually lead to the death of the accused, which he wants.
I don't think it's likely, but I'd put maybe a ~1% probability on . . .
. . . him being in league with the protagonist, and them running a two-man con on the other ten jurors to get the unanimous verdict they want.
I recently watched (the 1997 movie version of) Twelve Angry Men, and found it fascinating from a Bayesian / confusion-noticing perspective.
My (spoilery) notes (cw death, suspicion, violence etc):
From all the above, I conclude:
The accused is very likely to have committed the murder.
and
The protagonist probably has some kind of agenda: he takes issue with capital punishment, knows the defendant personally, strongly dislikes the carceral justice system, is being bribed, or is trying to arrange an acquittal for a guilty party just to see if he can.
However
I still think a case can be made for the existence of reasonable doubt.
if and only if
You consider the possibility it was a suicide.
(trigger warning for detailed discussion of that thing I just mentioned)
If I knew for a fact the defendant was innocent, most of my probability mass would be on some variation of the following sequence of events.
This hypothesis makes sense of the paramedic's claim about the type of knife; makes sense of the silent evidence that neither the accused nor the corpse is mentioned as having any injuries aside from the single stab wound (a person comfortable with violence yells an explicit verbal warning at another person comfortable with violence, and then stabs him to death, but there's no sign of a struggle?); and is supported by base rates (suicide is significantly more common than homicide in first-world nations).
. . . to be clear, I'd still say murder is much more likely, but I consider the above possibility just possible enough to be conflicted about the reasonableness of reasonable doubt in this case.
I'm curious what other LW users think.
Can't believe I missed that; edited; ty!
True. But if things were opened up this way, realistically more than one person would want to get in on it. (Enough to cover an entire percentage point of the bid? I have no idea.)
. . . Is there a way a random punter could kick in, say, $100k towards Elon's bid? Either they end up spending $100k on shares valued at somewhere between $100k and $150k; or, more likely, they make the seizure of OpenAI $100k harder at no cost to themselves.
Reflections on my performance:
There's an interesting sense in which we all failed this one. Most other players used AI to help them accomplish tasks they'd personally picked out; I eschewed AI altogether and constructed my model with brute force and elbow grease; after reaching a perfect solution, I finally went back and used AI correctly, by describing the problem at a high level (manually/meatbrainedly distilled from my initial observations) and asking the machine demiurge what approach would make most sense[1]. From this I learned about the fascinating concept of Symbolic Regression and some associated Python libraries, which I eagerly anticipate using to (attempt to) steamroll similarly-shaped problems.
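(For anyone unfamiliar with the concept: symbolic regression searches a space of candidate expressions for one that fits the data, rather than fitting coefficients of a fixed functional form. Real libraries like PySR search enormous expression spaces with clever evolutionary methods; the following is just a toy brute-force sketch of the idea, with a deliberately tiny candidate space.)

```python
import operator

# Toy symbolic regression: enumerate tiny expressions of the form
# (x OP c) for small integer constants c, and return the first one
# that fits the (x, y) data exactly. Purely illustrative.
OPS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def candidates():
    for name, op in OPS.items():
        for c in range(-3, 4):
            yield f"x {name} {c}", lambda x, op=op, c=c: op(x, c)

def fit(xs, ys):
    for desc, f in candidates():
        if all(f(x) == y for x, y in zip(xs, ys)):
            return desc
    return None  # nothing in the (tiny) search space fits

print(fit([1, 2, 3], [3, 6, 9]))  # x * 3
```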
(There's a more mundane sense in which I specifically failed this one, since even after building a perfect input-output relation and recognizing the two best archetypes as rebatemaxxing and corpsemaxxing, I still somehow fell at the last hurdle and failed to get a (locally-)optimal corpsemaxxing solution; if the system had followed the original plan, I'd be down a silver coin and up a silver medal. Fortunately for my character's fortunes and fortune, Fortune chose to smile.)
Reflections on the challenge:
A straightforward scenario, but timed and executed flawlessly. In particular, I found the figuring-things-out gradient (admittedly decoupled from the actually-getting-a-good-answer gradient) blessedly smooth, starting with picking up on the zero-randomness premise[2] and ending with the fun twist that the optimal solution doesn't involve anything being taxed at the lowest rate[3].
I personally got a lot out of this one: for an evening's exacting but enjoyable efforts, I learned about an entire new form of model-building, about the utility and limits of modern AI, and about Banker's Rounding. I vote four-out-of-five for both Quality and Complexity . . . though I recognize that such puzzle-y low-variance games are liable to have higher variance in how they're received, and I might be towards the upper end of a bell curve here.
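(For anyone who hasn't met Banker's Rounding before: exact halves round to the nearest even number, which avoids the systematic upward bias of "round half up". Python's built-in round already behaves this way:)

```python
# Banker's Rounding (round-half-to-even): exact halves go to the
# nearest even integer, so repeated rounding doesn't bias upward.
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2  (not 3, as "round half up" would give)
print(round(3.5))  # 4
```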
For a lark, I also tried turning on all ChatGPT's free capabilities and telling it to solve the problem from scratch. It thought for ~30 seconds and then spat out a perfect solution; I spent ~30 further seconds with paperclips dancing before my eyes; I then discovered it hadn't even managed to download the dataset, and was instead applying the not-unreasonable heuristic "if abstractapplic and simon agree on an answer it's probably true".
There's something fun about how "magic", "games", "bureaucracy", and "magical game bureaucracy" are equally good justifications for a "wait, what paradigm am I even in here?" layer of difficulty.
I know that part wasn't intentional, but I think rebatemaxxing>corpsemaxxing is nontrivially more compelling than the other way round.