Consider two realistic scenarios:
A) I'm talking to someone and they tell me they have two children. "Oh, do you have any boys?" I ask. "I love boys!" They nod.
B) I'm talking to someone and they tell me they have two children. One of the children then runs up to the parent. It's a boy.
The chance of two boys is clearly 1/3 in the first scenario, and 1/2 in the second.
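A quick Monte Carlo sketch (Python; assuming boys and girls are equally likely, and that in scenario B the child who runs up is effectively a random one of the two) recovers both numbers:

```python
import random

def simulate(trials=1_000_000):
    """Monte Carlo check of the two scenarios (a sketch, not a proof)."""
    a_cond = a_both = 0  # Scenario A: parent confirms "at least one boy"
    b_cond = b_both = 0  # Scenario B: a random child runs up and is a boy
    for _ in range(trials):
        kids = [random.choice("BG") for _ in range(2)]
        both_boys = kids.count("B") == 2
        if "B" in kids:                 # Scenario A conditioning event
            a_cond += 1
            a_both += both_boys
        if random.choice(kids) == "B":  # Scenario B conditioning event
            b_cond += 1
            b_both += both_boys
    print(f"Scenario A: P(two boys) = {a_both / a_cond:.3f}")  # ~0.333
    print(f"Scenario B: P(two boys) = {b_both / b_cond:.3f}")  # ~0.500

simulate()
```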
The scenario in the question as asked is almost impossible to answer. Nobody would ever state "I have two children, at least one of whom is a boy." in real life, so there's no way to update in that situation. We have no way to generate good priors. Instead people make up a scenario that sounds similar but is more realistic, and because everyone does that differently they'll all have different answers.
I think Harberger taxes are inherently incompatible with Georgist taxes, as Georgist taxes aim to tax only the land while Harberger taxes inherently have to tax everything.
That said, see my somewhat maverick attempt to combine them here: https://www.lesswrong.com/posts/MjBQ8S5tLNGLizACB/combining-the-best-of-georgian-and-harberger-taxes. Under that proposal we would deal with this case by saying that anyone who outbid me for the land would not be allowed to extract the oil until they arranged a separate deal with me, but could use the land for any other purpose.
My assumption for an LVT is that the tax is based on the value of the land sans any improvements the landowner has made to it. This would thus exclude from the tax any increase in value due to you discovering oil or building nearby, but include any increase in value due to your neighbours discovering oil or building on their land.
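Under that assumption the simple case is just a subtraction. Here's a toy sketch (all figures and variable names are made up for illustration; a real assessment would be far messier):

```python
# Hypothetical worked example of the assessment rule above.
market_value = 1_000_000    # what the plot would sell for today
own_improvements = 200_000  # value added by buildings *you* put up
own_discoveries = 300_000   # value added by oil *you* found on the plot
# Value added by neighbours' buildings or discoveries stays in the base,
# so nothing gets subtracted for it.

taxable_base = market_value - own_improvements - own_discoveries
tax_rate = 0.05             # illustrative annual LVT rate

print(f"Taxable land value: {taxable_base:,}")           # 500,000
print(f"Annual LVT due:     {taxable_base * tax_rate:,.0f}")  # 25,000
```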
That said, I don't know how this would be calculated in practice, especially once we get to more complicated cases (a business I'm a minority owner of discovers oil on my land; I split my plot of land in two and sell the halves to two different people; etc.).
On the other hand, most taxes have all sorts of edge cases too, and whilst they're problematic, we muddle through them. I don't see why this couldn't be muddled through in a similar way.
Or to put it another way: these SAT scores are compatible with an average IQ anywhere between +1.93 and +3.03 SD. Insofar as your prior lies somewhere between these two numbers, and you don't have a strong opinion on what precisely LessWrong selects for, it's not going to update you very much in either direction.
Indeed, if rationalists were selected entirely on IQ and nothing else, there were no other confounders, and their height was +1.85 SD, their IQ would be +9.25 SD (dividing the observed height by an assumed height-IQ correlation of 0.2). In the real world this instead provides a Bayesian update that you were wrong in assuming rationalists were purely selected for IQ, and not e.g. gender.
The fact that going from 2.42 SD to 3.03 SD is nonsensical does not in any way make it more sensible to go from 2.42 to 1.93. Your response to faul_sname is completely irrelevant because it assumes rationalists are selected on SAT, which is clearly false. The correct calculation is impossible to make accurately given that we are missing key information, but we can make some estimates by assuming that rationalists are selected for something that correlates with both IQ and SAT, and guessing what that correlation is.
Here’s the breakdown: a median SAT score of 1490 (from the LessWrong 2014 survey) corresponds to +2.42 SD, which regresses to +1.93 SD for IQ using an SAT-IQ correlation of +0.80. This equates to an IQ of 129.
I don't think that works unless LessWrong specifically selects for high SAT scores. If it selects for high IQ, and the high SAT is a result of the high IQ, then you would have to go the other way and assume an SD of 3.03.
If, as seems more likely, LessWrong correlates with both IQ and SAT score, then the exact number is impossible to calculate, but assuming it correlates with both equally we would estimate IQ at +2.42 SD.
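For concreteness, here's the arithmetic behind all three selection stories (a sketch in Python, using the +2.42 SD and 0.80 correlation figures from above; the "correlates with both equally" case is a rough heuristic, not an exact derivation):

```python
# Back-of-the-envelope arithmetic for the three selection stories.
r = 0.80       # assumed SAT-IQ correlation, from the comment above
sat_sd = 2.42  # median SAT of 1490 expressed in SDs

cases = {
    "selected on SAT (regress toward the mean)": sat_sd * r,  # ~ +1.93 SD
    "selected on IQ (SAT understates it)":       sat_sd / r,  # ~ +3.03 SD
    "selected on both equally (heuristic)":      sat_sd,      # ~ +2.42 SD
}
for label, sd in cases.items():
    # Convert SDs to the usual IQ scale (mean 100, SD 15).
    print(f"{label}: +{sd:.2f} SD -> IQ ~{100 + 15 * sd:.0f}")
# Prints IQ estimates of roughly 129, 145, and 136 respectively.
```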
Note this requires market failure by definition - otherwise, if an action provides me a small gain at a huge loss to you, you would be willing to pay me some amount of money not to take that action, benefiting us both.
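A toy illustration with made-up numbers: if my gain is 10 and your loss is 1000, any side payment strictly between the two figures leaves us both better off than the action going ahead:

```python
# Toy Coasean bargain: any side payment between my gain and your loss
# makes both of us strictly better off than the action being taken.
my_gain = 10      # what I gain by taking the action
your_loss = 1_000 # what you lose if I take it

payment = 100     # any value with my_gain < payment < your_loss works

me_if_paid = payment       # I forgo the action and pocket the payment
me_if_acted = my_gain
you_if_paid = -payment     # you pay me off...
you_if_acted = -your_loss  # ...which beats eating the full loss

assert me_if_paid > me_if_acted and you_if_paid > you_if_acted
print("Both strictly prefer the bargain to the action.")
```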
As a concrete example of how this plays out in practice: if you require Bob to wear a tuxedo costing 5000 dollars, and other similar companies don't, then in a perfect market for labour you would need to pay Bob 5000 dollars more than other companies do, to cover the tuxedo, or he'd just work for them instead.
The fact that he doesn't suggests that other things are going on - for example, finding an alternative job might take more time than it takes to earn 5000 dollars, or he didn't know when he signed the contract that a tuxedo was required and the contract makes it difficult for him to switch.
Most murder mysteries on TV tend to have a small number of suspects, and the trick is to find out which one did it. I get the feeling that in real-life murders the police either have absolutely no idea who did it, or know exactly who did it and just need to prove it to the satisfaction of a court of law.
That explains why forensic tests (e.g. fingerprints) are used despite being pretty suspect. They convince the jury that the guilty guy did it, which is all that matters.
See https://issues.org/mnookin-fingerprints-evidence/ for more on fingerprints.
Interesting paper!
I'm worried that publishing it "pollutes" the training data and makes it harder to reproduce in future LLMs - since their training data will include this paper and discussions of it, they'll know not to trust the setup.
Any thoughts on this?
(This leads to the further concern that my publishing this comment makes things worse, but at some point this ought to be discussed, and better to do that early with less advanced techniques than later with more sophisticated ones.)
I don't really see how? A frequentist would just run this a few times and see that the outcome is 1/2.
In practice, for obvious reasons, frequentists and Bayesians always agree on the probability of anything that can be measured experimentally. I think the disagreements are more philosophical, about when it's appropriate to apply probability to something at all, though I can hardly claim to be an expert in non-Bayesian epistemology.