And of course there is a correlation between math knowledge and success at tests. The problem is that at the extremes the tails come apart. Kids who are better at multiplication will be better at the tests. But the kids who score maximum on the tests and the kids who win math olympiads are probably two distinct groups. I am saying this as a former kid who did well at math olympiads but often got B's in high school because of some stupid numeric mistake.
This is culture-dependent; e.g., in some parts of Russian grading culture, making a numeric mistake might be counted similarly to other gaps in the reasoning. An attitude of "a mistake is a mistake" is quite prevalent in grading for university admissions (in at least one of the few places where such exams remain, supplementing the standardized tests). I am unsure how prevalent that was in olympiad grading; I was never competitive there (although I did encounter this outlook a bit in my small stints of grading olympiad tasks).
[There is usually a bit of leniency if the miscalculation happened to be among the last few calculations. Otherwise, you are out of luck.]
In this culture poor calculation skills are punished, and you basically have to keep yours above a certain level in order to stay competitive. Another example of this phenomenon is competitive/olympiad programming: grading is done automatically, so your solution has to pass the tests. It does not matter that you understand the idea (which is most of the time the difficult part); the implementation has to be correct.
[Depending on the competition: some give partial credit for passing parts of the test set. Reiterating: this is a culture some popular competitions subscribe to, and one I've had quite a bit of exposure to.]
Sidetrack: a counter-approach to "a mistake is a mistake" is "it doesn't matter how you got the answer if it is correct". This is difficult to implement in grading because it makes cheating significantly easier, but it has led to some of the most fun solutions I've had to math/programming problems. I vehemently despise "you have to complete the task in exactly this way" problems.
The post I commented on is about a justification of induction (unless I have committed some egregious misreading, which is a surprisingly common error mode of mine; feel free to correct me on this part). It seemed natural to me to respond by linking the strongest justification I know, although again, I might have misread myself into understanding this differently from what was actually written.
[This is basically the extent to which I mean that the question is resolved; I am conceding on ~everything else.]
I find the question of "how and when can you apply induction" vastly more interesting than the "why does it work" question. I am more of a "this is a weird trick which works sometimes; how and when does it apply?" kind of guy.
Bayesianism is probably the strongest argument for the "it works" part I can provide: here are the rules you can use to predict future events. Easily falsifiable by applying the rules, making predictions, and observing the outcomes. All wrapped up in an elegant axiomatic framework.
[The answer is probabilistic because the nature of the problem is probabilistic (unless you possess complete information about the universe, which incidentally makes induction redundant).]
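To make "apply the rules, predict, observe" concrete, here is a minimal sketch of that loop, assuming the simplest possible setup (a Beta(1, 1) prior over a binary event, i.e. Laplace's rule of succession); the event stream and the choice of prior are my placeholders, not anything from the thread.

```python
from random import random

# Minimal predict-then-observe loop: a Beta(1, 1) prior over the chance of
# a binary event, updated after each observation.  The predictive rule is
# Laplace's rule of succession: P(next = 1) = (successes + 1) / (trials + 2).

TRUE_RATE = 0.7           # hidden regularity the model is trying to induce
successes = trials = 0
brier_total = 0.0         # squared error of each prediction (a proper score)

for _ in range(1000):
    prediction = (successes + 1) / (trials + 2)  # commit to a prediction
    outcome = random() < TRUE_RATE               # then observe the world
    brier_total += (prediction - outcome) ** 2   # score the prediction
    successes += outcome
    trials += 1

print(f"final predictive probability: {(successes + 1) / (trials + 2):.3f}")
print(f"mean Brier score: {brier_total / trials:.3f}")
# A model whose predictions systematically diverged from observed outcomes
# would rack up a visibly worse score; that is the falsifiability.
```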
I might have an unusual preference here, but I find the "why" question uninteresting.
It's fundamentally non-exploitable, in the sense that I do not see any advantage to be gained from knowing the answer (nor a straightforward empirical way of finding out which of the variants I should pay attention to).
Bayesian probability theory fully answers this question from a philosophical point of view, and answers a lot of it from a practical point of view (doing calculations on probability distributions is computationally intensive and can get intractable pretty quickly, so it's not a magic bullet in practice).
It extends logic to be able to uniformly handle both probabilistic statements and statements made with complete certainty. I recommend Jaynes's "Probability Theory: The Logic of Science" as a good guide to the subject in case you are interested.
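As a small illustration of the "extended logic" point (my example, not the commenter's): suppose $A \Rightarrow B$, i.e. $P(B \mid A) = 1$. Bayes' theorem then gives

\[
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} = \frac{P(A)}{P(B)} \ge P(A),
\]

so observing $B$ can only raise the plausibility of $A$ (plausible reasoning), while in the certainty limit $P(A) = 1$ we get $P(B) \ge P(B \mid A)\,P(A) = 1$, i.e. modus ponens falls out as the special case where all probabilities are 0 or 1.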
Oh, so human diseases in the form of bacteria/viruses! And humans working on gain-of-function research.
Reminds me of a discussion I had recently about whether humans solve complex systems of [mechanical] differential equations while moving. The counter-argument was: "do you think that a mercury thermometer solves differential equations [while 'calculating' the temperature]?"
This one is a classic, so I can just copy-paste the solution from Google. The more interesting point is that this is one of those cases where math doesn't correspond to reality.
In the spirit of "trying to offer concrete models and predictions" I propose a challenge: write a bot which consistently beats my Rob implementation over the long run, by enough of a margin that it shows up in numerical experiments. I need some time to work on implementing it (and might disappear for a while, in which case consider this one forfeited by me).
One rule I propose: neither of us is allowed to use details of the other's implementation, in order to uphold the spirit of the original task.
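For concreteness, here is roughly the kind of harness I'd use for "show on numerical experiments". Everything here is a hypothetical placeholder (the payoff function, the move/history interface), since the actual game spec lives in the original post; this is a sketch of the evaluation, not of either bot.

```python
import statistics

def payoff(move_a, move_b):
    """Placeholder scoring from bot A's perspective; swap in the
    actual rules of the game from the original post."""
    return 1 if move_a != move_b else 0

def run_match(bot_a, bot_b, rounds=10_000):
    """Play two bots against each other. Each bot is a callable that
    sees only the opponent's past moves, never its implementation."""
    history_a, history_b, payoffs = [], [], []
    for _ in range(rounds):
        move_a, move_b = bot_a(history_b), bot_b(history_a)
        history_a.append(move_a)
        history_b.append(move_b)
        payoffs.append(payoff(move_a, move_b))
    return payoffs

def consistently_better(payoffs, baseline=0.5):
    """Crude check that an edge is real rather than a lucky streak:
    the mean payoff must clear the baseline by several standard errors."""
    mean = statistics.mean(payoffs)
    stderr = statistics.stdev(payoffs) / len(payoffs) ** 0.5
    return mean - baseline > 3 * stderr
```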
It does work for negative bases. Representation of a number in any base is in essence a sum of base powers multiplied by coefficients. The geometric series just has all coefficients equal to 1 after the radix point (and a 1 before it, if we start addition from the 0th power).
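Spelling the claim out (my arithmetic, not part of the parent comment): the all-ones expansion is just a geometric series, and the summation formula only requires $|b| > 1$, so nothing breaks for negative bases:

\[
1.111\ldots_{\,b} \;=\; \sum_{k=0}^{\infty} b^{-k} \;=\; \frac{1}{1 - b^{-1}} \;=\; \frac{b}{b-1}, \qquad |b| > 1.
\]

For example, in base $b = -2$ the sum is $\frac{-2}{-3} = \frac{2}{3}$; the partial sums just oscillate toward the limit instead of approaching it from below.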
I think your list of conditions is very restrictive, to the point at which it's really difficult to find anything matching it.
Most (all?) modern difficult strategy games rely on some version of "game knowledge" as part of their difficulty, expecting you to experiment with different approaches to find out what works best. This is a core part of the game loop, something specifically designed into the game to make it more fun; it is baked into the design at a fundamental level and is extremely difficult to separate out.

Combine that with the one-shot nature and a strict time limit (so strict that in many cases it isn't even enough to start making meaningful strategic decisions, either because learning all the systems takes longer or because you need to progress far enough through the game to become aware of the later payoffs), and you are basically rolling a die on whether you land on a solution via some combination of prior gaming knowledge and luck. I don't think you can meaningfully "try harder" to guess the decisions the game designers have made about the systems and obstacles you haven't seen yet (besides playing more games beforehand, of course, collecting game-design trivia along the way).
Yes, you can bias the roll in your favor with genre savviness and experience; one of my friends has an absolutely uncanny ability to pick excellent builds in games with no prior information. He has also played games for an inordinate amount of time in order to build that intuition up.