You might want to rethink the definition of "hardest problem", because I wouldn't expect the IMO committee to care much about whether their problems are hard for machines as well as for humans.
For example, look at the 2005 IMO, but suppose that problem 6 had been geometry, so that problem 3 was the "hardest". Then you're in trouble: problem 3 has a one-line proof, "by sum_of_squares", assuming that someone has written a sum-of-squares tactic in time and that the sum-of-squares certificate is small enough to find within the time limit.
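For concreteness, here is roughly how the statement would look in Lean 4 / Mathlib notation. The proof is left as `sorry` because the `sum_of_squares` tactic is, as noted above, hypothetical:

```lean
import Mathlib

-- IMO 2005 Problem 3, stated in Lean 4 / Mathlib style.
-- The hoped-for one-line proof would be `by sum_of_squares`;
-- no such tactic exists in Mathlib, so the proof is a `sorry`.
theorem imo2005_q3 (x y z : ℝ) (hx : 0 < x) (hy : 0 < y) (hz : 0 < z)
    (hxyz : 1 ≤ x * y * z) :
    0 ≤ (x ^ 5 - x ^ 2) / (x ^ 5 + y ^ 2 + z ^ 2)
      + (y ^ 5 - y ^ 2) / (y ^ 5 + z ^ 2 + x ^ 2)
      + (z ^ 5 - z ^ 2) / (z ^ 5 + x ^ 2 + y ^ 2) := by
  sorry -- hypothetically: `sum_of_squares`
```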
Alternatively, the IMO committee might set a problem without realizing that it's close to a special case of something already in mathlib.
As defined, I would put higher probability on solving the "hardest problem" than on getting a gold medal, and would bet on that.
Data from NSW, Australia: Delta was stable at 150-300 cases/day, so the recent spike is likely Omicron. 93% of people aged 16 or over are double-vaccinated. https://covidlive.com.au/report/daily-cases/nsw
Re. the NBA study: if I go and get my antibody levels tested (which is easy here in Australia), do we know what counts as a high result?
Thank you very much for producing these. As someone who's rather time-poor but trying to become more informed, I find them very helpful.
Here is some alternative code for building an HN clone: https://github.com/jcs/lobsters (see https://lobste.rs/about for differences from HN).
It's also one night before the full moon (4:50am on June 15), which should make the sky quite bright.
On a related note, consider what the moon looks like one night before it's full. Would you describe this as "over three-quarters full"? While that's technically correct, I wouldn't. I'd maybe describe a June 11-12 moon as "over three-quarters full", but I'd say a June 13-14 moon is "almost full". So we should up the probability that we're in a story/simulation/mirror.
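A back-of-the-envelope sketch of this (my own approximation: the lit fraction is (1 + cos θ)/2, with the phase angle θ growing linearly over a mean synodic month, ignoring orbital eccentricity):

```python
import math

SYNODIC_MONTH = 29.53  # mean synodic month in days

def illuminated_fraction(days_before_full: float) -> float:
    """Approximate fraction of the lunar disc that is lit,
    treating the phase angle as growing linearly with time."""
    phase_angle = 2 * math.pi * days_before_full / SYNODIC_MONTH
    return (1 + math.cos(phase_angle)) / 2

for d in range(6):
    print(f"{d} day(s) before full: {illuminated_fraction(d):.0%}")
```

On this approximation the disc is still ~99% lit one night before full, and only drops to ~74% about five nights before full, which fits calling a June 11-12 moon "over three-quarters full" and a June 13-14 moon "almost full".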
Observation: If the purpose of this exercise is to run an AI box experiment, with EY as gatekeeper and the internet hivemind as the AI, then the ability to speak in parseltongue is problematic: it appears to make the game easier for the AI, thereby preventing the results from generalizing to a standard AI box experiment.
So why did Eliezer include the parseltongue constraint?
Maybe parseltongue is meant to introduce the concept of provability in a way that everyone can understand. To speak in parseltongue in real life, you just speak in logic statements and supply a proof with any statement you make. It seems reasonable (modulo computational complexity and provability concerns) for an AI to be able and/or required to supply proofs in an AI box experiment, and parseltongue enables that in the version of the game in the story.
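As a toy illustration, proof assistants already enforce this norm: a claim is only accepted together with a machine-checkable proof. A minimal Lean sketch (the name `utterance` is just for illustration):

```lean
-- A parseltongue utterance in miniature: the claim and its proof travel
-- together, and the kernel rejects the message unless the proof really
-- establishes the claim.
theorem utterance : ∀ n : Nat, n ≤ n + 1 :=
  fun n => Nat.le_succ n
```

If the proof term failed to establish the stated proposition, the kernel would reject the whole message, which is exactly the guarantee parseltongue provides.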
I don't understand the constraint to speak only in parseltongue. Is that there to force us to focus on a solution set that is somehow of interest for friendly AI research?
Here's a flawed solution, but maybe someone can fix it.
Harry performs partial transfiguration on his brain, to transform it into a state where he thinks that he's booby-trapped the universe (for example, by transfiguring some strangelets along with a confinement field that will expire before the strangelets do). Then he just explains honestly to Voldemort why the universe will end if he dies.
It can't get through metal, but this machine can detect fentanyl inside packages: