This might be downstream of a deliberate decision by designers.
An LLM has been trained on data through February 2025.
A user asks it a question in June 2025 about 'what happened in May?'
How should the LLM respond?
(I wish to register that I didn't miss this scenario, and intend to get around to playing it this weekend...it's just that you made the awful indie-game blunder of releasing your game the day after Silksong came out, and my free time has been somewhat spoken for over the past few days).
In some other world somewhere, the foremost Confucian scholars are debating how to endow their AI with filial piety.
You can be a moderate by believing only moderate things. Or you can be a moderate by adopting moderate strategies. These are not necessarily the same thing.
This piece seems to be mostly advocating for the benefits of moderate strategies.
Your reply seems to mostly be criticizing moderate beliefs.
(My political beliefs are a ridiculous assortment of things, many of them outside the Overton window. If someone tells me their political beliefs are all moderate, I suspect them of being a sheep.
But my political strategies are moderate: I have voted for various parties' candidates at various times, depending on who seems worse lately. This seems...strategically correct to me?)
If you ever do it, please be sure to try to confuse archaeologists as much as possible. Find some cave, leave all your flint tools there, and carve images of space aliens onto the wall.
This might be a cultural/region-based thing. Stop by a bar in Alabama, or even just somewhere rural, and I think there might be more use of bars for matchmaking.
Here is a list of numbers. Which two of these numbers are closest together?
815
187
733
812
142
312
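
(For what it's worth, the intended answer can be checked mechanically. Here is a minimal Python sketch, not part of the original question, that finds the closest pair by sorting and comparing neighbours.)

```python
# Minimal sketch: sort the list, then the closest pair must be adjacent,
# so take the adjacent pair with the smallest difference.
nums = [815, 187, 733, 812, 142, 312]

s = sorted(nums)
closest = min(zip(s, s[1:]), key=lambda pair: pair[1] - pair[0])
print(closest)  # (812, 815), which differ by 3
```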
I think the obvious approach is comparably neat until you get to the point of proving that k=2 won't work, at which point it's a mess. The Google approach manages to prove that part in a much nicer way as a side effect of its general result.
I looked at the Q1/4/5 answers[1]. I think they would indeed most likely all get 7s: there's quite a bit of verbosity, and in particular OpenAI's Q4 answer spends a lot of time talking its way around in circles, but I believe there's a valid proof in all of them.
Most interesting is Q1, where OpenAI produces what I think is a very human answer (the same approach I took, and the one I'd expect most human solvers to take) while Google takes a less intuitive approach but one that ends up much neater. This makes me a little bit suspicious about whether some functionally-identical problem showed up somewhere in Google's training, but if it didn't that is extra impressive.
IMO Q3 and Q6 are generally much harder: the AI didn't solve Q6, and I haven't gone through the Q3 answers. Q2 was a geometry one, which is weirder to look through and which I find very unpleasant.
To the extent that these things are problems, they are both problems today. There are insular Amish communities that shut out as much modern culture as they can, and hikikomori living alone with their body pillows.
AI may exacerbate the existing issues, but on the whole I don't feel like the world is drastically worsened by the presence of these groups.