Just like your mind can only see the right-side-up stairs (w/ a blue wall closer to you) or the upside-down stairs (w/ a green wall closer to you), but never both of them at the same time.
If I unfocus my eyes, I can see double, with a different mode in each eye.
(Those two had, specifically, asked an automatic result-filtering algorithm to select that fleshling of the highest discernible intelligence class up to measurement noise, whose Internet traces suggested the greatest ability to quickly adapt to being seized by aliens without disabling emotional convulsions. And if this was, itself, an odd sort of request-filter by fleshling standards -- liable to produce strange and unexpected correlations to its oddness -- neither of those two aliens had any way to know that.)
My read was that natural human variation plus a few dozen bits of optimization was a sufficient explanation.
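(A rough back-of-the-envelope sketch, with my own illustrative numbers, of what "a few dozen bits" buys you: selecting the single most extreme person out of the whole population only takes about 33 bits.)

```python
import math

# Illustrative calculation: picking the single most extreme individual
# out of N candidates applies roughly log2(N) bits of selection pressure.
N = 8_000_000_000  # approximate current human population (assumption)
bits = math.log2(N)
print(f"Picking 1 person out of {N:,} ~ {bits:.1f} bits")  # ~32.9 bits
# So "a few dozen bits" of filtering is already enough to land on a
# one-in-humanity outlier along whatever trait the filter targets.
```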
But have you considered... pointing them all at a disco ball?
The question is whether restrictions on AI speech violate the First Amendment rights of users or developers.
I'm assuming this means restrictions on users/developers being legally allowed to repeat AI-generated text, rather than restrictions built into the AI on what text it is willing to generate.
Either I'm misunderstanding what you wrote, or you didn't mean to write what you did.
Suppose A is a human and B is a shrimp.
The value of adding B to a world where A exists is small.
The value of replacing B with A is large.
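(To spell out the arithmetic with made-up numbers of my own, under a simple additive model of value:)

```python
# Made-up illustrative values; only the relative sizes matter.
value_of_A = 100.0   # assumed value of a human life (A)
value_of_B = 0.01    # assumed value of a shrimp life (B)

# Adding B to a world that already contains A changes the total by value_of_B.
gain_from_adding_B = value_of_B                          # 0.01  -> small

# Swapping B out for A changes the total by the gap between the two values.
gain_from_replacing_B_with_A = value_of_A - value_of_B   # 99.99 -> large

print(gain_from_adding_B, gain_from_replacing_B_with_A)
```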
Could this be the result of a system prompt telling them that the CoT isn't exposed, similar to how they denied that events after their knowledge cutoff could have occurred?
You would also observe this after a failed attempt to turn the ratchet forward. If it turned less than a full notch, you would see it sliding back afterward.