Epistemic status: an obvious parody.
— You won't believe me. I've found them.
— Whom?
— Remember that famous discovery by Professor Prgh'zhyne: pockets of baryonic matter in open systems that minimize entropy production within themselves? The professor went further and claimed that goal-oriented systems could emerge within those pockets. A crazy idea, but... it seems I've found them near this yellow dwarf!
— You're kidding. We know that a good optimizer of outcomes over systems' states should have a model of the system inside of itself. We have entire computable universes within ourselves and still barely make sense of this chaos. How can they fit valuable knowledge inside tiny sequences of 10^23 atoms?
— They repeat patterns of behavior. They keep multiple encodings of those patterns and change them slightly over time, in a simple mechanistic way, in response to environmental changes.
— But that generalizes horribly!
— Indeed. When a pattern interacts with a new aspect of the environment, it degrades with high probability. Their first mechanism for generating patterns was basically "throw a bunch of random numbers into the environment, keep the ones that survive, change them slightly, repeat".
— ...
— Yeah, it's horrible from their perspective, I think.
— How do they exist without an agent-environment boundary? I'd be pretty worried if some piece of baryonic matter could smash into my thoughts at any moment.
— They kind of pretend they have an agent-environment boundary, using lipid layers.
— Those "lipid layers" have such strong bonds that they don't let any piece of matter inside? That's impressive!
— No, I was serious about them pretending. They need to pass matter through themselves; they're open systems and can't survive without external sources of free energy. They usually have specialized members of their population, an "immune system", that checks for alien patterns.
— Like we check for signatures of malign hypotheses in the universal prior?
— No, there's not enough computing power. They just memorize a bazillion meaningless patterns, and the immune system kills everyone who can't recite them.
— WHAT? But what if the patterns are corrupted, as happens in the world of baryonic matter?
— You can guess: if your memory of the patterns is corrupted, you're dead.
— What if the immune system's reference pattern gets corrupted?
— Then the immune system starts to kill indiscriminately.
— Okay, I'm depressed now. But what should we do with them? Could they become dangerous?
— ...I don't really think so? If all baryonic matter were converted into something like the most complex members of their population, that might be worrying. But there's no way they can get here on their own. See, they become less agentic as they organize into complex structures; too much agency destroys them. They have to pick off their most active members.
— Well, that's still icky. Remember that famous example — the Giant Look-Up Policy Table generated from an evaporating black hole? Would we consider it agentic if it displayed seemingly agentic behavior?
— Heh, obviously not. Agents like us exist for ontological reasons—if we want to exist, we rearrange realityfluid in a way that makes us more encounterable in the multiverse. If something is not created by agency, it's not agentic.