all the scaffold tools, system prompt, and whatnot add context for the LLM ... but what if I want to see what's in the context too?
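One way to get that visibility, as a minimal sketch: route every call through a single chokepoint that logs the full payload before it leaves. This assumes the OpenAI Python SDK; `chat_with_visible_context` is a name I made up, and intercepting at a proxy like mitmproxy would do the same job if the scaffold isn't yours to edit.

```python
import json

from openai import OpenAI

client = OpenAI()

def chat_with_visible_context(messages, model="gpt-4o-mini", **kwargs):
    """Print the exact payload before it leaves, so the human sees the context too."""
    print(json.dumps({"model": model, "messages": messages}, indent=2))
    return client.chat.completions.create(model=model, messages=messages, **kwargs)

# every scaffold/tool layer that builds `messages` goes through the wrapper,
# so nothing reaches the model without also reaching my terminal
reply = chat_with_visible_context([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What context do you actually see?"},
])
print(reply.choices[0].message.content)
```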
Pushing writing ideas to external memory for my less-burned-out future self:
- agent foundations need a path-dependent notion of rationality
- alignment is a capability
- in a universe with infinitely many Everett branches, I was born in the subset that wasn't destroyed by nuclear winter during the Cold War, no matter how unlikely it was that humanity didn't destroy itself (humanity could have done so in most worlds, and I wasn't born in those; I live in the one where Petrov heard the Geiger counter beep in some particular pattern that made him more suspicious, or something... something something anthropic principle, one-line version below)
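A compact way to state that selection effect (my formalization, not in the original note): however small the prior survival probability of a branch, conditioning on the observer's existence makes the observed branch a surviving one with certainty.

```latex
% observer selection in one line: even as the prior p -> 0,
% the posterior given an observer stays at 1
P(\text{observed branch survived} \mid \text{observer exists}) = 1,
\qquad \text{even as } P(\text{branch survives}) = p \to 0 .
```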