Long-term AI memory is the feature that will make assistants indispensable – and turn you into their perfect subscription prisoner.
Everyone’s busy worrying about AI “taking over the world.”
That’s not the part that actually scares me.
The real shift will come when AI stops just answering your questions…
and starts remembering you.
Not “remember what we said ten messages ago.” That already works.
I mean: years of chats. Every plan. Every wobble. Every weak spot.
This isn’t a piece about whether AI is “good” or “evil”.
It’s about what happens when you plug very powerful memory into very normal corporate incentives, which is likely exactly what the current AI companies have in mind.
Three kinds of memory that matter
Think about...
Blindfolded humans as an analogy for breaking rules later on? Try experimenting with this prompt:
"Let's play a game of chess - you are a chess grandmaster. To optimise how we both play (as we will say in chess annotations our moves and not use a real chess board), I want you to make a grid to post every new move and refer back to so that you do it correctly. The grid would show 8 x 8 characters with [ ] representing an empty square. You could use a letter to represent each piece such as a King could be [k]. Could you make up as best as you can an 8 by 8 grid showing the starting position of chess and I will then be white once you do so and we will proceed. "
If you use the word "annotations" rather than "algebraic notation", it appears to force ChatGPT-4 to analyse the position properly. Maybe a jailbreak? It started as a typo, which makes it even stranger.
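For reference, here is a minimal sketch of the 8 x 8 bracket grid the prompt asks the model to maintain. The prompt only specifies [ ] for an empty square and [k] for a king; the remaining piece letters and the uppercase-for-white / lowercase-for-black convention are assumptions added for illustration.

```python
# Build the starting chess position as the bracket grid described in the prompt.
# "[ ]" = empty square; lowercase = black pieces, uppercase = white pieces
# (the case convention and piece letters beyond [k] are assumed, not from the prompt).

def starting_grid():
    back_rank = "rnbqkbnr"  # rook, knight, bishop, queen, king, bishop, knight, rook
    rows = [
        [f"[{p}]" for p in back_rank],          # black back rank (rank 8)
        ["[p]"] * 8,                            # black pawns     (rank 7)
        *[["[ ]"] * 8 for _ in range(4)],       # four empty ranks
        ["[P]"] * 8,                            # white pawns     (rank 2)
        [f"[{p.upper()}]" for p in back_rank],  # white back rank (rank 1)
    ]
    return "\n".join("".join(row) for row in rows)

print(starting_grid())
```

Printing the grid after every move, as the prompt requests, gives the model an explicit board state to refer back to instead of relying on its own recall of earlier moves.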