While studying the provided 30-page thought-annotated sample, I thought about the <Yo be real> command a little more. In my opinion it should be applied in the training data somewhat differently than it currently is. Here are my thoughts:
In the sample, there are some places where the authors carefully constructed “AI nonsense” that matches what we regularly see in current-tech AI Dungeon prompts. The player then responds with “<Yo be real>” plus an explanation of what the AI did wrong.
(Obvious example: page 17 in this sample: https:/...)
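For concreteness, the pattern I'm describing looks roughly like this. This is a purely illustrative sketch in a made-up Python representation, not the project's actual run format; the field names and the exact wording are my own assumptions:

```python
# Hypothetical sketch of one "<Yo be real>" exchange in a thought-annotated run.
# The dict keys and the way the command is written are my assumptions, not the
# project's actual schema.

nonsense_step = {
    "thought": "(whatever the annotator writes here is the part I care about)",
    "prompt": "You pick up the sword. The sword is a dragon. You are the sword.",
}

player_correction = {
    "action": "<Yo be real> The sword can't suddenly be a dragon, and I can't "
              "be the sword. Keep the scene consistent with the previous prompt.",
}

print(nonsense_step["prompt"])
print(player_correction["action"])
```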
I find this project very interesting and have thought about it a lot over the last two weeks. As I understand it, the main goal of the project is the following:
providing us (AI researchers) with a model that has an additional output dimension (the "thoughts")
training the model in such a way that this new dimension is semantically linked directly to the primary output dimension (the "prompt")
especially linked through some kind of temporal causality ("early" thoughts producing the prompt), but not too close to the primary output (so that it contains semantic meaning that ca...
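To make this concrete, here is a minimal sketch of how I picture a single training example being serialized, with the thought tokens emitted before the prompt tokens, so that an autoregressive model has to produce its "thoughts" first and then condition on them when generating the prompt. The field names and delimiter tokens are my own assumptions, not the project's actual format:

```python
# Purely illustrative sketch of "thoughts as an extra output dimension":
# each step is flattened so the thought comes *before* the prompt, giving the
# temporal-causality link described above. Delimiters and names are made up.

from dataclasses import dataclass

@dataclass
class Step:
    player_action: str   # what the player typed
    dm_thought: str      # the annotated "thought" preceding the DM's prompt
    dm_prompt: str       # the prompt (story text) shown to the player

def serialize_for_training(step: Step) -> str:
    """Flatten one step into a single training sequence, thoughts first."""
    return (
        f"<action> {step.player_action} "
        f"<thought> {step.dm_thought} "   # emitted first, so it precedes...
        f"<prompt> {step.dm_prompt}"      # ...the prompt that it "causes"
    )

example = Step(
    player_action="I open the iron door.",
    dm_thought="The door is locked; the key is with the guard upstairs.",
    dm_prompt="The door doesn't budge. Something rattles inside the lock.",
)
print(serialize_for_training(example))
```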
I was "archiving" the link to this page and thought I'd see what's been going on. Updates seem to only be on the discord. Anyway, since they allowed me to post longer thoughts there, figured it would be fine for me to drop it here as well. https://sd-marlow.medium.com/slaying-the-ml-dragon-7ce0a2e4e3a6
From your post, you're looking at this in much the same way I was when I attempted a short run (to work the bugs out and really understand what's involved). However, "actual thoughts of the DM" is the wrong explanation for what they want. The examples of what they are accepting look to be nothing more than the "common sense" stuff current ML models fail to capture (and thus have to be explicitly stated in the runs). Also, from comments in the Discord, it seems like the info captured is post-process, despite the desire for pre-prompt thoughts. Not trying to discourage; just showing my thinking on the process, and that it wasn't what they wanted.