I have not done any work directly on it. The LLMs have kept improving so rapidly since then, especially at coding, that it has not seemed like a good idea to work on it.
Instead, I've been thinking more about how to use LLMs for creative writing or personalization (cf. my Dwarkesh Patel interview, "You should write more online"). To review the past year or two of my writings:
So for example, my meta-learning LLM interviewing proposal is about how to teach a LLM to ask you useful questions about your psychology so it can better understand & personalize (based on my observations that LLMs can now plan interviews by thinking about possible responses and selecting interesting questions, as a variant of my earlier "creativity meta-prompt" idea/hierarchical longform training); "Quantifying Truesight With SAEs" is an offline version about distilling down 'authors' to allow examination and imitation. And my draft theory of mathematicians essay is about the meta-RL view of math research, which suggests that 'taste' reduces to relatively few parameters, learned blackbox-style as a bilevel optimization problem, and that this may be how we can create 'LLM creative communities' (eg. to extract out small sets of prompts/parameters which all run on a 'single' LLM for feedback as personas, or to guide deep search on a prompt).
My "Manual of Style" is an experiment in whether you can iteratively, by asking a LLM to read your writings, extract out an explicit manual of style about how to 'write like you'
It includes a new denoising/backtranslation prompt-engineering trick I am currently calling "anti-examples" where you have the LLM make editing suggestions (which turn it into ChatGPTese) and then you reverse that to fix the chatbot prior*.
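To make the trick concrete, here's a minimal sketch of the loop, assuming a hypothetical `llm()` completion helper (none of these names are a real API):

```python
# Hypothetical sketch of the "anti-examples" denoising trick: solicit edits
# that would regress the prose toward ChatGPTese, then invert them.

def llm(prompt: str) -> str:
    """Placeholder for any LLM completion call (eg. an API client)."""
    raise NotImplementedError

def anti_example_rewrite(text: str) -> str:
    # Step 1: ask for conventional "improvements"; these reliably pull the
    # prose toward the chatbot house style, so they serve as anti-examples.
    suggestions = llm(f"Suggest 10 edits to improve this passage:\n\n{text}")
    # Step 2: reverse them - explain why each is bad, then do the opposite,
    # which cancels out the chatbot prior instead of reinforcing it.
    return llm(
        "Below are edit suggestions for a passage. Treat each as an "
        "ANTI-EXAMPLE: it would make the passage worse by pushing it toward "
        "generic chatbot prose. For each, explain why it is bad and what that "
        "implies about the passage's actual style; then rewrite the passage "
        "doing the opposite of the suggestions.\n\n"
        f"Passage:\n{text}\n\nSuggestions:\n{suggestions}"
    )
```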
So given how gargantuan context windows have become, and the existence of prompt caching, I think one may be able to write a general writing prompt which includes a full MoS, a lot of anti-examples for several domains, some sample Q&As (optimized for information gain), and instructions for how to systematically generate ideas - and start getting a truly powerful chatbot assistant persona with the scaled-up base models like GPT-5 which should start landing this year.
"Virtual comments" is another stab at thinking about how 'LLM writing support' can work, as well as reinventing the idea of 'seriation', and better semantic search via tree-shaped embeddings for both LLM & human writers (and the failed experiment with E-positive).
"Towards better RSS feeds" is about an alternative to Nenex commands: can you reframe writing as a sequence of atomic snippets which the LLM rewrites at various levels of abstraction/detail, which enables reading at those same levels, rather than locking people into a single level of detail, which inevitably suits few?
"October The First Is Too Late", "Bell, Crow, Moon: 11 Poetic Variations", "Area Man Outraged AI Has Not Solved Everything Yet", "Human Cannibalism Alignment Chart"/"Hacking Pinball High Scores", "Parliament of Rag & Bone", "A Christmas Protestation", "Second Life Sentences", "On the Impossibility of Superintelligent Rubik’s Cube Solvers" were tests of how useful the LLMs are for iterative variation and selection using a 'brainstorm' generate-rank-select prompt and/or for hierarchical generation; they finally seem at the point where you can curate good stuff out of them and are genuinely starting to become useful for my nonfiction essays like "'you could have invented Transformers' tutorial"/"Cats As Horror Movie Villains"/typesetting HTML fractions/Rock-Paper-Scissors optimality (and demonstrate my views on acceptable use of generative media).
"Adding Bits Beats AI Slop" is about my observations about how this kind of intensive search + personalization seems critical to taking generative model outputs from mediocre slop to genuinely good.
"LLM Challenge: Write Non-Biblical Sentences" is an observation that for creativity, "big model smell" may be hard to beat, and you may just need large LLMs for high-end intellectual work, so one should beware false economies; similarly, "Towards Benchmarking LLM Diversity & Creativity" is about avoiding the LLMs getting ever worse for search purposes (mode-collapsed small models being a danger for Nenex uses - they are the ones that will be easy and tempting to run, but will hamstring you, and you have to go into it with eyes open).
"AI Cannibalism Can Be Good" is a quick explainer to try to overcome the intuition that there are no gains from 'feeding AI inputs back into AI' - if you don't understand how this can be a good thing or why it's not a perpetual motion machine, much of the foregoing will seem like nonsense or built on sand.
Obviously, I've also been doing a lot of regular writing, and working on the Gwern.net website infrastructure - adding the 'blog' feature has been particularly important, but just getting the small details right on things like "October The First" takes up plenty of time. But the overall through-line is, "how can we start getting meaningful creative work out of LLMs, rather than sleepwalking into the buzzsaw of superhuman coders creating Disneyland-without-children where all the esthetics is just RLHF'd AI slop?"
* This seems particularly useful for fiction. I'm working on a writeup of an example with a Robin Sloan microfic where the LLM suggestions get better if you negate them, and particularly if you order them to think about why the suggestions were bad and what that implies before they make any new suggestions - which suggests, in conjunction with the success of the 'brainstorm' prompt, that a major failing of LLMs right now is simply that they tend to treat corrections/feedback/suggestions in a 'superficial' manner, because the reasoning-mode doesn't kick in when it should. Interestingly, 'superficial' learning may also be why dynamic-evaluation/finetuning seems to underperform (https://arxiv.org/abs/2505.01812 https://arxiv.org/abs/2505.00661#google): adding paraphrases or Q&A to the finetuning data improves performance even though it cannot add any new information. This is reminiscent of engrams/traces in human memory - you can have memorized things, but not be able to recall them, if there aren't enough 'paths' to the memory.
I was trying out a hierarchical approach when I stopped, because I wasn't sure I could trust a LLM to rewrite a whole input without dropping characters or making unintended rewrites. Aside from being theoretically more scalable, and potentially better because each step is easier and the sorting propagates top-down, explicitly turning it into a tree lets you easily check that you get back an exact permutation of the list each time, and so that the rewrite was safe. I think that might be unnecessary at this point, given the steady improvement in prompt adherence, so maybe the task is now trivial.
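The safety check itself is a one-liner: a seriation rewrite is valid only if the output is an exact permutation of the input items. A sketch:

```python
from collections import Counter

def is_safe_rewrite(original: list[str], reordered: list[str]) -> bool:
    """A seriation rewrite is safe iff it returns exactly the same items,
    merely reordered: no drops, no duplicates, no silent edits."""
    return Counter(original) == Counter(reordered)
```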
There are no explicit distances calculated: I just ask the LLM to sort the list meaningfully.
Very funny, but the OA embeddings were always bad at sentence embedding specifically, compared to other NN sentence-specialized embeddings; and as the original OA embedding paper somewhat defensively argues, it's not even clear a priori what a sentence embedding should do, because a sentence is such a cut-down piece of text, and doing well at a sentence-embedding task may only be overfitting, or come at the cost of performance on more meaningful text-embedding tasks. (Similar to word embeddings: words are so polysemantic or context-dependent that it seems like word embeddings must have substantial limits - which is part of the motivation for Transformers in the first place, after all...)
That's why I was experimenting with prompting a LLM to do seriation rewrites (instead of just splitting on punctuation to reuse my existing greedy-pairwise approach, and having done with it). A prompted LLM takes the full context and purpose into consideration, and avoids the issues with bad embeddings of very small texts. So the seriation outputs aren't crazily random, but sensible.
(Which makes sense, because if you ask a LLM to sort a list of items in a freeform normal way, like a chat session, they are capable of it; in my poetry selection the other day, "Bell, Crow, Moon: 11 Variations", I had Claude/Gemini/GPT suggest how exactly to sort the 11 poems we curated into a pleasing sequence, and they did come up with a much nicer poetry sequence than the original random one. And why wouldn't they be able to do that, when they were good enough to write most of the poems in the first place?)
Yeah, it's limited by what kind of structure you have. It did seriate your list successfully, it sounds like; it's just that you have a lot of structure in the list that you don't care about, so no embedding is going to prioritize the other stuff, and the distances aren't useful to you in general. This will hurt any embedding-related use-case, not just seriation - presumably your k-NN lookups aren't terribly useful either, and mostly just pull up hits which have superficial syntactic similarities.
This is probably less of a problem with my annotations because I reformat them before embedding and add in all available metadata (not just the tags or the titles of links in it as a link-bibliography, but also tricks like including the titles of reverse-citations of it, so the more an annotation gets linked, the more the embedding of it reflects its usage), so the formatting is uniform (nothing like "half of them start with 'what is X' and half don't") and there's a lot of very semantic information.
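Concretely, the reformatting amounts to building one uniform document per annotation before embedding it; a sketch of the idea (field names hypothetical):

```python
# Hypothetical sketch: reformat an annotation into a uniform document before
# embedding, so the embedding reflects metadata & usage, not surface syntax.

def embedding_text(annotation: dict) -> str:
    parts = [
        f"Title: {annotation['title']}",
        f"Tags: {', '.join(annotation['tags'])}",
        # Titles of every link inside it (its link-bibliography):
        f"Links to: {'; '.join(annotation['outgoing_titles'])}",
        # The reverse-citation trick: the more an annotation gets linked,
        # the more its embedding reflects how it is actually used.
        f"Cited by: {'; '.join(annotation['reverse_citation_titles'])}",
        annotation['body'],
    ]
    return "\n".join(parts)
```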
As I've said before, I think you greatly overrate the difficulty of putting search into neural nets, and this is an example of it. It seems to me like it is entirely possible to make a generic LLM implement an equivalent to AlphaZero and be capable of expert iteration, without an elaborate tree scaffolding. A tree search is just another algorithm which can be reified as a sequence, like all algorithms (because they are implemented on a computer).
All AlphaZero is, is a way of doing policy iteration/Newton updates by running a game state forward for a few plies, evaluating, and updating estimates. It's not magic, and can obviously be encoded into a LLM's generative process.
Here's a concrete example of how, in principle, I think a LLM can do AlphaZero-style expert iteration for Go: a LLM can serialize a board with value estimates as simply a few hundred tokens (361 points, 361 value estimates, miscellaneous metadata); this means that in a frontier LLM like Claude-4-opus with 200k ctx, you can fit in easily 200 board states; so you can serialize out the lookahead of a bunch of possible moves and resulting board states (eg. take the top 14 moves, imagine the resulting board states, and then imagine their next 14 top moves; for comparison, TD-Gammon looked ahead only ~1 move); and you can back-propagate an updated value estimate, and spit out the original board state with better value estimates. "Move #4 was better than it looked, so I will +0.01 to the value estimate for it." This improved board is now in context, and can be dynamically-evaluated to update the LLM: now it has to predict the new board state with the final improved estimates, and that improves the policy. The LLM finishes by setting up the next planning step: pick a deeper board state to evaluate next; and if the next board state is the end of the game, start over with a fresh game. Run this indefinitely.
It repeatedly iterates through a possible game, evaluating each position to a certain depth, updating its weights to incorporate the policy improvement from the evaluation, and restarting with a fresh game. All serialized out as a long array/sequence, the tree just being implicitly represented by successive board states. (And then now that you have that in mind, you can imagine how to do things like deep rollouts: 200 moves is around a normal game of Go, so random rollouts are doable from most board states, and the LLM can just toggle between a shallow tree search and deep randomized rollouts if necessary eg by adding a 0/1 token prefix.)
At no point do you need explicit tree scaffolding as you bootstrap from a LLM clueless about playing Go to the high performance that we know LLMs trained by imitation learning on board states/values/policies can reach, and at no point have I invoked a cognitive operation which is harder than a lot of things we see LLMs do routinely, or which it's implausible they could do. It is probably a lot less efficient, and has other practical issues like how you integrate the rules of Go akin to AlphaZero/MuZero, etc., but in principle I think this algorithm is well-defined, concrete, and would work.
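To pin the proposal down, here is a pseudocode-level sketch of one step of that loop (`llm` is any completion callable; `dynamic_eval_update` is a hypothetical stand-in for a dynamic-evaluation/finetuning step, not a real API):

```python
# Hypothetical sketch of AlphaZero-style expert iteration serialized inside
# an LLM's context: no external tree scaffolding, the 'tree' is just
# successive board states laid out in the sequence.
from typing import Callable

def dynamic_eval_update(llm: Callable[[str], str], target: str) -> None:
    """Placeholder: a gradient step so the model predicts `target` directly."""
    ...

def expert_iteration_step(llm: Callable[[str], str], board: str) -> str:
    # Serialize the lookahead: top-k moves, each expanded a ply or two, with
    # current value estimates attached (a board is only a few hundred tokens,
    # so a 200k context fits ~200 of them).
    lookahead = llm(f"Board:\n{board}\n"
                    "List the top 14 moves; for each, print the resulting "
                    "board, its top 14 replies, and value estimates.")
    # Back up the evaluations into the root: 'Move #4 was better than it
    # looked, so +0.01 to its value estimate.'
    improved = llm(f"{lookahead}\nBack-propagate these evaluations and "
                   "reprint the original board with updated value estimates.")
    # Policy improvement: update the weights so the model now predicts the
    # improved estimates directly.
    dynamic_eval_update(llm, target=improved)
    # Set up the next planning step: descend deeper, or restart on a fresh
    # game if this one is over. Run indefinitely.
    return llm(f"{improved}\nPick the next board state to evaluate; if the "
               "game has ended, output a fresh opening position.")
```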
My earlier commentary on what I think note-taking tools tend to get wrong: https://gwern.net/blog/2024/tools-for-thought-failure
Here is another way to defend yourself against bot problems:
Turned out to be fake, BTW. His friend just pranked him.
For text, you might realize that different parts of the text refer to each other, so you need a way to effectively pass information around - and hence you end up with something like the attention mechanism.
If you are trying to convince yourself that a Transformer could work and to make it 'obvious' to yourself that you can model sequences usefully that way, it might be a better starting point to begin with Bengio's simple 2003 LM and MLP-Mixer. Then Transformers may just look like a fancier MLP which happens to implement a complicated way of doing token-mixing inspired by RNNs and heavily tweaked empirically to eke out a bit more performance with various add-ons and doodads.
(AFAIK, no one has written a "You Could Have Invented Transformers", going from n-grams to Bengio's LM to MLP-Mixer to RNN to Set Transformer to Vaswani Transformer to a contemporary Transformer, but I think it is doable and useful.)
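For the MLP-Mixer step of that progression, the key move is seeing that attention's token-mixing can be replaced by a plain MLP applied across the sequence dimension; a minimal PyTorch sketch of the standard Mixer block:

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: token-mixing (an MLP across the sequence axis,
    standing in for attention) followed by channel-mixing (the usual
    per-token FFN, as in a Transformer)."""
    def __init__(self, seq_len: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mix = nn.Sequential(    # mixes information *across* tokens
            nn.Linear(seq_len, hidden), nn.GELU(), nn.Linear(hidden, seq_len))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mix = nn.Sequential(  # per-token MLP over channels
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); transpose so the token-mixing MLP
        # operates along the sequence axis.
        x = x + self.token_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mix(self.norm2(x))
        return x
```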
Or just clipped out. It takes 2 seconds to clip it out and you're done. Or you just fast forward, assuming you saw the intro at all and didn't simply skip the first few minutes. Especially as 'incest' becomes universal and viewers just roll their eyes and ignore it. This is something that is not true of all fetishes: there is generally no way to take furry porn, for example, and strategically clip out a few pixels or frames and make it non-furry. You can't easily take a video of an Asian porn star and make them white or black. And so on and so forth.
Idea: "Conferences as D&D tabletops": you may be able to better organize a conference or convention by borrowing a tool from tabletop roleplaying games - players collaborate by directly manipulating or modifying a 2D map. It seems to me like this could be low-friction and flexibly handles a lot of things that existing 'conware' design patterns don't handle well.