Interesting work! Could this be fixed in training by giving it practice at repeating each token when asked?
Another thing I’ve wondered is how substring operations can work for tokenized text. For example, if you ask for the first letter of a string, it will often get it right. How does that happen, and are there tokens where it doesn’t work?
You could do this, if you wanted. I suspect that when ChatGPT was patched, they instead just patched the tokenizer to no longer create these tokens, which is significantly easier and would also allow the model to repeat them without too much trouble.
I think that substring operations would mainly work for tokens that are used a fair bit. My model of the situation is that there is some loss the model would leave on the table if it didn't know some facts about the substrings of common tokens, so it learns them. For instance, knowing them would help it complete more acronyms, and if people prefer or avoid alliteration in certain contexts, it would help it predict text. If it was trained on social media, sometimes people will spell things out in ALL CAPITAL LETTERS, or do iNtErCaPs or whatever you call that, which would teach it all sorts of facts about the innards of tokens.
Many of these tokens are unprintable (i.e., they don't display and I don't know what they are).
The first 256 tokens are the 256 possible byte values (one byte each). A bunch of them are basically never used; they exist so that an arbitrary string of bytes can always be broken down into valid tokens.
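To see this concretely, here is a quick check of my own (not from the original discussion), assuming the Hugging Face `transformers` GPT-2 tokenizer, which exposes the same BPE vocabulary used by GPT-3:

```python
# Tokens 0..255 of the GPT-2/GPT-3 BPE vocabulary are the single-byte fallback
# tokens (displayed in GPT-2's byte-to-unicode representation). Because every
# possible byte has its own token, any byte string can always be tokenized.
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
print(tok.convert_ids_to_tokens([0, 1, 65, 188, 255]))
```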
Slightly off on a tangent, but I am confused about the reasons and assumptions that underpin the current tokenizer used for GPT-3.
I get that reality has more words than could be packed into 50400 tokens (and that limit comes from hardware).
I also get that the token space needs to be big, so you can't just go to character-level tokenization; you would end up with a space that is too small.
But why on earth did the tokens end up like this? A lot of them look like garbage, a lot of them look like repeats of the same word, but with added white space or unprintable characters.
Surely there is some middle ground that better matches the reality of how we (humans) use words. And I think the confusing part for me is here: why wouldn't we construct a map that looks a lot like the features we see in the territory? (Really a map builder and not a map.)
Confused I am, knowledge I seek.
You're overthinking it. OA BPEs are merely a quick hack by Radford or someone back in 2017 or so. They spent 5 seconds thinking about it: "I need to tokenize somehow, my Transformer has a context window of like 512 so character-based is out, word-level is too inflexible for multilingual multitask training like I hope the model will learn to do and would lead to lots of UNKs, so the only thing off-the-shelf in a convenient library on Github is BPEs; there, trained it on my dump of uncleaned Internet garbage data, done! Now on to something that actually matters..." No one was sitting down and debating the merits of BPEs vs. Unigram-LM or BPE-dropout or whether it'd sabotage poetry. (BPEs are not optimal in any sense for anything. They don't even guarantee optimality for what they do, as they just create tokens greedily, IIRC. And there are many viable choices: you can have a few thousand BPEs or you can push it to several hundred thousand like Jurassic, or to a million like FB recently did. You can use wordpieces or character-encoding (ByT5), you can expand the vocab & redo on specialized corpuses like OA did with Github for Codex, or their new cl100k tokenization, etc.) It was just one of innumerable minor engineering decisions made along the way. It was never supposed to be important or to still matter 6+ years later; it only does because the models turned out to be so important & worth keeping backwards-compatibility for. And they definitely weren't thinking about anything remotely like unspeakability.
Sure. I listed a bunch of improvements right there. They just tend to always be relatively small on economically-important usecases, compared to other things like RLHF. (Sadly, no matter how much I whine about ChatGPT's poetry being super basic and bland because of BPEs+mode-collapse, better poetry wouldn't make OA much money compared to working harder on RL tuning to prioritize Q&A or reasoning or investing in more GPUs to serve more users.)
I think so. If someone could show that BPEs were changing the scaling laws on an important task end-users will pay for, then it wouldn't be hard to change that: for example, I noted that Codex induced OA to change BPEs, because generating BPEs optimized for programming-language syntax substantially increased the effective context window, which matters to big paying customers like Github (the larger the ctx, the more of the variables & definitions inside a specific project are available for relevant completion). Otherwise, the general attitude seems to be to shrug and assume it'll fix itself at some point, when GPT-4 or GPT-5 or god knows what system uses some new tokenization or byte-level encoding or a fancy new attention/history mechanism with near-unlimited ctx, motivated by other concerns, and then the problems go away and become a minor historical footnote. And they are probably right to, as much as it annoys me to see the bad poetry or to see people running into blatantly BPE-caused problems and declare 'deep learning has hit a wall!'...
BPEs are one of the simplest schemes for producing a large, roughly-fairly-weighted-by-frequency set of tokens that compresses arbitrary bytes drawn from a written language training dataset. That's about all you need to explain things in ML, typically.
Subword tokenization, the linguistically-guided pre-LLM approach, has a history but is comparatively complex, and I don't think it compresses as well for a given token budget even on fairly normal-looking text.
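For illustration only (my own toy snippet, not something from this discussion), here is the greedy, frequency-based merge loop at the heart of BPE training; the real GPT-2 tokenizer works over bytes and weights words by corpus frequency, but the structure is the same:

```python
# Toy BPE training: repeatedly merge the most frequent adjacent pair of symbols
# into a new token. Greedy by construction, with no optimality guarantee.
from collections import Counter

def train_bpe(words, num_merges):
    words = [list(w) for w in words]   # start from characters (GPT-2 starts from bytes)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w in words:
            pairs.update(zip(w, w[1:]))
        if not pairs:
            break
        best = max(pairs, key=pairs.get)   # greedy: most frequent pair becomes a new token
        merges.append(best)
        new_sym = "".join(best)
        words = [merge_word(w, best, new_sym) for w in words]
    return merges

def merge_word(w, pair, new_sym):
    out, i = [], 0
    while i < len(w):
        if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
            out.append(new_sym)
            i += 2
        else:
            out.append(w[i])
            i += 1
    return out

print(train_bpe(["low", "low", "lower", "lowest", "newest", "widest"], 4))
```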
Finally, we give a simple approach to verify that a particular token is unspeakable rather than just being hard-to-speak.
You're using an optimization procedure to find an embedding that produces an output, and if you cannot find one you say the token is unspeakable. How confident are you that the optimization is strong enough? I.e., what are the odds that a god-mode optimizer in this high-dimensional space could actually find an embedding that produces the unspeakable token, and it's just that linprog wasn't strong enough?
Just checking here, I can totally imagine that the optimizer is an unlikely point of failure. Nice work again!
This post seems to have an important bug in its code that changes its conclusions, as pointed out to me by user @ckkissane. At a high level, there are probably more hard-to-speak tokens and fewer truly unspeakable tokens than the current text suggests. I will update it soon with corrections; currently I am traveling.
Produced as part of the SERI-MATS winter cohort. Thanks to Eric Neyman and Neel Nanda for helpful discussions on this topic.
See this post for context; in brief, there are a small number of tokens that GPT-3 and related large language models have a lot of trouble saying, and attempting to elicit these tokens from the model causes erratic and interesting behavior.
In the current post, we show that some of the same and related tokens show similar behavior in GPT2 (in all four sizes: small, medium, large, and xl). Moreover, because we have the weights of GPT2, we can explain why this happens at a mechanistic level.
Some of these tokens are unspeakable because their unembedding vectors are not maximal along any direction. Thus, there is no internal activation that the model can generate that will cause the token to be emitted when decoding at zero temperature. Some of them are, rather, hard-to-speak, because they are not maximal along their own direction, and thus one needs to point slightly away from them in order to emit them. Both phenomena are related to the phenomena laid out in the original post. The hard-to-speak tokens are plausibly very hard to speak, because most tokens that a transformer emits will be most effectively signaled by pointing roughly directly at them. (And recall that the most plausible explanation for how this situation arose in the first place is that these tokens were never seen even once during training; thus, the model has zero practice at having to point to them in weird ways, and is thus unlikely to be able to do so.)
In this post, we will make extensive use of Neel Nanda's TransformerLens library. This library is amazingly helpful in avoiding various rough edges and footguns in mechanistic interpretability, and I cannot recommend it highly enough.
First, we will show how to identify hard-to-speak tokens using TransformerLens:
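(A minimal sketch of the idea follows; this is my reconstruction rather than the post's original code. It assumes TransformerLens's `HookedTransformer`, whose `W_U` and `b_U` properties expose the unembedding matrix and bias.)

```python
# Sketch of the hard-to-speak check: token i is flagged if pointing the final
# residual stream directly along its own unembedding vector does NOT make it the
# argmax logit. Per the post's edit, including the unembedding bias b_U matters.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")   # or "gpt2-xl", etc.
W_U, b_U = model.W_U, model.b_U                     # [d_model, d_vocab], [d_vocab]
d_vocab = model.cfg.d_vocab

hard_to_speak = []
with torch.no_grad():
    for start in range(0, d_vocab, 1024):           # chunked to keep memory modest
        ids = torch.arange(start, min(start + 1024, d_vocab))
        dirs = W_U[:, ids].T                        # each token's own direction
        logits = dirs @ W_U + b_U                   # logits when pointing straight at it
        best = logits.argmax(dim=-1)
        hard_to_speak.extend(ids[best != ids].tolist())

print(len(hard_to_speak))
print([model.tokenizer.decode([i]) for i in hard_to_speak[:20]])
```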
We can then examine these hard-to-speak tokens, and see some that are very familiar from the original SolidGoldMagikarp post:
Many of these tokens are unprintable (i.e., they don't display and I don't know what they are). The remainder mostly appear in the original SolidGoldMagikarp post.
Finally, we give a simple approach to verify that a particular token is unspeakable rather than just being hard-to-speak.
Briefly, we look for a direction x such that x minimizes the negative logit of the token under consideration (in this case, 30208, or " externalTo"), while satisfying the constraint that the logit of the token is higher than the logit of any other token by at least a small margin. When we do this, we find that small margins (like 1.01e-7 for the case of 30208) lead to infeasible linear programs (and thus no such direction exists), and even smaller margins (such as 1.00e-7) lead to "feasible" linear programs that are only feasible because of numerical errors in the calculation. That is, if you take the direction output by the solver, it is not actually a solution, because unembedding it still leads to a different token being ever-so-slightly higher. (In the case of 30208, it is the token 24973, " exting".)
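The following is a minimal sketch of such a check using `scipy.optimize.linprog`, not the post's exact code: the zero objective and the box bounds on x are simplifying assumptions on my part, since the post describes minimizing the negative logit and does not spell out the full setup.

```python
# Feasibility check for unspeakability (sketch): token t is speakable iff there is
# a direction x whose logit for t beats every other token's logit by at least `margin`.
# Constraint for every j != t:  (w_j - w_t) @ x <= (b_t - b_j) - margin.
import numpy as np
from scipy.optimize import linprog

def is_speakable(W_U, b_U, t, margin=1e-7):
    d_model, d_vocab = W_U.shape
    w_t, b_t = W_U[:, t], b_U[t]
    others = [j for j in range(d_vocab) if j != t]
    A_ub = (W_U[:, others] - w_t[:, None]).T   # ~50k constraints, d_model variables
    b_ub = (b_t - b_U[others]) - margin
    # Zero objective: we only care whether the feasible region is nonempty. The box
    # bounds keep the LP bounded; both are assumptions, not necessarily the post's setup.
    res = linprog(c=np.zeros(d_model), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(-100.0, 100.0)] * d_model, method="highs")
    if not res.success:
        return False   # infeasible (or solver failure): candidate is unspeakable
    # Guard against "feasible" solutions that only exist due to numerical error:
    logits = W_U.T @ res.x + b_U
    return int(np.argmax(logits)) == t

# e.g. is_speakable(model.W_U.detach().numpy(), model.b_U.detach().numpy(), 30208),
# where `model` is the HookedTransformer from the earlier snippet.
```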
Here are the results of testing whether something is hard-to-speak or truly unspeakable for the printable hard-to-speak tokens in GPT2-small:
Here are the printable hard-to-speak tokens from GPT2-xl:
And here are the corresponding tests for unspeakability:
Discussion and Future Directions
Given that GPT2-xl (compared to GPT2-small) has more printable hard-to-speak tokens, of which a smaller fraction and a smaller absolute number are unspeakable, one guess would be that these trends continue for larger models, and a model the size of GPT-3 davinci has many hard-to-speak tokens but very few or even no unspeakable tokens. It is certainly intuitive to believe that, in high dimensions, with a fixed vocabulary size, very few unembedding vectors are not on the convex hull of the unembedding vector polytope. (h/t to Eric Neyman for this last point)
It seems like it might be interesting to try to measure the volume of the feasible region (or more precisely, the (n-1) dimensional surface area of the portion of the unit sphere in the feasible region), to ascertain how "precise" the model would have to be to produce a particular token.
Edit: there are way more unspeakable tokens than I thought
I forgot to include the bias term! All of these tokens in GPT2-xl are actually unspeakable!