I think this could be a big boon for mechanistic interpretability, since it can be a lot more straightforward to interpret a bunch of {-1, 0, 1}s than reals. Not a silver bullet by any means, but it would at least peel back one layer of complexity.
It could also be harder. Say that 10 bits of each current 16-bit parameter are actually useful; since a ternary parameter carries at most log2(3) ≈ 1.58 bits, you would need roughly 6-7 ternary parameters to match that capacity, and those extra parameters might be hard to find or might interact in unpredictable ways.
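Rough arithmetic behind that estimate (my numbers, not from the paper):

```python
import math

# Each ternary parameter {-1, 0, 1} can carry at most log2(3) ≈ 1.585 bits of information.
bits_per_ternary = math.log2(3)

# If ~10 of a 16-bit parameter's bits are doing useful work,
# matching that capacity needs roughly this many ternary parameters:
useful_bits = 10
print(useful_bits / bits_per_ternary)  # ≈ 6.31, i.e. ~6-7 ternary params per fp16 param
```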
The paper is not about post-training quantization; it's quantization-aware training (this is discussed more clearly in the original BitNet paper). The representation is ternary {-1, 0, 1} from the start, so the network learns to cope with that constraint throughout pre-training instead of being subjected to the brain damage of quantization after training.
Compare this with
where the Microscaling block number format is used to train a transformer at essentially 4 bits per weight, achieving the same perplexity as with 32-bit floating-point weights (see Figure 4 on page 7). If perplexity doesn't change for quantization-aware training when going down to 4 bits, it's not too shocking that it doesn't significantly change at 1.6 bits either.
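To make the QAT-vs-PTQ distinction concrete, here's a minimal PyTorch sketch of the general idea: latent full-precision weights are what the optimizer updates, they get quantized to {-1, 0, 1} (times a scale) on every forward pass, and gradients flow through via a straight-through estimator. This is an illustration of the general technique, not the exact BitNet b1.58 recipe (the scaling and activation-quantization details differ).

```python
import torch
import torch.nn as nn

class TernaryLinear(nn.Module):
    """Linear layer whose weights are quantized to {-1, 0, 1} on the forward pass."""
    def __init__(self, in_features, out_features):
        super().__init__()
        # Latent full-precision weights are what the optimizer actually updates.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x):
        w = self.weight
        # Scale by the mean absolute value, then round to the nearest of {-1, 0, 1}.
        scale = w.abs().mean().clamp(min=1e-5)
        w_q = torch.clamp(torch.round(w / scale), -1, 1) * scale
        # Straight-through estimator: use quantized weights in the forward pass,
        # but let gradients flow back to the latent weights as if no quantization happened.
        w_ste = w + (w_q - w).detach()
        return x @ w_ste.t()

# Usage: drop it in place of nn.Linear and train as usual.
layer = TernaryLinear(16, 8)
out = layer(torch.randn(4, 16))
out.sum().backward()
print(layer.weight.grad.shape)  # gradients reach the latent fp weights
```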
This is applied during training. It's not a post-training quantization method.
I don't think it can be patched to make training itself 1.58-bit (95% confident). I think training (not inference) is where most of the money goes to and comes from, so the hardware market will not be affected (90%).
Even in the small inference market, chip companies already have 4- to 8-bit inference accelerators in the oven (99%); they will not judge the benefits of 1.58-bit to be worth the risk of such specialized hardware, so nobody will build more than 100 1-bit or 1.58-bit inference chips (80%).
Old-fashioned CPUs have at most 32 threads, so they will still be slow as heck at running NNs (90%).
I think your question is quite important.
If I understand correctly (I very well might not), a "one-bit LLM" has to be trained as a "one-bit LLM" in order to then run inference on it as a "one-bit LLM". I.e., this isn't a new quantization scheme.
So I think training and inference are tied together here, meaning that if this replicates and holds up, we will probably get new hardware for both stages.
https://arxiv.org/abs/2402.17764 claims that 1-bit LLMs are possible.
If this scales, I'd imagine there is a ton of speedup to unlock, since our hardware has been optimized for low-bit operations for decades. What does this imply for companies like Nvidia and the future of LLM inference/training?
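One concrete reason to expect a speedup (my own illustration, not from the paper): with weights restricted to {-1, 0, 1}, the inner loop of a matrix-vector product needs no multiplications at all, only additions and subtractions. A naive sketch:

```python
def ternary_matvec(W, x):
    """Matrix-vector product for a ternary weight matrix W (entries in {-1, 0, 1}).
    No multiplications in the inner loop: each weight adds, subtracts, or skips x[j]."""
    out = []
    for row in W:
        acc = 0.0
        for w, xj in zip(row, x):
            if w == 1:
                acc += xj
            elif w == -1:
                acc -= xj
            # w == 0: skip entirely
        out.append(acc)
    return out

print(ternary_matvec([[1, -1, 0], [0, 1, 1]], [0.5, 2.0, -1.0]))  # [-1.5, 1.0]
```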
Do we get another leap in LLM capabilities? Do CPUs become more useful? And can this somehow be applied to make training more efficient?
Or is this paper not even worth considering for some obvious reason I can't see?
Edit: this method is applied to training already