Hi,
I am trying to understand the difference in cost between processing a single input token and producing a single output token.
Based on some articles, I came to the following conclusions:
- Input token processing (prefill) scales quadratically with prompt length; there's no way around it: you have to compute attention between every token and every other token (the queries, keys, and values come from passing the prompt through the decoder layers).
- Output tokens scale linearly with context length thanks to the KV cache (without it they would be quadratic too, just like prefill), which is what everyone seems to do when hosting these models. I.e., you trade compute for memory and try to be clever about how you store and compute all of this (sparsity, cache eviction, ...). I believe this is simply an empirically practical way around the quadratic scaling (see the sketch after this list).
- Therefore, realistically, it's memory capacity and bandwidth that are the bottleneck for output tokens rather than raw FLOPS. The KV cache grows linearly with sequence length regardless of the input/output token ratio.
- What confuses me is why input tokens are almost universally priced 2 to 5 times cheaper than output tokens. Also, is the fact that providers price tokens linearly evidence that memory is the bottleneck (as they do now: you pay the same for the 11th token as for the 10,001st)? If they were FLOPS-bound instead, I would expect providers to price things quadratically, not linearly. Or is linear pricing just for client-side simplicity/marketing?
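To make that concrete, here is the back-of-envelope sketch I have in mind (definitely not a benchmark). The layer count, hidden size, precision, and context length are all assumptions loosely based on a 7B-class decoder-only model with full multi-head attention (no GQA/MQA), and the projection/MLP matmuls (which scale linearly in tokens) are left out:

```python
# Back-of-envelope sketch: prefill vs. per-token decode attention work and
# KV-cache size for one sequence. All model dimensions are assumptions.
n_layers = 32     # assumed number of transformer layers
d_model  = 4096   # assumed hidden size
bytes_kv = 2      # fp16/bf16 bytes per cached element

def prefill_attention_flops(seq_len: int) -> float:
    # Q @ K^T and attn @ V are each ~ seq_len^2 * d_model multiply-adds per layer,
    # so prefill attention work grows quadratically with prompt length.
    return 2 * 2 * seq_len**2 * d_model * n_layers

def decode_attention_flops(context_len: int) -> float:
    # With a KV cache, each new token's query attends over context_len cached
    # entries: ~ context_len * d_model per layer, i.e. linear per generated token.
    return 2 * 2 * context_len * d_model * n_layers

def kv_cache_bytes(context_len: int) -> float:
    # Two tensors (K and V) per layer, each context_len x d_model elements.
    return 2 * n_layers * context_len * d_model * bytes_kv

if __name__ == "__main__":
    n = 4096
    print(f"prefill attention FLOPs @ {n} tokens: {prefill_attention_flops(n):.2e}")
    print(f"decode attention FLOPs for token {n + 1}: {decode_attention_flops(n):.2e}")
    print(f"KV cache size @ {n} tokens: {kv_cache_bytes(n) / 2**30:.2f} GiB")
```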
I want to be able to estimate the cost of processing a single token, and I cannot wrap my head around this. I estimated it theoretically based on GPU rental prices and, separately, based on power consumption (assuming some utilization, e.g. 10%), and I believe I somehow need to differentiate between input and output tokens here. A rough sketch of the rent-based estimate is below.
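This is the kind of arithmetic I have in mind; every number here is a placeholder assumption, not a measurement:

```python
# Rough per-token cost from GPU rental price (all inputs are assumed values).
gpu_rent_per_hour   = 2.0    # assumed $/hour for one data-center GPU
decode_tok_per_sec  = 1000   # assumed aggregate output-token throughput (batched)
prefill_tok_per_sec = 5000   # assumed aggregate input-token (prefill) throughput
utilization         = 0.10   # assumed fraction of rented time doing useful work

usd_per_useful_sec = gpu_rent_per_hour / 3600 / utilization

cost_in_per_m  = usd_per_useful_sec / prefill_tok_per_sec * 1e6
cost_out_per_m = usd_per_useful_sec / decode_tok_per_sec * 1e6
print(f"~${cost_in_per_m:.2f} per 1M input tokens, ~${cost_out_per_m:.2f} per 1M output tokens")
```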
In one tweet from LeptonAI, who host these LLMs, I also saw that there are usually 3-10 times more input tokens than output tokens. Again, if input tokens dominate the sequence and FLOPS were the issue, I would expect that to be reflected in the pricing. I'm not sure what role this ratio plays in the calculations so far.
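Just to illustrate what I mean about the ratio, here is a toy calculation with a made-up 5:1 input:output mix and made-up list prices:

```python
# Illustrative revenue split between input and output tokens (assumed numbers).
input_tokens, output_tokens = 5_000_000, 1_000_000   # assumed 5:1 traffic mix
price_in_per_m, price_out_per_m = 0.50, 1.50         # assumed $/1M tokens

rev_in  = input_tokens  / 1e6 * price_in_per_m
rev_out = output_tokens / 1e6 * price_out_per_m
total   = rev_in + rev_out
print(f"input revenue:  ${rev_in:.2f} ({rev_in / total:.0%})")
print(f"output revenue: ${rev_out:.2f} ({rev_out / total:.0%})")
```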
Any help is appreciated, thanks!
Thanks for the answer, I appreciate it!
I agree with the intuition, but I think that's where I am confused. Thanks to the KV cache we do not run the whole new sequence (previous sequence + last generated token) through the decoder layers again (the way we run the prompt through them during prefill). It's all cached (from prefill plus from the previously generated tokens of that sequence). So... I don't know, it doesn't feel like output tokens are more expensive in this case (you run the model "once" per output token, the same way you run it "once" for every input token)?
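To spell out what I mean by running the model "once" per generated token, here is a toy single-head decode step with a cache; the dimensions are arbitrary, and projections, multi-head, and batching are all omitted:

```python
import numpy as np

d = 64                               # assumed head dimension
k_cache = np.random.randn(1000, d)   # keys already computed for the first 1000 tokens
v_cache = np.random.randn(1000, d)   # values already computed for those tokens

def decode_step(x_new, k_cache, v_cache):
    """Attention for ONE new token: its query attends over all cached keys/values.
    Work is O(context_len * d) for this token, not O(context_len^2 * d)."""
    q_new, k_new, v_new = x_new, x_new, x_new    # real models apply learned projections here
    k_cache = np.vstack([k_cache, k_new])        # append the new token's K/V to the cache
    v_cache = np.vstack([v_cache, v_new])
    scores = k_cache @ q_new / np.sqrt(d)        # one new row of the attention matrix
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v_cache, k_cache, v_cache   # output for the new token + updated cache

out, k_cache, v_cache = decode_step(np.random.randn(d), k_cache, v_cache)
print(out.shape)  # (64,)
```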
Do you mind saying more about this? I am not sure what you mean, i.e. that some pay more and some pay less (e.g. heavy hitters pay less while small prompters pay comparatively more per token)?