Using the notation from here: A Mathematical Framework for Transformer Circuits

The attention pattern for a single attention head is determined by $A = \text{softmax}(x^T W_Q^T W_K x)$, where the softmax is computed for each row of $x^T W_Q^T W_K x$.

Each row of $A$ gives the attention pattern for the current (destination) token. Are these rows (post-softmax) typically close to one-hot? I.e., is each row mainly dominated by a single attended-to source token?
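
To make "close to one-hot" concrete, here is a toy sketch (PyTorch, arbitrary sizes and random weights, purely illustrative) that computes $A$ for one head and uses the max attention weight in each row as the measure of how one-hot that row is:

```python
import torch

# Toy sizes and random weights, purely for illustration
n_tokens, d_model, d_head = 8, 16, 4

torch.manual_seed(0)
x = torch.randn(n_tokens, d_model)   # one row per token
W_Q = torch.randn(d_head, d_model)   # query weights for this head
W_K = torch.randn(d_head, d_model)   # key weights for this head

# Scores for every (destination, source) pair; with tokens as rows of x,
# (x W_Q^T)(x W_K^T)^T is the same [n_tokens, n_tokens] matrix as
# x^T W_Q^T W_K x in the notation above
scores = (x @ W_Q.T) @ (x @ W_K.T).T

# Causal mask, then softmax over each row: row i is token i's attention pattern
mask = torch.triu(torch.ones(n_tokens, n_tokens), diagonal=1).bool()
A = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)

# A row is close to one-hot exactly when its max entry is close to 1
print(A.max(dim=-1).values)
```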

I'm interested in knowing this for various types of transformers, but mainly for LLMs and/or frontier models.

I'm asking because I think this has implications for computation in superposition.


Buck

IIRC, for most attention heads the max attention is way less than 90%, so my answer is "no". It should be very easy to get someone to make a basic graph of this for you.
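
A minimal sketch of how such a graph could be made, assuming GPT-2 small via Hugging Face transformers (any model that can return attention weights would do; the prompt and binning are arbitrary):

```python
import torch
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 small is just a stand-in; swap in any causal LM that exposes attentions
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
model.eval()

text = "The quick brown fox jumps over the lazy dog because it was in the way."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions: one [batch, n_heads, n_dst, n_src] tensor per layer.
# For each (layer, head, destination) row, take the max attention weight;
# a row is close to one-hot exactly when this max is near 1.
# (Very early destination positions are trivially near 1 due to the causal mask.)
max_per_row = torch.cat([a.max(dim=-1).values.flatten() for a in out.attentions])

plt.hist(max_per_row.numpy(), bins=50)
plt.xlabel("max attention weight per row (all layers/heads/destinations)")
plt.ylabel("count")
plt.title("How close to one-hot are attention rows?")
plt.show()
```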