I suspected it was more of a fable, but I hoped it was historical (there are many true cryptographic stories in this style, though I don't know any about this "proxy" problem). I think it is a bit dangerous to draw conclusions from a fictional story, though the fable made this post more engaging, and I mostly agree with its conclusion.
Why is using a fable to construct an argument dangerous? Suppose Aesop wrote a fable about a goose laying golden eggs, and people drew the conclusion that you should not experiment around positive phenomena...
Is there a reference for the events at Bell Labs? I can imagine some scenarios where the military transmits some information and can sort of engineer what the adverse party can read (for example, Eve can read the power supply of some device, so Alice must add sufficient noise to the power supply), but none seems realistic in this context.
I don't agree with the argument about variance:
"Any other such measure will indeed be isomorphic to variance when restricted to normal distributions."
It's true, but you should not restrict to normal distributions in this context. It is possible to find distributions X1 and X2 with different variances but the same value of E(|X - mean|^p) for some fixed p ≠ 2. Then X1 and X2 look the same to this p-variance, but their normalized sample averages converge to different normal distributions. Hence variance is indeed the right and only measure of spread-out-ness...
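A minimal sketch of that point, with two hypothetical zero-mean discrete distributions chosen so that their p = 1 moments E(|X - mean|) agree while their variances differ:

```python
# Two zero-mean distributions (value -> probability), picked as examples:
# they share the same mean absolute deviation (p = 1) but not the same variance.
X1 = {-1: 0.5, 1: 0.5}            # fair ±1 coin
X2 = {-2: 0.25, 0: 0.5, 2: 0.25}  # wider values, but half the mass at 0

def moment(dist, p):
    """E(|X - mean|^p) for a finite discrete distribution."""
    mean = sum(x * pr for x, pr in dist.items())
    return sum(abs(x - mean) ** p * pr for x, pr in dist.items())

print(moment(X1, 1), moment(X2, 1))  # 1.0 and 1.0: identical "1-variance"
print(moment(X1, 2), moment(X2, 2))  # 1.0 and 2.0: different true variances
```

By the central limit theorem, sqrt(n) times the sample average of X1 tends to N(0, 1) while that of X2 tends to N(0, 2), so the p = 1 moment fails to distinguish two genuinely different limiting spreads.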
I think Shannon's merit was not to define entropy, but to understand its operational meaning in terms of coding a message with a minimal number of letters, leading to the notions of the capacity of a communication channel, of error-correcting codes, and of the "bit".
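That operational meaning can be sketched numerically, assuming a memoryless source over a finite alphabet (function names are mine): the entropy H lower-bounds the average length L of any prefix code, and an optimal (Huffman) code achieves H ≤ L < H + 1 bits per symbol.

```python
import heapq
import math

def shannon_entropy(probs):
    """Entropy in bits: asymptotic lower bound on bits per source symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_avg_length(probs):
    """Average codeword length of an optimal binary prefix code.

    Uses the standard fact that each Huffman merge adds one bit to every
    leaf beneath it, so the average length is the sum of merged weights.
    """
    heap = [(p, i) for i, p in enumerate(probs)]  # i breaks ties in the heap
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, i = heapq.heappop(heap)
        total += p1 + p2
        heapq.heappush(heap, (p1 + p2, i))
    return total

probs = [0.4, 0.3, 0.3]
H = shannon_entropy(probs)        # ≈ 1.571 bits
L = huffman_avg_length(probs)     # 1.6 bits
print(H, L)                       # H <= L < H + 1
```

For dyadic probabilities such as (1/2, 1/4, 1/4) the bound is tight and L equals H exactly.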
Von Neumann's entropy was introduced before Shannon's entropy (1927, although the only reference I know is von Neumann's book from 1932). It was also von Neumann who suggested the name "entropy" for the quantity that Shannon found. What Shannon could've noticed was that ...