The standard definitions and explanations of 'information', such as 'the resolution of uncertainty', are especially difficult to put into practice.
They presuppose knowledge that is already comprised of, or formed from, a large quantity of information, such as the concepts of 'uncertainty' and 'resolution'.
How does one know they've truly learned these concepts, which are necessary for recognizing information, without already understanding the nature of information?
This seems to produce a recursive problem, a.k.a. a 'chicken and egg' problem.
Additionally, the capability to recognize information and differentiate it from random noise must already exist in order to recognize and understand any definition of information; in fact, it must exist to understand any sentence at all. So it's a multiply recursive problem.
Since, presumably, most members of this forum can understand sentences, how does this occur?
And since presumably no one could do so at birth, how does this capability arise between birth and adulthood?
Ah, no - what I specified is, effectively, a definition of information and how it can be recognized in theory. In practice, humans "run" a bunch of imprecise, approximate algorithms that evolution stumbled upon. Just because the definition is recursive does not mean that following that recursive definition is the optimal algorithm for a very bounded reasoner to arrive at the best approximation quickly. And humans are not particularly optimal either. But there is a lot of evidence that much of what is going on in humans is effectively prediction error minimization. (E.g. https://www.lesswrong.com/posts/Cu7yv4eM6dCeA67Af/minimization-of-prediction-error-as-a-foundation-for-human - the first relevant Google result, no particular endorsement of that post.)
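To make "prediction error minimizer" concrete, here's a toy sketch (my own illustration, not taken from that post): a delta-rule learner that holds a single estimate and nudges it toward each new observation. The learning rate and the two input streams are arbitrary choices for the demo. Nothing in it presupposes a definition of 'information', yet "signal vs. noise" falls out of whether its prediction error is reducible.

```python
import random

def delta_rule(observations, lr=0.1):
    """Toy prediction-error minimizer (a delta rule).

    Holds one internal estimate, predicts the next observation, and nudges
    the estimate toward whatever actually arrives. This is gradient descent
    on squared prediction error; no concept of 'information' is presupposed.
    """
    estimate = 0.0
    abs_errors = []
    for obs in observations:
        error = obs - estimate     # prediction error
        estimate += lr * error     # update to reduce future error
        abs_errors.append(abs(error))
    return abs_errors

random.seed(0)

# A learnable stream: observations cluster around 5, so error can shrink.
signal = [5 + random.gauss(0, 0.5) for _ in range(200)]
# Pure noise: wide uniform draws, so error stays irreducible.
noise = [random.uniform(-10, 10) for _ in range(200)]

for name, stream in [("signal", signal), ("noise", noise)]:
    errs = delta_rule(stream)
    early = sum(errs[:20]) / 20
    late = sum(errs[-20:]) / 20
    print(f"{name}: mean |error| over first 20 = {early:.2f}, over last 20 = {late:.2f}")
```

The learner never "follows the recursive definition"; it just reduces error, and the stream whose error shrinks is the one carrying recognizable structure. That's the sense in which a bounded, approximate algorithm can do in practice what the definition describes in theory.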