We have probabilistic models of the weather: ensemble forecasts. They're fairly accurate. You can plan a picnic using them. You cannot use probabilistic models to predict the conversation at the picnic (beyond that it will be about "the weather", "the food", etc.)
What I mean by a computable probability distribution is that it's tractable to build a probabilistic simulation that gives useful predictions. An uncomputable probability distribution is one for which building such a simulation is intractable. Knightian uncertainty is a good name for the state of not being able to model something, but not a very quantitative one (and arguably I haven't really quantified what makes a probabilistic model "useful" either).
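The weather case above can be sketched as a toy ensemble forecast: perturb the initial conditions, run many crude simulations, and read off a probability. Everything here (the initial temperature, the drift, the noise levels) is invented for illustration; the point is only that such a simulation is tractable to build, whereas no analogous simulation exists for the picnic conversation.

```python
import random

def ensemble_forecast(initial_temp, n_members=1000, days=3, seed=0):
    """Toy ensemble forecast. Each ensemble member starts from a
    perturbed initial temperature and evolves under a crude noisy
    daily model; the output is the fraction of members ending below
    freezing. All parameters are made up for illustration."""
    rng = random.Random(seed)
    below_freezing = 0
    for _ in range(n_members):
        temp = initial_temp + rng.gauss(0, 1)   # uncertain initial condition
        for _ in range(days):
            temp += rng.gauss(-1.0, 2.0)        # slight cooling trend plus noise
        if temp < 0:
            below_freezing += 1
    return below_freezing / n_members

# A tractable, useful prediction: "what is the chance of frost in 3 days?"
print(ensemble_forecast(initial_temp=4.0))
```

The prediction is probabilistic, not exact, yet it is good enough to plan a picnic around; that is the sense in which the distribution is "computable".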
I think the computability of probability distributions is probably the right way to classify relative agency, but we also tend to recognize agency through goal detection. We think actions are "purposeful" because they correspond to actions we're familiar with from our own goal-seeking behavior: searching, exploring, manipulating, energy-conserving motion, etc. We may even fail to recognize agency in systems that use actions we aren't familiar with or whose goals are alien (e.g. are trees agents? I'd argue yes, but most people don't treat them like agents compared to, say, weeds). The weather's "goal" is to reach thermodynamic equilibrium, using tornadoes and other winds as its actions. It would be exceedingly efficient at that if it weren't for the pesky sun. The sun's goal is to expand, shed some mass, then cool and shrink into its own final thermodynamic equilibrium. It will Win unless other agents interfere or a particularly unlikely collision with another star happens.
Before modern science, no one would have imagined those were the actual goals of the sun and the wind, and so their periodic, meaningful-seeming actions suggested agency toward an unknown goal. Once physics made the goals and actions predictable, the appearance of agency was lost.
(Epistemic status: often discussed in bits and pieces, but I haven't seen it summarized in one place anywhere.)
Do you feel that your computer sometimes has a mind of its own? "I have no idea why it is doing that!" Do you feel that, the more you understand and can predict someone's actions, the less intelligent and more "mechanical" they appear?
My guess is that, in many cases, agency (as in, the capacity to act and make choices) is a manifestation of the observer's inability to explain and predict the agent's actions. To Omega in Newcomb's problem, humans are just automatons without a hint of agency. To a game player, some NPCs appear stupid and others smart, and the more you play and the better you can predict the NPCs, the less agenty they appear to you.
Note that randomness is not the same as uncertainty: a prediction that someone or something behaves randomly, with a known distribution, is still a useful prediction. What I mean is more of a Knightian uncertainty, where one fails to make a useful prediction at all. Something like a tornado may appear to intentionally go after you if you fail to predict where it will be going and you have trouble escaping.
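The distinction can be made concrete with a die roll, a deliberately simple stand-in for any well-understood random process. Each roll is unpredictable, yet the model "each face comes up with probability 1/6" makes a sharp, testable prediction about frequencies; Knightian uncertainty is the case where no such model is available at all.

```python
import random
from collections import Counter

def max_calibration_error(n_rolls=60_000, seed=1):
    """Roll a simulated fair die many times and return the largest
    deviation of any face's empirical frequency from the predicted 1/6.
    A small deviation shows that a random process can still be modeled
    usefully, even though no single roll can be predicted."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, 6) for _ in range(n_rolls))
    return max(abs(counts[face] / n_rolls - 1 / 6) for face in range(1, 7))

print(max_calibration_error())  # a small number: the randomness itself is predictable
```

A tornado, to the person caught in it, offers no such model to check frequencies against; that is the Knightian case.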
If you are a user of a computer program and it does not behave as you expect, you often get the feeling of a hostile intelligence opposing you, which occasionally results in aggression toward it (usually verbal, though occasionally physical), the way we would confront an actual enemy. On the other hand, if you are the programmer who wrote the code in question, you see the misbehavior as bugs, not intentional hostility, and respond by debugging or documenting. Mostly. Sometimes I personalize especially nasty bugs.
I was told by a nurse that this is also how they are taught to handle difficult patients: you don't get upset at their misbehavior; instead, you treat them not as an agent but as an algorithm in need of debugging. Parents of young children are also advised to take this approach.
This seems to also apply to self-analysis, though to a lesser degree. If you know yourself well, and can predict what you would do in a specific situation, you may feel that your response is mechanistic or automatic and not agenty or intelligent. Or maybe not. I am not sure. I think if I had the capacity for full introspection, not just the surface-level understanding of my thoughts and actions, I would ascribe much less agency to myself. Probably because it would cease to be a useful concept. I wonder if this generalizes to a superintelligence capable of perfect or near-perfect self-reflection.
This leads us to the issue of feelings, deliberate choices, free will, and the ability to consent and take responsibility. These seem to be useful, if illusory, concepts for when you live among your intellectual peers and want to be treated as having at least as much agency as you ascribe to them. But this is a topic for a different post.