I would tend to say granularity, which I take to be defined by the surface-area-to-volume ratio, but always in the context of scale. It behaves in a fractal-like, non-linear way.
So for this example, the number of connections you can get into a given volume is limited by the processing and receptor capacity inside that volume, and by the surface area available to provide entry points. That makes the ratio potentially the ultimate metric of capability for a region/space/functional sub-system that is specialised to one function.
I'd make up a unit for this, m²/m³, which reduces to inverse metres (m⁻¹) rather than being truly dimensionless...
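As a rough illustration of why the scale context matters (a minimal sketch for intuition only, assuming a spherical region as the simplest shape), the surface-area-to-volume ratio of a sphere is 4πr² / (4/3 πr³) = 3/r, so it shrinks as the region grows:

```python
import math

def sphere_sa_to_volume(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere, in 1/m (equals 3 / r)."""
    surface_area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return surface_area / volume

# Smaller regions have proportionally more surface to feed the volume.
for r in (0.001, 0.01, 0.1, 1.0):
    print(f"r = {r:>6} m  ->  SA/V = {sphere_sa_to_volume(r):8.1f} 1/m")
```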
So if I had failure-rate data that was in theory independent (for each failure) and normally distributed, but that in practice I knew was probably clustered due to each user's number of items, service, duty and treatment of the parts, could I just submit this as multimodal data and get an idea of the likely distribution?
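Here is a minimal sketch of what treating it as multimodal data could look like in practice, assuming the failure-rate observations sit in a 1-D array and that a Gaussian mixture is an acceptable model for the clusters (the synthetic two-cluster data and the BIC-based choice of component count are illustrative assumptions, not anything your data dictates):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# failure_rates: hypothetical 1-D failure-rate observations, clustered by
# duty, service, treatment, etc. (synthetic two-cluster data for the sketch)
rng = np.random.default_rng(0)
failure_rates = np.concatenate([
    rng.normal(0.02, 0.005, 200),   # gentle-duty cluster (illustrative)
    rng.normal(0.08, 0.015, 120),   # harsh-duty cluster (illustrative)
]).reshape(-1, 1)

# Fit mixtures with 1..4 components and keep the one with the lowest BIC,
# which gives a rough idea of how many modes the data actually supports.
fits = [GaussianMixture(n_components=k, random_state=0).fit(failure_rates)
        for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(failure_rates))

print("components:", best.n_components)
print("cluster means:", best.means_.ravel())
print("cluster std devs:", np.sqrt(best.covariances_).ravel())
```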
How would I apply a confidence test, or determine an interval for a given confidence level, so I could state a likely reliability from the distribution? Or would I maybe end up with multiple piecewise intervals?
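One route that sidesteps the multimodality entirely is a percentile bootstrap: resample the data, recompute whatever reliability figure you care about each time, and take percentiles of those recomputed values as the interval. A minimal sketch under that assumption (the 1 % failure-rate threshold and the 95th-percentile statistic are purely hypothetical placeholders):

```python
import numpy as np

def bootstrap_ci(data: np.ndarray, statistic, n_boot: int = 5000,
                 confidence: float = 0.95, seed: int = 0):
    """Percentile bootstrap confidence interval for any scalar statistic."""
    rng = np.random.default_rng(seed)
    stats = np.array([
        statistic(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    alpha = 1.0 - confidence
    return np.quantile(stats, [alpha / 2, 1.0 - alpha / 2])

# Example reliability figures one might care about (thresholds illustrative):
data = failure_rates.ravel()  # reusing the array from the mixture sketch
print("95% CI on the 95th-percentile failure rate:",
      bootstrap_ci(data, lambda x: np.quantile(x, 0.95)))
print("95% CI on P(rate < 0.01):",
      bootstrap_ci(data, lambda x: np.mean(x < 0.01)))
```

If you quote one interval on a single summary statistic like this, you get one interval; if you instead quote an interval per mixture component, you would indeed end up with separate, piecewise intervals, one per cluster.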
I have a specific need in mind, but with this I am somewhat at the edge of my detailed statistical understanding as it currently stands.