I did some work in this direction when I wrote about phenomenological complexity classes. I don't lay it out in much detail in that post, but I believe we can build on the work I do there to construct a topology of mindspace. The construction would assume a higher-order theory of consciousness and a formal model of the structure of consciousness grounded in intentionality, and then (and here's where I'm not sure what model will really work) perhaps treat minds as sets within a topological space or as points on manifolds, which would let us say something about the minds we do and don't find in spaces with particular properties.
Alas, this is all currently speculation, and I haven't needed to go further than pointing in this general direction to do any of the work I care about; but it is at least one starting point.
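To make that pointing slightly more concrete, here is one toy way the setup could go (entirely a guess at a formalization; M, the predicates, and the class A below are my placeholders, not anything fixed in the earlier post):

```latex
% Toy sketch only; every symbol here is a placeholder I am introducing.
% Minds are points, structural predicates generate the topology, and
% questions about which minds we find become questions about a subset.
\[
  M = \text{the class of minds}, \qquad
  B_\varphi = \{\, m \in M : m \models \varphi \,\}
  \text{ for structural predicates } \varphi,
\]
\[
  \tau = \text{the topology on } M \text{ generated by the sets } B_\varphi, \qquad
  A = \text{some class of minds of interest (e.g.\ the aligned ones)}.
\]
\[
  \text{The hoped-for statements then look like: is } A
  \text{ open, closed, or dense in } (M, \tau)?
\]
```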
Epistemic status: Babbling.
Let a map each mind in mindspace to how aligned it is. We are trying to optimize a. To that end, lemmata about the shape of mindspace are helpful. That's why we try to call it a space even before defining what category C it lives in.
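For concreteness, one possible reading (the codomain [0,1] is my assumption; the text above only says "how aligned it is"):

```latex
% One reading of the above; the codomain [0,1] is an assumption on my part.
\[
  a \colon \mathrm{Mind} \to [0,1], \qquad
  \text{and we want some } m^\ast \in \arg\max_{m \in \mathrm{Mind}} a(m).
\]
```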
To optimize a function, start with a diverse enumeration of its domain. The deontological enumeration covers all others with constant-factor overhead, but the consequentialist enumeration gives us more properties to work with.
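One way to make "covers all others with constant-factor overhead" precise, treating enumerations as maps from the naturals into mindspace (my reading; the constant c and the exact form of the bound are assumptions):

```latex
% Possible formalization: an enumeration e_deon covers an enumeration e
% with constant-factor overhead if everything e reaches in n steps,
% e_deon reaches within c*n steps, for some fixed constant c.
\[
  e_{\mathrm{deon}}, e \colon \mathbb{N} \to \mathrm{Mind}, \qquad
  \exists c \;\forall n \colon\;
  \{ e(1), \dots, e(n) \} \subseteq
  \{ e_{\mathrm{deon}}(1), \dots, e_{\mathrm{deon}}(c \cdot n) \}.
\]
```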
Every mind m has an implicit utility function u(m). The map a factors through u as a function, but not as a continuous function, let alone a C-morphism. That's why we've recently moved away from explicit utility maximizers.
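Spelling out "factors through" in the usual sense (the middle object UtilityFn and the codomain [0,1] are my placeholders): alignment would depend on a mind only via its implicit utility function, but the induced map need not be continuous, let alone a morphism in C.

```latex
% "a factors through u": there exists some map \bar{a} with a = \bar{a} \circ u,
% i.e. a(m) depends on m only through u(m).
% The point above is that \bar{a} exists as a bare function, nothing more.
\[
  a = \bar{a} \circ u, \qquad
  \mathrm{Mind} \xrightarrow{\;u\;} \mathrm{UtilityFn}
  \xrightarrow{\;\bar{a}\;} [0,1].
\]
```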
Use mathematical language to tell our story! Then we might guess where it's going.