Preface: I am just noting that we seem to base our morality on some rather ill-defined intuitive notion of complexity. If you think it is not workable for AI, or something along those lines, that thought does not yet constitute a disagreement with what I am writing here.
More preface: utilitarian calculus is the idea that what people value can be described simply as a summation. Complexity is another kind of f(a,b,c,d): it behaves vaguely like a 'sum', but is not as simple as summation. If a,b,c,d are strings and we are in a programming language, the expression would often be written f(a+b+c+d), using + to mean concatenation, which is something fundamentally different from summation of real-valued numbers. But it can appear confusingly close: for a,b,c,d that don't share much information among themselves, the result behaves a lot like a function of a sum of real numbers. It diverges from the sum-like behaviour as a,b,c,d come to share more information, much as our intuitions about what is right diverge from sum-like behaviour when you start considering exact duplicates of people that diverged only a few minutes ago.
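A minimal sketch of this sum-versus-concatenation behaviour, using zlib's compressed size as a crude, computable stand-in for complexity (the choice of compressor is my assumption, not part of the argument):

```python
import os
import zlib

def c(x: bytes) -> int:
    # compressed size: a rough, computable stand-in for complexity
    return len(zlib.compress(x, 9))

a, b = os.urandom(2000), os.urandom(2000)   # share no information
print(c(a) + c(b), c(a + b))    # nearly equal: f behaves like a sum
print(c(a) + c(a), c(a + a))    # c(a + a) ~ c(a): far below the "sum"
```

For unrelated random strings the concatenation's complexity tracks the sum; for a string concatenated with its own duplicate it collapses toward a single copy's complexity, mirroring the duplicate-people intuition.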
It's a very rough idea, but it seems to me that a lot of common-sense moral values are based on some sort of intuitive notion of complexity. Happiness via highly complex stimuli passing through highly complex neural circuitry inside your head seems like a good thing to pursue; happiness via a wire, a resistor, and a battery seems like a bad thing. What makes the idea of literal wireheading and hard pleasure-inducing drugs so revolting to me is the simplicity, the banality of it. I have far fewer objections to e.g. hallucinogens (I never took any myself, but I am also an artist, and I can guess that other people may have lower levels of certain neurotransmitters, making them unable to imagine what I can imagine).
Complexity-based metrics have the property that they easily eat for breakfast huge numbers like "a dust speck in 3^^^3 eyes", and even infinities. The torture of a conscious being for a long period of time can easily be a more complex matter than even an infinite number of dust specks.
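To make the "eats huge numbers for breakfast" point concrete: Knuth's up-arrow notation pins down 3^^^3 in a few lines of code, so its descriptive complexity is tiny regardless of its magnitude. A sketch (small cases only):

```python
def up(a: int, n: int, b: int) -> int:
    # Knuth's up-arrow a ^(n) b: a handful of lines fully specify 3^^^3,
    # so its *description* is tiny even though its value is unimaginably large
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 2, 3))   # 2^^3 = 16: small cases evaluate fine
# up(3, 3, 3) is 3^^^3: well defined above, hopelessly infeasible to evaluate.
```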
Unfortunately, complexity metrics like Kolmogorov complexity are not computable for arbitrary input, and are large for truly random data. But insofar as a scenario is specific and has been arrived at by computation, the complexity of that computation sets an upper bound on the complexity of the scenario. The mathematics may also not be here yet: we have an intuitive notion of complexity under which totally random noise is not very complex, and a very regular signal is not either, but some patterns in between are highly complex.
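A quick sketch of the upper-bound point, again assuming zlib as a stand-in: any compressor gives a computable (if loose) upper bound on Kolmogorov complexity, and for data produced by a short program the bound tracks that program's size. Note that it still rates noise as maximally complex, unlike the intuitive notion above:

```python
import os
import zlib

pattern = b"ab" * 500_000        # produced by a ~30-character program
noise = os.urandom(1_000_000)    # no known program much shorter than itself

# The compressed size bounds K() from above: tiny for the pattern,
# trivial (~1,000,000) for the noise, which K() calls maximally complex.
print(len(zlib.compress(pattern, 9)))
print(len(zlib.compress(noise, 9)))
```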
This may be difficult to formalize. We could, of course, only assign complexities when we are informed of the properties of something, rather than computing them for arbitrary input from scratch: if something is labelled 'random numbers', its complexity is low; if it is an encrypted volume of the works of Shakespeare, then even though in practice we couldn't distinguish it from randomness (assuming good encryption), once we are told what it is, we can assign it a higher complexity.
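To see why being told matters, here is a toy sketch: a deliberately insecure XOR stream cipher (hypothetical, for illustration only) makes structured text look incompressible, so a compressor, which is never "told" anything, rates the ciphertext as random:

```python
import hashlib
import zlib

def keystream(key: bytes):
    # toy XOR stream cipher for illustration only; not secure
    counter = 0
    while True:
        yield from hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1

text = b"Shall I compare thee to a summer's day? " * 2000
cipher = bytes(t ^ k for t, k in zip(text, keystream(b"secret")))

print(len(zlib.compress(text, 9)))    # small: the structure is visible
print(len(zlib.compress(cipher, 9)))  # ~len(text): looks like pure noise
# Told the key, we could decrypt first: the conditional complexity given
# the key is as low as that of the plaintext itself.
```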
This also aligns with whatever it is that evolution has been maximizing on the path leading up to H. sapiens (note that for the most part, evolution's power has gone into improving bacteria; the path leading up to H. sapiens is a very special case). Maybe we for some reason try to extrapolate this [note: for example, a lot of people rank their preference for animals as food by the animal's behavioural complexity, which makes humans the least desirable food; we have anti-whaling treaties]; maybe it is a form of goal convergence between the brain as an intelligent system and evolution (both employ hill climbing to arrive at solutions); or maybe we evolved a system that aligns with where evolution was heading because that increased fitness [edit: to address a possible comment, we have another system based on evolution, the immune system, which works by evolving antibodies through somatic hypermutation; it's not inconceivable that we use some evolution-like mechanism to tweak our own neural circuitry, given that it undergoes massive pruning in the early stages of life].
I'll link another post, which I didn't know of, that explains this considerably better:
http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/
Ultimately, complexity in the sense of Kolmogorov complexity is not computable (nor always useful). There are various complexity metrics that are more practical. The interesting thing about complexity metrics is that under certain conditions the complexity of the concatenation of A and B is close to the sum of their complexities, while under other conditions it is far below it; which case you get generally has to do with how much A and B have in common. The problem, of course, is that we can't quite pin down which exact metric is being used here.
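One practical metric in this family is the Normalized Compression Distance of Cilibrasi and Vitányi, which measures exactly that "how much in common" quantity; a sketch with zlib standing in for the uncomputable K():

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance (Cilibrasi & Vitanyi), with zlib's
    # compressed size standing in for the uncomputable K()
    cx, cy, cxy = (len(zlib.compress(s, 9)) for s in (x, y, x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

shared = b"the quick brown fox jumps over the lazy dog; " * 50
print(ncd(shared, shared))                 # ~0: everything in common
print(ncd(shared, bytes(range(256)) * 9))  # near 1: little in common
```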
One sort of complexity is the size of the internal representation inside the head. We don't know how we represent things internally; that is a very complicated problem. It does seem that we use compression, implying that the 'size' of a thing inside the head depends on its complexity, in the sense of its repetitiveness, while ignoring randomness. It may be that, our hardware being faulty and all, the size of the internal representation plays a role. It is clearly the case that the more abstractly we represent strangers, the less we care about them.
I don't see how that's relevant.
What complexity metric are you using? I suspect it involves only counting information that you find interesting, or something to that effect. Otherwise, I don't see how random data could possibly have low complexity.