A 'random utility function' is a utility function selected according to some simple probability measure over a logical space of formal, compact specifications of utility functions.
For example: suppose utility functions are specified by computer programs (e.g., a program that maps an output description to a rational number). We then draw a random computer program from the standard universal prior on computer programs, under which a utility-specifying program $U$ is drawn with probability proportional to $2^{-K(U)}$, where $K(U)$ is the algorithmic complexity (Kolmogorov complexity) of the utility-specifying program.
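A minimal sketch of drawing from such a prior: if programs are prefix-free bitstrings, then flipping a fair coin for each bit until a complete program appears gives each program of length $n$ probability $2^{-n}$. The `is_complete_program` convention below (a program ends with the marker bits 1,1) is a toy assumption purely for illustration, not part of any real encoding:

```python
import random

def is_complete_program(bits):
    # Toy "prefix-free" convention (an assumption for illustration):
    # a program is complete once it ends with the marker bits 1, 1.
    return len(bits) >= 2 and bits[-2:] == [1, 1]

def sample_program(rng, max_len=64):
    # Flip fair coins one bit at a time; stopping at the first
    # complete program gives each program of length n probability
    # 2^-n, i.e. a draw from this toy universal prior.
    bits = []
    while len(bits) < max_len:
        bits.append(rng.randint(0, 1))
        if is_complete_program(bits):
            return bits
    return None  # exceeded the length cap without completing

rng = random.Random(0)
draws = [sample_program(rng) for _ in range(10000)]
lengths = [len(p) for p in draws if p is not None]
# The shortest program, [1, 1], alone has probability 1/4,
# illustrating how such a prior concentrates mass on simple programs.
```

Note how heavily the mass concentrates on the simplest programs; this is the feature the amendments below (e.g., flattening the prior) would address.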
This obvious measure could be amended further, e.g., to handle non-halting programs; to avoid putting almost all of the probability mass on extremely simple programs; to impose a satisficing criterion that the resulting utility function be computationally tractable and physically possible to optimize for (as assumed in the Orthogonality Thesis); etcetera.
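The first amendment, handling non-halting programs, is commonly approximated by running each candidate under a step budget and rejecting any program that fails to halt in time. A hedged sketch, where `countdown_step` is a made-up toy machine used only to exercise the wrapper:

```python
def evaluate_with_budget(step_fn, state, max_steps=1000):
    # Run a small-step interpreter for at most max_steps steps.
    # step_fn(state) returns ("halt", value) or ("continue", new_state).
    # Returning None means "did not halt within budget": under the
    # amended measure, such a program would be rejected.
    for _ in range(max_steps):
        tag, payload = step_fn(state)
        if tag == "halt":
            return payload
        state = payload
    return None

def countdown_step(n):
    # Toy machine: counts down to zero, then halts with value 0.
    return ("halt", 0) if n == 0 else ("continue", n - 1)

evaluate_with_budget(countdown_step, 5)      # halts within budget -> 0
evaluate_with_budget(countdown_step, 10**6)  # exceeds budget -> None
```

This turns the undecidable halting question into a decidable (if imperfect) filter, at the cost of excluding slow-but-halting programs.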
Complexity of value is the thesis that the attainable optimum of a random utility function has near-null goodness with very high probability. That is: the attainable optimum configurations of matter for a random utility function are, with very high probability, the moral equivalent of paperclips. This in turn implies that a superintelligence with a random utility function is with very high probability the moral equivalent of a paperclip maximizer.
A 'random utility function' is not: