Followup to: Intelligence Metrics and Decision Theory
Related to: Bridge Collapse: Reductionism as Engineering Problem
A central problem in AGI is giving a formal definition of intelligence. Marcus Hutter has proposed AIXI as a model of a perfectly intelligent agent. Legg and Hutter have defined a quantitative measure of intelligence, applicable to any suitably formalized agent, such that AIXI is the agent with maximal intelligence according to this measure.
Legg-Hutter intelligence suffers from a number of problems I have previously discussed, the most important being:
Logical Uncertainty
The formalism introduced here was originally proposed by Benja.
Fix a formal system F. We want to be able to assign probabilities to statements s in F, taking into account limited computing resources. Fix a natural number D, related to the amount of computing resources, which I call the "depth of analysis".
Define P_0(s) := 1/2 for all s to be our initial prior, i.e. each statement's truth value is decided by a fair coin toss. Now define
P_D(s) := P_0(s | there are no contradictions of length <= D).
Consider X to be a number in [0, 1] given by a definition in F. Then d_k(X) := "The k-th digit of the binary expansion of X is 1" is a statement in F. We define E_D(X) := Σ_k 2^{-k} P_D(d_k(X)).
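To make the construction concrete, here is a minimal sketch in Python, under toy assumptions: "contradictions of length <= D" are modeled by a hand-written list of constraints with attached proof lengths, and P_D is computed by exact enumeration of fair-coin truth assignments. All names here (CONSTRAINTS, p_D, e_D) are illustrative, not part of the formalism.

```python
import itertools

# Toy universe: statements are boolean variables s_0 .. s_{n-1}.
# A "contradiction of length <= D" is modeled by (proof_length, constraint)
# pairs: an assignment is consistent at depth D iff it satisfies every
# constraint whose proof_length is <= D.

N_STATEMENTS = 3

CONSTRAINTS = [
    (2, lambda a: a[0] or a[1]),          # short proof: s_0 or s_1 holds
    (5, lambda a: not (a[1] and a[2])),   # longer proof: s_1, s_2 clash
]

def consistent(assignment, depth):
    """True if no contradiction of length <= depth is exhibited."""
    return all(c(assignment) for length, c in CONSTRAINTS if length <= depth)

def p_D(statement_index, depth):
    """P_D(s) = P_0(s | no contradiction of length <= depth), computed
    exactly by enumerating all 2^n fair-coin truth assignments."""
    survivors = [a for a in itertools.product([False, True], repeat=N_STATEMENTS)
                 if consistent(a, depth)]
    return sum(a[statement_index] for a in survivors) / len(survivors)

def e_D(digit_statements, depth, k_max=20):
    """E_D(X) = sum_k 2^{-k} P_D(d_k(X)), truncated at k_max digits.
    digit_statements maps k to the index of the statement d_k(X)."""
    return sum(2.0 ** -k * p_D(digit_statements(k), depth)
               for k in range(1, k_max + 1))

# At depth 0 every statement has probability 1/2; raising the depth
# conditions on more logical facts and moves the probabilities.
print(p_D(0, depth=0))  # 0.5
print(p_D(0, depth=2))  # ~0.667: assignments violating "s_0 or s_1" excluded
```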
Remarks
If the negation of s has a proof of length <= D, then P_D(s) = 0: any truth assignment making s true exhibits a contradiction of length <= D, so it is excluded by the conditioning.
Non-Constructive UDT
Consider a decision algorithm A optimizing a utility U, producing an output ("decision") which is an element of C. Here U is just a constant defined in F. We define the U-value of c in C for A at depth of analysis D to be
V_D(c, A; U) := E_D(U | "A produces c" is true). It is well defined only as long as "A doesn't produce c" cannot be proved at depth of analysis D, i.e. P_D("A produces c") > 0. We define the absolute U-value of c for A to be
V(c, A; U) := E_{D(c, A)}(U | "A produces c" is true), where D(c, A) := max {D | P_D("A produces c") > 0}. Of course, D(c, A) can be infinite, in which case E_∞(...) is understood to mean lim_{D -> ∞} E_D(...).
For example, V(c, A; U) yields the natural values when A is an ambient control algorithm applied to, e.g., a simple model of Newcomb's problem. To see this, note that given A's output, the value of U can be determined at low depth of analysis, whereas the output of A itself requires a very high depth of analysis to determine.
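Continuing the same toy style, here is a sketch of that Newcomb situation: the payoff digits are pinned down by short proofs once A's output is given, while A's actual output is only settled by a long proof. The encoding (variable roles, proof lengths, payoffs) is invented for illustration.

```python
import itertools

# Toy Newcomb model: a[0] = "A one-boxes", a[1], a[2] = the first two
# binary digits of U (so U = a[1]/2 + a[2]/4). Short proofs settle U
# from A's output; A's output itself is settled only by a long proof.
CONSTRAINTS = [
    (3, lambda a: (not a[0]) or (a[1] and a[2])),   # one-boxing => U = 0.75
    (3, lambda a: a[0] or (not a[1] and a[2])),     # two-boxing => U = 0.25
    (50, lambda a: a[0]),                           # A in fact one-boxes
]

def survivors(depth, condition):
    return [a for a in itertools.product([False, True], repeat=3)
            if all(c(a) for length, c in CONSTRAINTS if length <= depth)
            and condition(a)]

def v_D(one_box, depth):
    """V_D(c, A; U) = E_D(U | 'A produces c'), or None if P_D = 0."""
    s = survivors(depth, lambda a: a[0] == one_box)
    if not s:
        return None  # "A produces c" already refuted at this depth
    return sum(a[1] / 2 + a[2] / 4 for a in s) / len(s)

def d_max(one_box, depth_cap=100):
    """D(c, A) = max depth at which 'A produces c' is still possible."""
    return max(d for d in range(depth_cap) if v_D(one_box, d) is not None)

# At depth >= 3 the payoffs are pinned down by A's hypothetical output,
# long before depth 50 reveals what A actually does:
print(v_D(True, 10))   # 0.75: value of one-boxing
print(v_D(False, 10))  # 0.25: value of two-boxing
print(d_max(False))    # 49: two-boxing is refutable from depth 50 on
```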
Naturalized Induction
Our starting point is the "innate model" N: a certain a priori model of the universe, including the agent G. This model encodes the universe as a sequence of natural numbers Y = (y_k) which obeys either specific deterministic or non-deterministic dynamics, or at least some constraints on the possible histories. It may or may not include information on the initial conditions. For example, N can describe the universe as a universal Turing machine M (representing G) with special "sensory" registers e. N constrains the dynamics to be compatible with the rules of the Turing machine but leaves the behavior of e unspecified. Alternatively, N can contain, in addition to M, a non-trivial model of the environment. Or N can be a cellular automaton with the agent corresponding to a certain collection of cells.
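As an illustration of the first example, here is a minimal sketch of N as a constraint on histories: a toy machine whose dynamics are enforced everywhere except on a sensory register. The state layout and the names machine_step and n_allows are made up for this sketch.

```python
# The history Y = (y_0, y_1, ...) lists machine states; N constrains each
# transition to follow the machine's step function except on the
# "sensory" register, which is left unspecified.

SENSORY = "e"  # the register N leaves unconstrained

def machine_step(state):
    """Deterministic part of the dynamics (placeholder: a toy counter)."""
    return {**state, "counter": state["counter"] + 1}

def n_allows(history):
    """N as a constraint on histories: every transition must agree with
    machine_step on everything except the sensory register."""
    for prev, cur in zip(history, history[1:]):
        expected = machine_step(prev)
        if any(cur[k] != expected[k] for k in cur if k != SENSORY):
            return False
    return True

# Two histories with identical machine dynamics but different sensory
# data are both compatible with N:
h1 = [{"counter": 0, "e": 0}, {"counter": 1, "e": 1}]
h2 = [{"counter": 0, "e": 0}, {"counter": 1, "e": 0}]
print(n_allows(h1), n_allows(h2))  # True True
```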
However, G's confidence in N is limited: otherwise it wouldn't need induction. We cannot start with zero confidence either: it's impossible to program a machine if you don't have even a guess of how it works. Instead, we introduce a positive real number t which represents the timescale over which N is expected to hold. We then assign to each hypothesis H about Y (you can think of them as programs which compute y_k given y_j for j < k; more on that later) the weight Q_S(H) := 2^{-L(H)} (1 - e^{-t(H)/t}). Here L(H) is the length of H's encoding in bits and t(H) is the time during which H remains compatible with N. This is defined for N of deterministic / constraint type but can be generalized to stochastic N.
The weights Q_S(H) define a probability measure on the space of hypotheses, which induces a probability measure on the space of histories Y. Thus we get an alternative to Solomonoff induction which allows G to be a mechanistic part of the universe, at the price of introducing N and t.
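A minimal sketch of this prior, with made-up hypotheses summarized only by their L(H) and t(H):

```python
import math

# Each hypothesis is summarized by L(H), its encoding length in bits,
# and t(H), how long it stays compatible with the innate model N.
# The concrete numbers are invented for illustration.
T_SCALE = 100.0  # t: timescale over which N is expected to hold

hypotheses = {
    # name: (L(H) in bits, t(H))
    "N exactly":        (10, math.inf),  # agrees with N forever
    "N until step 500": (12, 500.0),     # deviates from N late
    "N until step 5":   (11, 5.0),       # deviates from N early
}

def q_s(length_bits, t_h, t_scale=T_SCALE):
    """Unnormalized weight Q_S(H) = 2^{-L(H)} * (1 - exp(-t(H)/t))."""
    return 2.0 ** -length_bits * (1.0 - math.exp(-t_h / t_scale))

weights = {name: q_s(L, t_h) for name, (L, t_h) in hypotheses.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(f"{name:18s} Q_S = {w:.3e}  normalized = {w / total:.3f}")
# Hypotheses that both compress well (small L) and respect N for a long
# time (large t(H)) dominate the induced measure over histories Y.
```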
Remarks
Intelligence Metric
To assign intelligence to agents we need to add two ingredients:
Instead, we define I(Q_0) := E_{Q_S}(E_max(U(Y(H)) | "Q(Y(H)) = Q_0" is true)). Here the subscript max stands for the maximal depth of analysis, as in the construction of the absolute UDT value above.
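Schematically, the metric is an outer Q_S-expectation of an inner logical-uncertainty expectation. The sketch below shows only that shape: conditional_e_max and the treatment of hypotheses where the conditional is undefined (dropped, with renormalization) are assumptions of mine, and any realistic instantiation is of course uncomputable.

```python
def intelligence(q0, hypotheses, q_s_weight, conditional_e_max):
    """I(Q_0) = E_{Q_S}( E_max( U(Y(H)) | "Q(Y(H)) = Q_0" ) ).

    hypotheses        -- iterable of hypothesis objects H
    q_s_weight        -- H -> unnormalized Q_S(H)
    conditional_e_max -- (H, Q_0) -> inner logical-uncertainty expectation,
                         or None where the conditional is undefined (P = 0);
                         such hypotheses are dropped and the outer measure
                         renormalized (one plausible reading).
    """
    terms = [(q_s_weight(h), conditional_e_max(h, q0)) for h in hypotheses]
    terms = [(w, v) for w, v in terms if v is not None]
    total = sum(w for w, _ in terms)
    return sum(w * v for w, v in terms) / total

# Tiny smoke test with made-up numbers: two hypotheses, one of which
# fails the conditioning entirely.
hs = ["H1", "H2"]
w = {"H1": 0.7, "H2": 0.3}
inner = {"H1": 0.9, "H2": None}  # conditional undefined under H2
print(intelligence("Q0", hs, w.get, lambda h, q0: inner[h]))  # 0.9
```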
Remarks