Squark comments on Intelligence Metrics with Naturalized Induction using UDT - Less Wrong

Post author: Squark 21 February 2014 12:23PM


Comment author: Squark 24 February 2014 08:10:11PM 0 points

> But would you agree with the claim that a real-world agent is going to have to use a formulation that fits inside limits on the length of usable proofs?

I'm not defining an agent here; I'm defining a mathematical function which evaluates agents. It is uncomputable (as is the Legg-Hutter metric).
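For reference (this is the standard definition from Legg and Hutter's 2007 paper, in their notation rather than the notation of my post), the Legg-Hutter measure scores a policy $\pi$ as

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu},$$

where $E$ is the class of computable environments, $K$ is Kolmogorov complexity and $V^{\pi}_{\mu}$ is the expected total reward of $\pi$ in environment $\mu$. Since $K$ is uncomputable, so is $\Upsilon$; my metric is uncomputable for an analogous reason.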

> Upon further reflection, I think the problems are not with your distribution... but with the neglect of bridging laws or different ways of representing the universe.

N defines the ontology in which the utility function and the "intrinsic mind model" are defined. Y should be regarded as the projection of the universe onto this ontology rather than as the "objective universe" (whatever the latter means). Thus H implicitly includes both the model of the universe and the bridging laws; in particular, its complexity reflects the total complexity of both.

For example, if N is classical and the universe is quantum mechanical, G will arrive at a hypothesis H which combines quantum mechanics with decoherence theory to produce classical macroscopic histories. This hypothesis will have large t(H), since quantum mechanics correctly reproduces the classical dynamics of M at the macroscopic level. This shouldn't come as a surprise: we also perceive the world as classical. More precisely, there will be a dominant family of hypotheses differing only in the results of "quantum coin tosses". That is, this ontological projection is precisely the place where the probability interpretation of the wavefunction arises.
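To make that last point concrete, here is a toy sketch (mine, not a construction from the post; the function name and the two-outcome amplitudes are made up for the example). Each classical history of "quantum coin tosses" is a separate hypothesis in the dominant family, and the projection onto the classical ontology is exactly where the Born-rule weights enter:

```python
from itertools import product
from math import prod

def branch_histories(amplitudes, steps):
    """Enumerate classical histories of repeated 'quantum coin tosses'.

    amplitudes: dict mapping a macroscopic outcome label to its complex
    amplitude for a single toss. Each length-`steps` tuple of outcomes is
    one classical hypothesis in the dominant family; its weight is the
    Born-rule probability |amplitude|^2 multiplied across the tosses.
    """
    weights = {label: abs(a) ** 2 for label, a in amplitudes.items()}
    return {
        history: prod(weights[label] for label in history)
        for history in product(weights, repeat=steps)
    }

# A biased 'quantum coin': the hypotheses differ only in the toss
# outcomes, and the ontological projection assigns each its Born weight.
coin = {"up": 3 ** 0.5 / 2, "down": 0.5}  # |up|^2 = 0.75, |down|^2 = 0.25
for history, weight in branch_histories(coin, steps=2).items():
    print(history, round(weight, 4))
```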