Eliezer_Yudkowsky comments on An angle of attack on Open Problem #1 - Less Wrong

Post author: Benja 18 August 2012 12:08PM


Comment author: Eliezer_Yudkowsky 12 March 2013 01:10:26AM 1 point

(Requested reply.)

I think there'd be a wide variety of systems where, so long as the "parent" agent knows the exact strategy that its child will deploy in all relevant situations at "compile time", the parent will trust the child. The Löb problem arises when we want the parent to trust the child generally, without knowing exactly what the child will do. For the parent to precompute the child's exact actions implies that the child can't be smarter than the parent, so this is not the kind of situation we would encounter when, e.g., Agent A wants to build Agent B, which has more RAM and faster CPUs than Agent A while still sharing Agent A's goals. That, of course, is the kind of "agents building agents" scenario I am most interested in.
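To make the "parent precomputes the child's exact actions" regime concrete, here is a minimal sketch in Python (all names here are hypothetical illustrations, not from the comment): the parent enumerates every sensor input the child could see, asks the child's policy what it would do in each case, and signs off only if every resulting action satisfies the parent's goal predicate. The sketch works precisely because the toy sensor space is small enough to enumerate, which is the condition that fails once the child is larger or smarter than the parent.

```python
from itertools import product

# Hypothetical illustration: a "parent" that trusts a "child" policy only by
# exhaustively precomputing the child's action in every possible situation.

SENSOR_STATES = list(product([0, 1], repeat=3))  # toy sensor space: 3 bits

def child_policy(sensors):
    """Toy child: acts 'safe' unless all three sensors fire."""
    return "safe" if sum(sensors) < 3 else "risky"

def goal_predicate(sensors, action):
    """Parent's goal: the child never takes the 'risky' action."""
    return action != "risky"

def parent_verifies(policy):
    # Case-by-case check over the child's entire sensor space.
    # Feasible only when that space is small enough to enumerate, i.e.
    # when the child fits within the parent's case-considering budget.
    return all(goal_predicate(s, policy(s)) for s in SENSOR_STATES)

print(parent_verifies(child_policy))  # False: the all-ones case is risky
```

Note that this "trust" involves no quantified reasoning at all: the parent never proves anything about the child's behavior in general, it just checks each of the 8 cases one by one.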

Comment author: Eliezer_Yudkowsky 08 April 2013 10:32:53PM 2 points

During the April 2013 workshop I rephrased this as the principle "The actions and sensor values of the offspring should not appear outside of quantifiers". Justification: if we have to reason case-by-case about all possible actions, all possible sensor values, and all possible states of the world, then "child's sensor state space" × "child's action state space" × "world state space" must be no larger than "the number of cases we can consider"; since those state spaces grow exponentially with the child's size in bits, this in general implies a logarithmically smaller child. I call this the Vingean Principle.
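A worked version of that counting argument, in notation of my own (not from the comment): let N be the number of cases the parent can consider, S, A, and W the child's sensor, action, and world state spaces, and n the child's size in bits, so that the combined state space has size roughly 2^n or larger.

```latex
% Counting argument behind the Vingean Principle (notation mine, a sketch).
% N = cases the parent can check; S, A, W = sensor/action/world state spaces;
% n = child's size in bits, so |S| * |A| * |W| is roughly 2^n or larger.
\[
  |S| \cdot |A| \cdot |W| \;\le\; N,
  \qquad
  2^{n} \;\lesssim\; |S| \cdot |A| \cdot |W|
  \quad\Longrightarrow\quad
  n \;\lesssim\; \log_2 N .
\]
```

So a parent that can check, say, 2^40 cases can exhaustively vet only children with at most about 40 bits of combined state, which is why case-by-case trust cannot scale to a smarter child and quantified reasoning becomes necessary.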