gjm comments on AI Tao - Less Wrong

-11 Post author: sbenthall 21 October 2014 01:15AM



Comment author: gjm 21 October 2014 02:28:42PM 3 points [-]

An agent's optimization power does not equal the unlikelihood of the world it creates. At most it's the unlikelihood of the worlds it *can* create, if creating them serves its goals. But since (so at least it seems plausible) most agents will have no limit to the properties they would like the world to have, most agents will not stop trying to optimize the world until they reach the limits of their abilities, which may make improbability a reasonable surrogate for power.

Perhaps your point is precisely that we shouldn't take "unlikelihood of world produced" as a reliable indication of optimization power, contra EY's proposal. If so, I suppose I agree, but I don't think the argument here is any good, because:

If "power" is taken to mean "improbability of world produced", then it is plainly not the case that doing nothing produces the most improbable (hence most evidential-of-power) world. The improbability that indicates power is the improbability of the outcome *conditional on the agent not acting*, and an agent that does nothing ends up, by definition, in a typical no-action world. So you're mixing up two completely different notions of "unlikely", and it's not surprising you get paradoxical-looking results.
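The distinction can be made concrete with a toy simulation (a sketch of the idea, not EY's exact formalism; the world model, scoring function, and sample numbers here are all invented for illustration). Optimization power in bits is measured as the improbability, under the no-action distribution, of doing at least as well as the agent's achieved outcome: an idle agent lands in a typical null world and registers roughly zero to one bit, while an agent that steers the world to a rare high-scoring state registers many bits.

```python
import math
import random

def optimization_power_bits(outcome_score, null_samples, score):
    """Bits of optimization: -log2 of the fraction of no-action worlds
    scoring at least as well as the achieved outcome. This is the
    *conditional* notion of unlikelihood the comment points at."""
    at_least = sum(1 for w in null_samples if score(w) >= outcome_score)
    p = at_least / len(null_samples)
    return -math.log2(p) if p > 0 else float("inf")

random.seed(0)
# Null model (assumed for illustration): if the agent does nothing,
# the world state drifts as a standard normal draw.
null_worlds = [random.gauss(0, 1) for _ in range(100_000)]
score = lambda w: w  # higher world-state is better for this agent

# Agent does nothing: its outcome is a typical null draw.
idle_outcome = 0.0
# Agent optimizes: pushes the world to an improbable high-scoring state.
optimized_outcome = 3.0

idle_bits = optimization_power_bits(score(idle_outcome), null_worlds, score)
opt_bits = optimization_power_bits(score(optimized_outcome), null_worlds, score)
print(f"idle agent:      {idle_bits:.2f} bits")
print(f"optimizing agent: {opt_bits:.2f} bits")
```

Under this measure the idle agent scores about one bit (half of no-action worlds do at least as well as the median), while the optimizing agent scores many bits, so "doing nothing" does not come out as maximally powerful; the paradox only arises if improbability is computed against some other, unconditional baseline.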