Drahflow comments on Controlling Constant Programs - Less Wrong

Post author: Vladimir_Nesov 05 September 2010 01:45PM


Comment author: Drahflow 06 September 2010 10:36:11AM 0 points [-]

If agent() is actually agent('source of world'), as the classical Newcomb problem has it, I fail to see what is wrong with simply enumerating the possible actions and simulating 'source of world' with the constant call agent('source of world') replaced by the current action candidate, and then returning the action with the maximum payoff.
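A toy sketch of this proposal, in Python. The payoff structure below is a stand-in Newcomb-style example, not the world() program from the original post; make_world plays the role of "'source of world' with the agent's call replaced by a constant action candidate":

```python
def make_world(action_candidate):
    # The world program, simulated with every call to agent()
    # replaced by the constant `action_candidate`.
    # Newcomb-like payoff: the predictor fills box B iff the
    # (simulated) agent one-boxes.
    def world():
        box_b = 1_000_000 if action_candidate == "one-box" else 0
        if action_candidate == "one-box":
            return box_b
        else:
            return box_b + 1_000
    return world

def agent(possible_actions=("one-box", "two-box")):
    # Enumerate the possible actions, simulate the world once per
    # candidate, and return the action with maximum payoff.
    payoffs = {a: make_world(a)() for a in possible_actions}
    return max(payoffs, key=payoffs.get)

print(agent())  # "one-box" under this toy payoff
```

Under this payoff matrix the enumeration picks one-boxing (1,000,000 > 1,000). The sketch assumes the agent's call appears in the world in a form it can recognize and substitute, which is where the reply below pushes back.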

Comment author: Vladimir_Nesov 06 September 2010 10:37:54AM *  0 points [-]

See world2(). Also, the agent takes no parameters; it just knows the world program it's working with.

Comment author: Drahflow 06 September 2010 10:42:30AM 2 points [-]

The only difference I can see between "an agent which knows the world program it's working with" and "agent('source of world')" is that the latter agent can be more general.

Comment author: Will_Sawin 10 September 2010 11:55:58PM 0 points [-]

A prior distribution over possible states of the world, which is what you'd want to pass outside of toy-universe examples, is rather clearly part of the agent rather than a parameter.

Comment author: Vladimir_Nesov 06 September 2010 10:54:42AM *  0 points [-]

Yes, in a sense. (Although technically, the agent could know facts about the world program that can't be inferred, algorithmically or before a timeout, just from the program's source, and ditto for the agent's own program, but that's a fine point.)