Coscott comments on Single player extensive-form games as a model of UDT - Less Wrong

Post author: cousin_it 25 February 2014 10:43AM




Comment author: Coscott 25 February 2014 08:15:28PM 0 points

I do not think it is something you should just assume; it is an empirical question. Behavioral strategies might not be realistic, because they seem to depend on non-determinism.

Comment author: cousin_it 25 February 2014 08:17:21PM 2 points

Well, in the Absent-Minded Driver problem it seems reasonable to allow the driver to flip a coin whenever he's faced with a choice. Why do you think that's unrealistic?

Comment author: Coscott 25 February 2014 08:32:15PM 1 point

Hmm. I was thinking that determinism requires that you get the same output in the same situation, but I guess I was not accounting for the fact that we do not require the two nodes in the information set to be the same situation; we only require that they are indistinguishable to the agent.

It does seem realistic to have the absent-minded driver flip a coin (although perhaps it is better to model that as a third option of flipping a coin, which points to a chance node).
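For concreteness, here is a minimal sketch of why the driver wants that coin, using the standard payoffs from the literature (exit at the first intersection: 0, exit at the second: 4, drive past both: 1). These payoffs are an assumption; the thread does not state them.

```python
# Behavioral strategy for the Absent-Minded Driver.
# Assumed payoffs (standard in the literature, not stated in this thread):
# exit at first intersection = 0, exit at second = 4, drive past both = 1.
def expected_payoff(p):
    """Expected payoff when the driver continues with probability p
    at every intersection (he cannot tell them apart)."""
    exit_first = (1 - p) * 0
    exit_second = p * (1 - p) * 4
    continue_both = p * p * 1
    return exit_first + exit_second + continue_both

# Search a grid for the best continuation probability. The maximum is at
# p = 2/3 with value 4/3, which beats both deterministic strategies
# (p = 0 gives 0, p = 1 gives 1) -- hence the value of randomization.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
```

Because both nodes sit in one information set, the driver must use the same p at each, and only an interior p achieves the optimum; a deterministic agent is stuck with 0 or 1.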

On the other hand, if I am a deterministic Turing machine, and Omega simulates me and puts a dollar in whichever of two boxes he predicts I will not pick, then I cannot win this game unless I have an outside source of randomness.
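A minimal sketch of this anti-prediction game, under the simplifying assumption that Omega predicts by re-running the agent: for a deterministic agent the re-run matches exactly, while an agent with an outside coin gives Omega's simulation an independent flip. All names here are illustrative, not from the thread.

```python
import random

rng = random.Random(0)  # the agent's outside source of randomness

def deterministic_agent():
    return 0  # always picks box 0; Omega's simulation reproduces this

def random_agent():
    return rng.randrange(2)  # outside coin: Omega's run flips independently

def play(agent):
    """Omega puts the dollar in whichever box it predicts the agent
    will NOT pick, then the agent actually picks."""
    prediction = agent()       # Omega's simulation of the agent
    money_box = 1 - prediction  # dollar goes in the other box
    pick = agent()             # the agent's real choice
    return pick == money_box   # True iff the agent wins the dollar

det_wins = sum(play(deterministic_agent) for _ in range(1000))  # 0: never wins
rand_wins = sum(play(random_agent) for _ in range(1000))        # roughly 500
```

The deterministic agent loses every round because its pick always matches the prediction; the randomizing agent wins about half the time, which is the best any strategy can do here.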

It seems like in different situations, you want different models. It seems to me like you have two different types of agents: a deterministic dUDT agent and a randomized rUDT agent. We should be looking at both, because they are not the same. I also do not know which one I am as a human.

By asking about the Absent-Minded Driver with a coin, you were phrasing the problem so that it does not matter, because an rUDT agent is just a dUDT agent which has access to a fair coin that he can flip any number of times at no cost.

Comment author: cousin_it 25 February 2014 08:43:32PM 0 points

I agree that there is a difference, and I don't know which model describes humans better. It doesn't seem to matter much in any of our toy problems though, apart from AMD where we really want randomness. So I think I'm going to keep the post as is, with the understanding that you can remove randomness from the model if you really want to.

Comment author: Coscott 25 February 2014 08:59:27PM 0 points

I agree that that is a good solution. Since adding randomness to a node is something that can be done in a formulaic way, it makes sense to have information sets which are just labeled as "you can use behavioral strategies here." It also makes sense to have them labeled as such by default.

I do not think that agents wanting but not having randomness is any more pathological than Newcomb's problem (although that is already pretty pathological).