# Simulating Problems

1 30 January 2013 01:14PM

Apologies for the rather mathematical nature of this post, but it seems to have some implications for topics relevant to LW. Prior to posting I looked for literature on this but was unable to find any; pointers would be appreciated.

In short, my question is: How can we prove that any simulation of a problem really simulates the problem?

I want to demonstrate that this is not as obvious as it may seem by using the example of Newcomb's Problem. The issue here is of course Omega's omniscience. If we construct a simulation with the rules (payoffs) of Newcomb, an Omega that is always right, and an interface for the agent to interact with the simulation, will that be enough?

Let's say we simulate Omega's prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent's decision. This seems to adhere to all specifications of Newcomb and, if the coin toss is hidden, is from the agent's perspective in fact indistinguishable from it. However, if the agent knows how the simulation works, a CDT agent will one-box, whereas it is assumed that the same agent would two-box in 'real' Newcomb. Since not telling the agent how the simulation works is never a solution, this simulation appears to not actually simulate Newcomb.
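For concreteness, here is a minimal sketch of this rerun protocol in Python. The payoff constants and the `agent` interface (a zero-argument callable returning `True` for one-boxing) are illustrative choices of mine, not part of the original problem statement:

```python
import random

def simulate_newcomb(agent):
    """Rerun (without payoffs) until the hidden coin toss matches
    the agent's decision, then award the standard Newcomb payoffs."""
    while True:
        predicted_one_box = random.random() < 0.5  # coin toss standing in for Omega
        chose_one_box = agent()                    # the toss stays hidden from the agent
        if chose_one_box == predicted_one_box:
            break                                  # only a matching run counts
    box_b = 1_000_000 if predicted_one_box else 0  # box B is filled iff one-boxing was 'predicted'
    return box_b if chose_one_box else box_b + 1_000
```

On every kept run the 'prediction' matches the decision by construction, so a deterministic agent faces exactly the payoffs of Newcomb with an always-right Omega: `simulate_newcomb(lambda: True)` yields 1,000,000 and `simulate_newcomb(lambda: False)` yields 1,000.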

Pointing out differences is of course far easier than proving that none exist. Suppose we have a problem for which we don't know what decisions agents would make, and we want to build a real-world simulation to find out exactly that. How can we prove that this simulation really simulates the problem?

(Edit: Apparently it wasn't apparent that this is about problems in terms of game theory and decision theory. Newcomb, Prisoner's Dilemma, Iterated Prisoner's Dilemma, Monty Hall, Sleeping Beauty, Two Envelopes, that sort of stuff. Should be clear now.)

Comment author: 30 January 2013 06:13:07PM *  1 point [-]

Perhaps I am answering a question other than the one you are asking, but: Every exercise in simulation is an exercise in evaluating which modeling concerns are relevant to the system in question, and then accounting for those factors up to a desired level of accuracy.

If you happen to be dealing with a system simple enough to be simulated exactly - and I don't know of any physical system for which this is possible - then it would be useful to talk about "proving" the correspondence between the simulation and the reality being modeled.

If you are dealing with a real system where you need to make approximations, my intuition says that the best you can do toward proving accuracy would be performing ample validations of the simulation against measured data and verifying that the simulation matches the data to within the expected tolerance.

I suspect that you and I have different concepts of what a simulation is, because you describe an agent (presumably a human being) interacting with the "simulation" in real time. In this case you are mucking up the dynamics of the simulation by introducing a factor which is not accommodated by the model, i.e. the human. The human's reasoning is influenced by knowledge from outside the simulation.

Comment author: 30 January 2013 07:00:44PM *  0 points [-]

I suspect that you and I have different concepts of what a simulation is, because you describe an agent (presumably a human being) interacting with the "simulation" in real time. In this case you are mucking up the dynamics of the simulation by introducing a factor which is not accommodated by the model, i.e. the human. The human's reasoning is influenced by knowledge from outside the simulation.

I didn't necessarily mean human agents. For example, this is a simulation of IPD with which non-human agents can interact. Each step, the agents make decisions based on the current state of the simulation. If you wanted, you could run exactly the same simulation with actual humans anonymously interacting via interface terminals with a server running the simulation. On the other hand, this is a non-simulation of the same problem, because it lacks actual agents interacting with it. It's just a calculation, albeit an accurate one.
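A stripped-down IPD simulation of this kind might look like the following sketch. The payoff matrix is the standard one, but the `tit_for_tat` agent and the perspective-flipping convention are illustrative choices of mine:

```python
# Standard PD payoffs: (row player, column player)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def iterated_pd(agent_a, agent_b, rounds=100):
    """Each round, both agents see the history from their own
    perspective and return 'C' (cooperate) or 'D' (defect)."""
    history = []                 # list of (a_move, b_move) pairs
    score_a = score_b = 0
    for _ in range(rounds):
        a = agent_a(history)
        b = agent_b([(y, x) for x, y in history])  # flip perspective for b
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        history.append((a, b))
    return score_a, score_b

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not history else history[-1][1]
```

An agent here is just any function of the visible state, which is the point above: plugging in human decisions relayed through terminals would change nothing about the simulation itself.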

In general, by "simulation" I mean a practical version of a problem that contains elements which would make it impossible or impractical to construct in real life, but is identical in terms of rules, interactions, results, and so on.

Perhaps I am answering a question other than the one you are asking, but: Every exercise in simulation is an exercise in evaluating which modeling concerns are relevant to the system in question, and then accounting for those factors up to a desired level of accuracy.

That is more or less the question I am asking, and evaluating which modeling concerns are relevant to the system in question is the crucial part. But how can we be certain to have made a correct analogy or simplification? It's easy to tell this is not the case if the end results differ, but if those are what we want to learn then we need a different approach.

Is it possible to simulate Omega, for example? Like the mentioned repeated coin toss, except that we would need to prove that our simulation does in fact in all cases lead to the same decisions that an actual Omega would. Or what if we need statistically significant results from a single agent of a one-shot problem, and we can't memory-wipe the agent? Etc.

Comment author: 31 January 2013 03:10:01PM 0 points [-]

It is more likely a simulation simulates X if it fails like X fails than if it fails in a different way.

Comment author: 31 January 2013 03:12:41PM 0 points [-]

I'm not sure I understand what you mean by 'failing' in regard to simulations. Could you elaborate?

Comment author: 31 January 2013 10:03:32PM 0 points [-]

If a simulation of poker loses money in a way that is similar to a game of poker, it is a good simulation, because it will allow for more accurate worst-case budgeting.

Comment author: 31 January 2013 10:11:15PM 0 points [-]

You mean, if an agent loses money. And that's the point; if the only thing you know is that an agent loses money in a simulation of poker, how can you prove the same is true for real poker?

Comment author: 01 February 2013 12:46:28AM 0 points [-]

I think Karl Popper made the best case that there are no final proofs, only provisional ones, and that the way to find the more useful provisional proofs is to note how they fail, not how they succeed. A poker simulator that can tell me accurately how much I might lose is more helpful than one that tells me how much I might win. I can budget based on the former but not the latter.

If you want final proofs (models, theories, simulations) the answer is there are no scientific final proofs.

I could be wrong, or perhaps I have answered a question not asked.

Comment author: 31 January 2013 06:08:49AM 0 points [-]

Let's say we simulate Omega's prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent's decision.

It's not quite clear to me what you have in mind here. Are you envisioning this with human agents or with programs? If with humans, how will they not remember that Omega got it wrong on the past run? If with programs, what's the purpose of the coin?

Comment author: 31 January 2013 06:56:39AM 1 point [-]

If you substitute Omega with a repeated coin toss, there is no Omega, and there is no concept of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously and count only those instances in which the prediction matches the decision.

For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
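This parallel variant can be sketched in the same way (hypothetical Python; agents are again zero-argument callables returning `True` for one-boxing):

```python
import random

def run_instances(agents):
    """Run one coin-toss 'prediction' per agent and keep only the
    instances where the toss happened to match the decision."""
    kept = []
    for agent in agents:
        predicted_one_box = random.random() < 0.5
        chose_one_box = agent()
        if chose_one_box == predicted_one_box:
            kept.append((predicted_one_box, chose_one_box))
    return kept

# Among the kept instances the 'prediction' is right every single time,
# even though no prediction ever took place -- only selection.
kept = run_instances([lambda: True] * 1000)
assert all(predicted == chose for predicted, chose in kept)
```

About half of the 1000 instances survive the filter, and within that subset the coin is indistinguishable from an infallible predictor.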

Comment author: 01 February 2013 07:57:33AM 0 points [-]

Ah, that makes sense.

Comment author: 30 January 2013 06:53:26PM 0 points [-]

Can you taboo "problem"?

Comment author: 30 January 2013 07:08:09PM *  0 points [-]

If anything, I expected to be asked to taboo 'simulation' — by 'problem' I really just mean game theoretical problems such as Newcomb, Prisoner's Dilemma, Iterated Prisoner's Dilemma, Monty Hall, Sleeping Beauty, Two Envelopes, and so forth.

Would tabooing 'problem' really be helpful?

Comment author: 30 January 2013 07:13:21PM 0 points [-]

It would for me! "Problem" is an extremely broad word. I would also like it if you tabooed "simulation."

Comment author: 30 January 2013 07:47:13PM *  1 point [-]

In terms of game theory, 'problem' is not an extremely broad word at all, and I'm not aware of any grey areas, either. I guess you could define a game-theoretical problem as a ruleset within which agents get payoffs based on decisions they or others make. I really fail to see why you think this term that is prominently featured on LW should be tabooed.

I gave a definition for 'simulation' in another comment:

a practical version of a problem that contains elements which would make it impossible or impractical to construct in real life, but is identical in terms of rules, interactions, results, and so on

I'll taboo the term if others tell me to or upvote your comment, but at present I see no need for it.

Comment author: 30 January 2013 07:55:43PM 1 point [-]

In terms of game theory, 'problem' is not an extremely broad word at all, and I'm not aware of any grey areas, either.

It was not obvious to me that you were talking about game-theoretic problems. "Problem" is not a word owned solely by game theorists.

a practical version of a problem that contains elements which would make it impossible or impractical to construct in real life, but is identical in terms of rules, interactions, results, and so on

It's unclear to me what you mean by this. If a problem contains elements which are impossible to construct in real life, in what sense can a practical version be said to be identical in terms of rules, interactions, results, and so on?

Comment author: 30 January 2013 08:45:12PM *  0 points [-]

I have edited my top-level post to clarify what kind of problems I mean.

If a problem contains elements which are impossible to construct in real life, in what sense can a practical version be said to be identical in terms of rules, interactions, results, and so on?

For a trivial example, Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence is easy to prove: regardless of whether we allow for backwards causality and whatnot, a fair coin is always fair, and even if we assume that Omega may be wrong, the probability of error must be the same for either side of the coin. So in the end Omega is exactly as random as the coin itself, no matter Omega's actual accuracy. Of course this wouldn't apply if the result of the coin toss were also relevant in some other way.
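The arithmetic behind this reduction is a one-liner: writing Omega's per-side accuracy as p, the chance of a 'heads' prediction is p * 1/2 + (1 - p) * 1/2 = 1/2 for every p. A quick sanity check (with a tolerance for floating point):

```python
def prediction_heads_prob(accuracy):
    """P(Omega predicts heads) for a fair coin, assuming Omega is right
    with the same probability `accuracy` on either side of the coin."""
    p_right, p_wrong = accuracy, 1 - accuracy
    return 0.5 * p_right + 0.5 * p_wrong  # = 0.5 regardless of accuracy

for p in (0.0, 0.3, 0.8, 1.0):
    assert abs(prediction_heads_prob(p) - 0.5) < 1e-12
```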

Comment author: 30 January 2013 08:58:24PM 0 points [-]

Okay, so right now I don't understand what your question is. It sounds to me like "how can we prove that simulations are simulations?" given what I understand to be your definition of a simulation.

Comment author: 30 January 2013 09:38:23PM *  0 points [-]

The question is: How can I prove that all possible agents decide identically whether they're considering the simulation or the original problem?

To further illustrate the point of problem and simulation, suppose I have a tank and a bazooka and want to know whether the bazooka would make the tank blow up, but because tanks are somewhat expensive I build another, much cheaper tank lacking all parts I deem irrelevant such as tracks, crew, fire-control and so on. My model tank blows up. But how can I say with certainty that the original would blow up as well? After all, the tracks might have provided additional protection. Could I have used tracks of inferior quality for my model? Which cheaper material would have the same resistance to penetration?

Tank and bazooka are the problem, of which the tank is the impractical part that is replaced by the model tank in the simulation.

Comment author: 30 January 2013 09:43:11PM 0 points [-]

But how can I say with certainty that the original would blow up as well?

You... can't?

Comment author: 30 January 2013 09:53:03PM -1 points [-]

This is obviously not about bazookas and tanks. If you want to know whether real tanks really blow up, you need real evidence. If you want to know whether CDT defects in PD, you don't. You can do maths with just logic and reason, and fortunately this is 100% about maths.

Comment author: 30 January 2013 02:59:11PM 0 points [-]

The agent in Newcomb's problem needs to know that Omega's prediction is caused by the same factors as his actual decision. The agent does not need to know any more detail than that, but does need to know at least that much. If there were no such causal path between prediction and decision then Omega would be unable to make a reliable prediction. When there is correlation, there must, somewhere, be causation (though not necessarily in the same place as the correlation).

If the agent believes that Omega is just pretending to be able to make that prediction, but really tossed a coin and intends only to publicise the cases where the agent's decision happened to be the same, then the agent has no reason to one-box.

If the agent believes Omega's story, but Omega is really tossing a coin and engaging in selective reporting, then the agent's decision may be correct on the basis of his belief, but wrong relative to the truth. Such is life.

To simulate Newcomb's problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can't.

Comment author: 30 January 2013 03:11:02PM *  1 point [-]

I only used Newcomb as an example to show that determining whether a simulation actually simulates a problem isn't trivial. The issue here is not finding particular simulations for Newcomb or other problems, but the general concept of correctly linking problems to simulations. As I said, it's a rather mathematical issue. Your last statement seems the most relevant one to me:

To simulate Newcomb's problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can't.

Can we generalize this to mean "if a problem can't exist in reality, an accurate simulation of it can't exist either" or something along those lines? Can we prove this?

Comment author: 01 February 2013 01:06:38PM 0 points [-]

Can we generalize this to mean "if a problem can't exist in reality, an accurate simulation of it can't exist either" or something along those lines? Can we prove this?

I would cast the sentence in that form, since if a problem contains some infinity, it's impossible for it to exist in reality. Can an infinite transition system be simulated by a finite transition system? If there's even one that can be, this would disprove your conjecture. The converse, of course, is not true...

Comment author: 01 February 2013 02:27:24PM *  0 points [-]

I'm not sure what you mean by an infinite transition system. Are you referring to circular causality such as in Newcomb, or to an actually infinite number of states such as a variant of Sleeping Beauty in which on each day the coin is tossed anew and the experiment only ends once the coin lands heads?

Regardless, I think I have already disproven the conjecture I made above in another comment:

Omega predicting an otherwise irrelevant random factor such as a fair coin toss can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence is easy to prove: regardless of whether we allow for backwards causality and whatnot, a fair coin is always fair, and even if we assume that Omega may be wrong, the probability of error must be the same for either side of the coin. So in the end Omega is exactly as random as the coin itself, no matter Omega's actual accuracy.