# Completeness of simulations

24 August 2012 10:44PM

Suppose I have an exact simulation of a human. Feeling ambitious, I decide to print out a GLUT of the action this human will take in every circumstance; while the simulation of course works at the level of quarks, I have a different program that takes lists of quark movements and translates them into a suitably high-level language, such as "Confronted with the evidence that his wife is also his mother, the subject will blind himself and abdicate".

Now, one possible situation is "The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X". Is it clear that an accurate X exists? In high-level language, I would say that, whatever the prediction is, the subject may choose to do something different. More formally we can notice that the simulation is now self-referential: Part of the result is to be used as the input to the calculation, and therefore affects the result. It is not obvious to me that a self-consistent solution necessarily exists.

It seems to me that this is somehow reminiscent of the Halting Problem, and can perhaps be reduced to it. That is, it may be possible to show that an algorithm that can produce X for arbitrary Turing machines would also be a Halting Oracle. If so, this seems to say something interesting about limitations on what a simulation can do, but I'm not sure exactly what.
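The diagonalization flavor of this can be sketched in a few lines. This is a toy model with hypothetical names, not anything from the post: a "contrary" subject who can run the predictor can always compute the prediction and then do something else, so no predictor the subject has access to can be accurate about him.

```python
# Toy diagonalization sketch (all names hypothetical): given ANY
# predictor the subject can run, a contrary subject defeats it,
# halting-problem style, by computing the prediction and doing
# the other thing.
def make_contrary_subject(predictor):
    def subject(situation):
        predicted = predictor(situation)
        return "stay" if predicted == "leave" else "leave"
    return subject

some_predictor = lambda situation: "leave"   # any predictor at all
subject = make_contrary_subject(some_predictor)

situation = "wife is also his mother"
# The subject's actual action always differs from the prediction.
assert subject(situation) != some_predictor(situation)
```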

Comment author: 25 August 2012 01:34:22AM 8 points [-]

For clarity, a GLUT is a Giant Look Up Table, enumerating the response of a system to every possible sequence of stimuli.

http://lesswrong.com/lw/pa/gazp_vs_glut/

Comment author: 25 August 2012 04:11:41AM *  2 points [-]

If the subject is a contrarian, the desired X doesn't exist. But this doesn't prevent the GLUT from predicting all actions correctly.

We are imagining the following GLUT entry:

"The subject is presented with evidence E and the [possibly false] fact that the predicted action for this entry is X" -> The subject will do action Y.

X need not equal Y. It may be that there is no X for which the predicted action Y equals the action X that we told the subject we predicted. This is trivially possible if the subject is a contrarian who will always do the opposite of what you tell him he will do. But the GLUT is still completely accurate! We simply lied to the subject about what the GLUT predicted. We told him we predicted he would do X on being told he would do X, but actually we knew he would do Y.
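The distinction between the claimed prediction X and the true response Y can be made concrete. A minimal sketch, assuming a toy contrarian subject (the names and the two-action set are hypothetical):

```python
# Toy model: the subject's response is a function of the evidence and
# the prediction he is shown. A contrarian always does the opposite of
# whatever he is told was predicted.
def contrarian(evidence, claimed_prediction):
    return "stay" if claimed_prediction == "leave" else "leave"

# Each GLUT entry records the TRUE response Y to being shown claim X,
# computed by simulating the subject.
glut = {
    ("E", "leave"): contrarian("E", "leave"),
    ("E", "stay"): contrarian("E", "stay"),
}

# There is no X with glut[("E", X)] == X -- no self-consistent claim...
assert all(glut[("E", x)] != x for x in ("leave", "stay"))
# ...yet every entry accurately predicts what he actually does.
assert all(glut[("E", x)] == contrarian("E", x) for x in ("leave", "stay"))
```

The table is a perfect predictor; it just never agrees with the claim embedded in its own input, which is exactly the comment's point.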

Comment author: 25 August 2012 04:24:49AM 1 point [-]

Sure, but that's equivalent to just not showing him the GLUT in the first place. All you've done is move the problem one level up. There is still a circumstance in which the GLUT cannot reliably predict his action, namely the one where you show him an accurate GLUT.

Comment author: 25 August 2012 05:16:07PM *  2 points [-]

Where did you get the accurate GLUT? Before you had an accurate GLUT, you couldn't show him the accurate GLUT which would be needed to calculate the accurate GLUT.

Sure, you can calculate the GLUT for all possible inputs, some of which may be accurate GLUT inputs, but then you have to solve the problem of finding which of all possible inputs are accurate GLUT inputs. To answer the question, someone has to do calculations above and beyond just simulating a human.

Comment author: 25 August 2012 05:25:50PM 0 points [-]

OK, that's fair. Here are some more thoughts.

To be concrete, suppose the input you give to the subject is a finite string of bits of length N. The GLUT is computed by simulating the output produced by each of the 2^N possible inputs, observing the result, and writing it down. Now we want to know what happens when we encode the GLUT as a bit string of length N and give it to the subject as an input. I think the answer is that in general we can't do this, because encoding all 2^N entries of the GLUT clearly requires far more than N bits, so the GLUT is too big to be a valid input to the simulation. So it seems an accurate GLUT is too big for us to be able to show it to the subject. Maybe this is one answer to the question.
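The counting argument is simple enough to check mechanically. A back-of-envelope sketch (the one-bit-per-entry floor is a deliberately generous assumption):

```python
# Size argument: a GLUT over N-bit inputs has 2**N entries. Even at a
# (generous) one bit per entry, the table cannot be encoded in an
# N-bit input string for any N >= 1.
for n in range(1, 64):
    glut_bits = 2 ** n          # lower bound: one bit per entry
    assert glut_bits > n        # the table never fits in its own input
```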

But maybe for some subjects we can do enough compression on the GLUT that we can fit it into N bits. As a trivial example, if the input string is large but the subject's behavior is simply to say "yes" for any input, then we can show him the accurate compressed GLUT: "For any input, the subject will say 'yes.'"

Less trivially, we could think of the simulator program itself as a highly compressed GLUT. And maybe the simulator executable is a mere 100 MB file, while the simulation accepts 100 GB of input. Then we could supply the simulated subject with the executable that is running him as an accurate compressed GLUT.

Suppose the simulated subject is a contrarian and tries to prove that the compressed GLUT we gave him is not accurate. He loads the executable onto a simulated computer and runs the executable on itself to see what it will output. Of course, it creates a simulated simulated subject who takes the executable and loads it onto a simulated simulated computer to see what it will output. This creates a simulated simulated simulated subject...

So this simulation will never halt when given this input. Presumably an accurate uncompressed GLUT would have an entry reading "Subject is supplied with his own source code -> simulation does not halt."

So maybe we can indeed connect to the halting problem?

Comment author: 25 August 2012 07:39:10PM 0 points [-]

So it seems an accurate GLUT is too big for us to be able to show it to the subject. Maybe this is one answer to the question.

Very likely; but I was only going to show him the bits predicting his response to his immediate situation + being shown those bits. We can save a bit of paper by not printing out his predicted reaction to chocolate elephants falling from the sky. :)

So maybe we can indeed connect to the halting problem?

It seems to me that your construction relies on the subject having access to the executable. If we don't give him that access, he cannot use this method of attempted disproof, no matter how contrarian he is.

Comment author: 25 August 2012 08:36:52PM 0 points [-]

I was only going to show him the bits predicting his response to his immediate situation + being shown those bits.

OK, but again this simply may not be possible even if you have an accurate GLUT. If you give him anything that lets him compute your true prediction in finite time, then he can compute your true prediction and then do the opposite. Even if we have a complete and accurate GLUT, we can never supply him with a true prediction if the accurate GLUT contains no entries of the form "Subject is told he is predicted to do X" -> subject does X.

Comment author: 26 August 2012 12:27:21AM 0 points [-]

Well, that's precisely my point. But see prase's comment below, with the very interesting point that every sufficiently-nice function f(x) has some x for which f(x)=x. The question is whether the human brain is sufficiently nice.

Comment author: 25 August 2012 12:36:38AM 2 points [-]

There is no general solution.

Suppose that the human is just a mathematical function, where the input is a real number x and his output is a real number y. The person is the function y=f(x); and (in the non-self-referential case) an accurate GLUT just predicts: if the person is given the number x, then he will give you the number y.

In the second case, where the person is told what the GLUT says, he is now a function of two variables h(x,y)=z. The person is given the number x, and told that the GLUT predicts that he will output y, and then he outputs z. In order for the GLUT to be an accurate predictor we must have y=z, but that is not possible in general. For instance, consider h(x,y)=y+1; the person always outputs a number that is 1 more than what the GLUT said he would output.
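The comment's h(x,y)=y+1 example, and its compliant counterpart, can be sketched over a small discrete action set (the set {0..9} and the function names are hypothetical, chosen so the contrarian still has no fixed point):

```python
# h(x, y) is the subject's response when given input x and told the
# GLUT predicts y. Accurate predictions are fixed points in y.
actions = range(10)

contrarian = lambda x, y: (y + 1) % 10   # always does "one more" than told
compliant  = lambda x, y: y              # always does what he is told

def self_consistent_predictions(h, x):
    # y is an accurate prediction iff h(x, y) == y
    return [y for y in actions if h(x, y) == y]

assert self_consistent_predictions(contrarian, 0) == []             # no accurate y exists
assert self_consistent_predictions(compliant, 0) == list(actions)   # every y is accurate
```

For the contrarian, (y + 1) mod 10 never equals y, so no entry of the required form g(x) = h(x, g(x)) can be written down; for the compliant subject, any prediction is self-fulfilling.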

The problem that the GLUT-maker faces is that she knows the human's function, h(x,y)=z, and she must create a GLUT function, y=g(x), such that g(x)=h(x,g(x)) for all x. I presume that there is math out there which says what features h(x,y) must have in order for this to be possible, but I don't know it.

Comment author: 25 August 2012 02:27:15AM *  0 points [-]

The problem that the GLUT-maker faces is that she knows the human's function, h(x,y)=z, and she must create a GLUT function, y=g(x), such that g(x)=h(x,g(x)) for all x. I presume that there is math out there which says what features h(x,y) must have in order for this to be possible, but I don't know it.

What about Exists y: h(x,y) = y? If this holds, take Forall x: g(x) = y and we are done.

Edit: to be more general, we can let y depend on x, which gives Forall x: Exists y(x): h(x,y(x)) = y(x), but that is actually the original statement with the letter y standing in place of the original g. I doubt there is anything more to say about such functions, at least in general.

Comment author: 25 August 2012 02:22:18AM 2 points [-]

Is the fact that the simulated subject is a human important for the proposed thought experiment, besides that it activates all sorts of wrong intuitions about free will and makes the lookup table unimaginably huge or even infinite?

Is it clear that an accurate X exists?

It is not; why should it be? By assumption the subject does whatever the GLUT predicts, but it doesn't follow that the GLUT includes a proposition "if the subject is confronted with the information that the GLUT predicts that he will do X, he will do X".

Comment author: 25 August 2012 04:25:44AM 1 point [-]

Is the fact that the simulated subject is a human important for the proposed thought experiment, besides that it activates all sorts of wrong intuitions about free will and makes the lookup table unimaginably huge or even infinite?

I don't think so, any Turing machine will do.

Comment author: 25 August 2012 02:13:08AM 1 point [-]

Your actions are compact and continuous, thanks to the laws of physics. If the GLUT outputs a prediction in a way that is also compact and continuous, either because it follows the laws of physics or you just programmed it that way, then there's at least one fixed point where he'll take the action he sees. Text output is discrete, and thus this wouldn't work, but there are other ways of doing this. For example, you could show him video of what he'll do.

If so, this seems to say something interesting about limitations on what a simulation can do, but I'm not sure exactly what.

It says that it's not necessarily possible to make a self-fulfilling prophecy. You can predict what someone will do without input from you, but you may or may not be able to make an accurate prediction given that you tell them the prediction.

Comment author: 25 August 2012 07:17:54PM 0 points [-]

For those interested: Brouwer fixed-point theorem

I don't see compactness of the set of possible predictions.

Comment author: 26 August 2012 03:17:42AM 1 point [-]

I'd use the Schauder fixed-point theorem so that you don't have to worry as much about what space you use.

One way to define the prediction space would be to have it predict the state of the universe immediately after the prediction is stated. Since anything after that is a function of that point in time, it's sufficient. Every particle is within the distance light would have moved in that time. Each particle has at most the energy of the entire universe plus the maximum energy output of the GLUT response (I'm assuming it's somehow working from outside this universe). This includes some impossible predictions, but that just means that they're not in the range. They're still in the domain. Just take the Cartesian products of the positions and momentums of the particles, and you end up in a Euclidean space.

If you want to take quantum physics into account, you'd need to use the quantum configuration space for each prediction. You only need to worry about everything in the light cone of the prediction again. You define the distance between two configurations as the integral of the square of the magnitude of the difference of the quantum waveforms at each point. Since the integral of the square of the magnitude of each waveform adds to one, it's bounded. Since it's always exactly one, you get a sphere, which isn't convex, but that's just the range. You just take the convex hull, a ball, as the domain.

The actual output of the GLUT may or may not be compact depending on how you display the output.

Comment author: 25 August 2012 04:28:56AM 0 points [-]

If the GLUT outputs a prediction in a way that is also compact and continuous, either because it follows the laws of physics or you just programmed it that way, then there's at least one fixed point where he'll take the action he sees.

I don't understand why that follows; can you elaborate?

Additionally, "at least one fixed point" seems distinct from "we can construct X for all situations".

Comment author: 25 August 2012 05:15:03AM 1 point [-]

I don't understand why that follows; can you elaborate?

A sufficiently nice function is mathematically guaranteed to have at least one fixed point where f(x) = x. We need to make some assumptions to make it nice enough, but once we do that, we just set x as the hypothetical GLUT output, and f(x) to the GLUT output of the subject's reaction to x, and we know there's some value of x where the GLUT output of the reaction is the same as what the subject is reacting to.

Additionally, "at least one fixed point" seems distinct from "we can construct X for all situations".

f is the situation, and the fixed point is what you're calling X. Also, I'm not sure if there's a method to construct X. We just know it exists. You can't even just check every value of X, because that only works if it's discrete, which means it's not sufficiently nice.
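A standard numerical illustration of a "sufficiently nice" function with a guaranteed fixed point, assuming f(x) = cos(x) as the example (it maps [0, 1] continuously into itself, so Brouwer's theorem applies; this particular f also happens to be a contraction there, so plain iteration finds the point, whereas Brouwer alone only guarantees existence):

```python
import math

# f(x) = cos(x) maps [0, 1] continuously into itself, so it must have
# a fixed point. Iterating x -> cos(x) converges to it here because
# |f'| < 1 on the interval (a contraction).
x = 0.5
for _ in range(200):
    x = math.cos(x)

assert abs(math.cos(x) - x) < 1e-12   # x is (numerically) a fixed point
```

The last remark in the comment is worth keeping in mind: for a genuinely continuous action space there is no way to "check every value of X", so existence of the fixed point does not hand you a procedure for finding it in general.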

Comment author: 25 August 2012 07:47:36PM 0 points [-]

Thanks for the elaboration; this is a very interesting point that I wasn't aware of. But it does seem to rely on the function having the same domain as its range, which presumably is one of the assumptions going into the niceness. It is not clear to me, although perhaps I'm just not thinking it through, that "future movements of quarks" is the same as "symbols to be interpreted as future movements of quarks".

Comment author: 26 August 2012 02:31:18AM 0 points [-]

You could think of it as x is the GLUT output, f(x) is the subject's response, and g(f(x)) is the GLUT's interpretation of the subject's response. f maps from GLUT output to subject response, and g maps from subject response to GLUT output. f and g don't have fixed points, because they don't have the same domain and range. f∘g, however, maps from GLUT output to GLUT output, so it has the same domain and range. I was just calling it f, but this way it might be less confusing.

Comment author: 26 August 2012 04:02:04AM 1 point [-]

The exact simulation will be quantum. You can make statements of quantum mechanics that include multiple 'cat-states' - and in general, this is the sort of result you will get from the GLUT. You will only be able to observe one of these, so such a prediction, though correct, will appear wrong.

Comment author: 25 August 2012 05:03:17PM 1 point [-]

At this point you are not talking about a simulation of a person, but rather a simulation of the entire world. Confronted with the knowledge that a GLUT exists predicting everything I will do, I will resolve to pick some relatively trivial decision each day, write down 10 choices, numbered 0 to 9, and then choose the item based on the first numerical digit I see written out on the 11th page of the first section of the New York Times for that day, excluding page numbers or references to page numbers. At that point, the simulation of me has to simulate the production of the next day's New York Times, which is tantamount to having to simulate the entire world.

Not that that is necessarily any harder to do than a complete and accurate simulation of an individual person, mind you.

Comment author: 25 August 2012 07:33:51PM 2 points [-]

At this point you are not talking about a simulation of a person, but rather a simulation of the entire world.

Sure. A mere computational detail.

Comment author: 25 August 2012 05:04:17AM 1 point [-]

There are Löbian situations and Gödelian situations. In the Löbian situation, the simulation makes no difference to the outcome. You're hungry, you just made breakfast, and the simulation predicts you will eat the breakfast because there's no perverse impulse present in your psyche right now that would drive you to go hungry just to contradict the prediction. The Gödelian situations arise when knowing the prediction does matter, factually or motivationally. If an agent has the intention and the power to always do A if you say B and B if you say A, then there's no way to say A or B and get it right. But you can "predict", for example, that it will listen to your prediction and ... become annoyed, because you didn't say A or B.

Comment author: 25 August 2012 02:11:34AM 0 points [-]

"The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X". Is it clear that an accurate X exists?

You mean, he is confronted with the statement that this GLUT predicts he will do X. That statement may or may not be true, depending on his behavior. He can choose a strategy of either always choosing to do what is predicted, always choosing to do the opposite of what is predicted, or ignoring the prediction and choosing based on unrelated criteria. It is possible to construct a lookup table containing accurate predictions of this sort only in the first and third cases, but not in the second.

Comment author: 25 August 2012 02:50:12AM 0 points [-]

Except that if the simulation really is accurate, his response should be already taken into account. Reality is deterministic, an adequately accurate and detailed program should be able to predict exactly. Human free will relies on the fact that our behavior has too many influences to be predicted by any past or current means. Currently, we can't even define all of the influences.

Comment author: 25 August 2012 03:14:21AM *  2 points [-]

If the simulation is really accurate, then the GLUT would enter an infinite loop if he uses an 'always do the opposite' strategy.

I.e., "Choose either heads or tails. The oracle predicts you will choose <X>." If his strategy is 'choose heads because I like heads' then the oracle will correctly predict it. If his strategy is 'do what the oracle says', then the oracle can choose either heads or tails, and the oracle will predict that and get it correct. If his strategy is 'flip a coin and choose what it says' then the oracle will predict that action and, if it is a sufficiently powerful oracle, get it correct by modeling all the physical interactions that could change the state of the coin.

However, if his strategy is 'do the opposite', then the oracle will never halt. It will get in an infinite recursion choosing heads, then tails, then heads, then tails, etc. until it crashes. It's no different than an infinite loop in a computer program.

It's not that the oracle is inaccurate. It's that a recursive GLUT cannot be constructed for all possible agents.
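The oracle's failure mode here can be sketched directly. This is a hypothetical model of a naive oracle: it guesses a prediction, asks the (simulated) subject what he'd do when told that guess, and retries until guess and response agree; against the contrarian it alternates heads/tails forever, so the loop is capped to make the non-termination observable.

```python
# Naive oracle (toy model): search for a self-consistent prediction by
# iterating guess -> subject's response to that guess. A real version
# of this loop never halts against a contrarian; we bound it here.
def naive_oracle(strategy, max_steps=1000):
    guess = "heads"
    for _ in range(max_steps):
        response = strategy(guess)
        if response == guess:
            return guess          # self-consistent prediction found
        guess = response          # retry with the new candidate
    return None                   # never stabilized: heads, tails, heads, ...

do_what_told = lambda told: told
do_opposite  = lambda told: "tails" if told == "heads" else "heads"

assert naive_oracle(do_what_told) == "heads"   # halts immediately
assert naive_oracle(do_opposite) is None       # the infinite recursion of the text
```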

Comment author: 25 August 2012 02:11:44AM *  -1 points [-]

The concept of an "exact simulation" disquiets me on a deep level, which historically has suggested a wrong question is in the neighborhood.

I do agree with your sense that this is an interpretation of the Halting Problem, with the quirk that self-referential programs are explicitly mentioned.

Comment author: 25 August 2012 01:52:50AM 0 points [-]

The subject is confronted with the evidence that his wife is also his mother, and additionally with the fact that this GLUT predicts he will do X.

In what situation does the GLUT predict that he will do X? Let's say, for example, that the GLUT predicts: "Confronted with the evidence that his wife is also his mother, the subject will do X." When the subject is confronted with the evidence, and also with the GLUT's prediction, he does Y. But the GLUT is not wrong, because it predicted his actions under a different set of information than he actually had. Alternatively, let's define A as: "evidence that his wife is also his mother, and that when shown evidence A, does X". There is no particular reason that there has to exist an X such that it is possible to construct A.

Comment author: 25 August 2012 04:27:34AM 0 points [-]