lmm comments on Does the simulation argument even need simulations? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think this leads to unpleasant conclusions. If causality is all we care about, does that mean we shouldn't care about people who are too far away to interact with (e.g. people on an interstellar colony too far away to reach in our lifetime)? Heck, if someone dived into a rotating black hole with the intent to set up a civilization in the zone of "normal space" closer to the singularity, I think I'd care about whether they succeeded, even though it couldn't possibly affect me. Back on Earth, should we care more about people close to us and less about people further away, since we have more causal contact with the former? Should we care more about the rich and powerful than about the poor and weak, since their decisions are more likely to affect us?
If you don't consider the possibility of being simulated it seems like you would make wrong decisions. Suppose that you agree with Bob to create 1000 simulations of the universe tonight, and then tomorrow you'll place a black sphere in the simulated universes. Tomorrow morning Bob offers to bet you a cookie that you're in one of the simulated universes. If you take the bet on the grounds that the model of the universe in which you're not in the simulation is simpler, then it seems like you lose most of the time (at least under naive anthropics).
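Under naive anthropics (every copy of you counted as equally likely to be the one making the decision), the arithmetic behind losing the bet is simple. A minimal sketch, using the 1000-simulation figure from the scenario above:

```python
# Naive anthropics for the Bob scenario: one "outer" copy of you plus
# 1000 simulated copies, each counted as equally likely to be you.
real_copies = 1
simulated_copies = 1000

p_simulated = simulated_copies / (real_copies + simulated_copies)
print(p_simulated)  # just under 1: betting "not simulated" loses ~999 times in 1000
```

So a bettor who reasons "the non-simulated model is simpler, therefore I'm not simulated" takes the losing side of the bet in almost every copy of the universe.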
Now obviously in real life we don't have this indication as to whether we're a simulation. But if we're trying to make a moral decision for which it matters whether we're in a simulation, it's important to get the right answer.
Didn't say that. We might be in a simulation. The question is, is that the more parsimonious hypothesis?
Observation is the king of epistemology, and Parsimony is queen. If parsimony says we're simulated, then we're probably simulated. In the counter-factual world where I have a memory of agreeing with Bob to create 1000 simulations, then parsimony says I'm likely in a simulation. We might be in a universe where the most parsimonious hypothesis given current evidence is simulation, or we might not. Would that I had a parsimony calculator, but for now I'm just guessing not.
There are observations that might lead a simulation hypothesis to be the most parsimonious hypothesis. I claim it as a question which is ultimately in the realm of science, although we still need philosophy to figure out a good way to judge parsimony.
These two statements sum up my current stance.
Epistemic Rationality: Take every mathematical structure that isn't ruled out by the evidence. Rank them by parsimony.
CDT (which I'll take as "instrumental rationality" for now): If your actions have results, you can use your actions to choose your favorite result.
So, applying that to the points you raised...
I have sufficient evidence to believe that both the poor and the rich exist. I care about them both. In the counter-factual world where I was more certain concerning the existence of the rich and less certain concerning the existence of the poor, then it would make sense to direct my efforts to the rich.
If I want to give people utils, and if I can give 10 utils to person R with 70% certainty that they exist to benefit from it, or 20 utils to person P with 10% certainty that they exist to benefit from it, I obviously choose person R.
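The comparison above is just an expected-utility calculation. A sketch with the stated numbers (the helper name is mine, purely for illustration):

```python
def expected_utils(utils, p_exists):
    """Utils delivered, discounted by the probability the recipient exists."""
    return utils * p_exists

eu_r = expected_utils(10, 0.70)  # person R: about 7 expected utils
eu_p = expected_utils(20, 0.10)  # person P: about 2 expected utils
assert eu_r > eu_p  # R's smaller payoff wins on certainty of existence
```

Person R's smaller payoff dominates because the certainty of their existence more than compensates for the difference in utils.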
Back to reality: I've got incredible levels of certainty that both the rich and the poor exist.
Once again, it's a question of certainty that they exist. If I told you that donating $100 to the impoverished Lannisters would be efficient altruism, wouldn't you want to check whether such people truly exist and whether the claims I made about them are true?
You'd put every effort into assuring that they succeeded before they dived into the black hole and became causally disconnected from you. Afterwards, your memory of them would remain as evidence that they exist...you'd hope they were doing alright, but you have no way of knowing, and your actions will not affect them now.
Taboo "care"...
Given your current observations, what likelihood can you assign to their existence? (emotional reactions like "care" will probably follow from this).
Can you help them or hurt them via your actions?
So of course you'd care ... in proportion to your certainty that they exist.
It seems to me the most parsimonious hypothesis is that the human race will create many simulations in the future - that seems like the natural course of progress, and I think we need to introduce an additional assumption to claim that we won't. If we accept this then the same logic as if we'd made that agreement with Bob seems to hold.
Hang on. You've gone from talking about "what I can interact with" to "what I know exists". If logic leads us to believe that non-real mathematical universes exist (i.e. under available evidence the most parsimonious assumption is that they do, even though we can't causally interact with them), is that or is that not sufficient reason to weigh them in our moral decision-making?
My mistake for using the word "interaction" then - it seems to have different connotations to you than it does to me.
Receiving evidence - AKA making an observation - is an interaction. You can't know something exists unless you can causally interact with it.
How can something non-real exist?
I dispute the idea that what does or does not exist is a question of logic.
I say that logic can tell you how parsimonious a model is, whether it contains contradiction, and stuff like that.
But only observation can tell you what exists / is real.
I'd argue that any simulations that humanity makes must be contained within the entire universe. So adding lower simulations doesn't make the final description of the universe any more complex than it already was. Positing higher simulations, on the other hand, does increase the total number of axioms.
The story you reference contains the case where we make a simulation which is identical to the actual universe. I think that unless our universe has some really weird laws, we won't actually be able to do this.
Not all universes in which humanity creates simulations are universes in which it is parsimonious for us to believe that we are someone's simulation.
You're right, I was being sloppy. My point was: suppose the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with. Do we consider those people in our moral calculations?
I can see the logic, but doesn't the same argument apply equally well in the "agreement with Bob" case?
True, but only necessary so that the participants can remember being the people they were outside the simulation; I don't think it's fundamental to any of the arguments.
This is impossible. No causal interaction means no observations. A parsimonious model cannot posit any statements that have no implications for your observations.
But I understand the spirit of your question: if they had causal implications for us, but we had no causal implications for them (implying that we can observe them and they can affect us, but they can't observe us and we can't affect them), then I would certainly care about what happened to them.
But I still can't factor them into any moral calculations, because my actions cannot affect them. The laws of the universe have rendered me powerless.
and
I'm not sure I follow these two statements- can you elaborate what you mean?
Wait, what?
So, I go about my life observing things, and one of the things I observe is that objects don't tend to spontaneously disappear... they persist, absent some force that acts on them to disrupt their persistence. I also observe things consistent with there being a lightspeed limit to causal interactions, and with the universe expanding at such a rate that the distance between two points a certain distance apart is increasing faster than lightspeed.
Then George gets into a spaceship and accelerates to near-lightspeed, such that in short order George has crossed that distance threshold.
Which theory is more parsimonious: that George has ceased to exist; that George persists, but I can't causally interact with him; that he persists and I can (somehow) interact with him; or something else?
Suppose my current actions can affect the expected state of George after he crosses that threshold (e.g., I can put a time bomb on his ship). Does the state of George-beyond-the-threshold factor into my moral calculations about the future?
That George persists, but I can't causally interact with him.
Yes.
My rule: "A parsimonious model cannot posit any statements that have no implications for your observations" has not been contradicted by my answers. The model must explain your observation that a memory of George getting into that spaceship resides in your mind.
As to whether or not George disappeared as soon as he crossed the distance threshold...it's possible, but the set of axioms necessary to describe the universe where George persists is more parsimonious than the set of axioms necessary to describe the universe where George vanishes. Therefore, you should assign a higher probability to the hypothesis that George persists.
This is the solution to the so-called "Problem" of Induction. "Things don't generally disappear, so I'll assume they'll continue not disappearing" is just a special case of parsimony. Universes in which the future is similar to the past are more parsimonious.
I basically agree with all of this.
So, when lmm invites us to suppose that the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with, is George an example of what lmm is inviting us to suppose? If not, why not?
Semantics, perhaps.
I considered things like George's memory trace to be an example of an "interaction", just as seeing moonlight is an "interaction" with the moon, despite the fact that the light I saw is actually from a past version of the moon and not the current one.
So maybe we were just using different notions of what "causal interaction" means? To me, "people we can't causally interact with" means people who don't cause any of our observations, including memory-related ones.
TheOtherDave's already covered this part
Second one first:
The only reason we need to assume the simulation is identical to the outer universe is so that our protagonists' memory is consistent with being in either. The only reason this is a difficulty at all is because the protagonists need to remember arranging a simulation in the outer universe for the sake of the story, as that's the only reason they suspect the existence of simulated universes like the one they are currently in.
If the protagonists have some other (magical, for the moment) reason to believe that a large number of universes exist and most of those are simulated in one of the others, it doesn't matter if the laws of physics differ between universes - I don't think that's essential to any of the other arguments (unless you want to make an anthropic argument that a particular universe is more or less likely to be simulated than average because of its physical laws).
Now for my first statement.
Your argument as I understood it is: Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still more parsimonious to assume that we are in the "outer" universe.
My response is: doesn't this same argument mean that we should accept Bob's bet in my example (and therefore lose in the vast majority of cases)?
See the response to TheOtherDave
Then there has been a miscommunication at some point. If you rephrase that as:
"Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still sometimes more parsimonious to assume that we are in the "outer" universe."
Then you'd be right. The fact that we have the capacity to simulate a bunch of universes ourselves doesn't in-and-of-itself count as evidence that we are being simulated. My argument is more or less identical to V_V's in the other thread.
I would agree with that statement. If our universe turns out to have a ridiculously complex set of laws, it might actually be more parsimonious to posit an Outer Universe with much simpler laws which gave rise to beings which are simulating us. (In the same way that describing the initial conditions of the universe is probably a shorter message than describing a human brain)