
Does the simulation argument even need simulations?

7 Post author: lmm 11 October 2013 09:16PM

The simulation argument, as I understand it:

  1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe
  2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one's subjective experience, one's odds of being a real human are k/(k+l)
  3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them
    1. Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge
  4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon
  5. By 3. and 4., there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes
  6. By 2. and 5., our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations)
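The arithmetic in step 2 can be sketched directly; the population figures below are purely illustrative:

```python
# Step 2's anthropic arithmetic: with k real and l simulated humans sharing
# the same subjective experience, P(real) = k / (k + l).

def p_real(k: int, l: int) -> float:
    """Odds of being one of the k real humans rather than the l simulated ones."""
    return k / (k + l)

print(p_real(10**10, 0))       # → 1.0 (no simulations: certainly real)
print(p_real(10**10, 10**15))  # a 100,000-fold excess of simulated humans
                               # collapses the probability to roughly 1e-5
```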

When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is simply another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
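A toy sketch of this picture, with a made-up update rule standing in for the "enormously complicated program":

```python
# The state is one big number (a fixed-width bit string) and "physics" is a
# pure step function. The rule below (a bit rotation) is invented purely for
# illustration; nothing hangs on its details.

def step(state: int, width: int = 8) -> int:
    """A made-up update rule: rotate the state left by one bit."""
    mask = (1 << width) - 1
    return ((state << 1) | (state >> (width - 1))) & mask

history = [0b00010011]
for _ in range(3):
    history.append(step(history[-1]))

# Every element of `history` is just a number - and so is the source code of
# `step` itself, once encoded as bytes (the "illegal prime" observation).
print(history)  # → [19, 38, 76, 152]
```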

But numbers are just... numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
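For concreteness, the Fibonacci rule in question is tiny compared to the sequence it determines; the text's question is whether evaluating this loop makes the sequence any more real than stating the rule:

```python
def fib(n: int) -> int:
    """The rule: start from 0, 1; each term is the sum of the previous two."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(n) for n in range(10)])  # → [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```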

Possible ways out that I can see:

  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]
  2. Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is... disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it
  3. Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis and means most established programming theory would be useless in the programming of a simulation[4]
  4. Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don't know enough about anthropics to say more

Thoughts?

 

[1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose

[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]
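A rough Python stand-in for the laziness described above (Haskell does this pervasively and implicitly; here a generator plays the same role):

```python
import itertools

def fib_stream():
    """An infinite Fibonacci stream; values come into being only when demanded."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

s = fib_stream()                      # no Fibonacci numbers computed yet
first_five = list(itertools.islice(s, 5))
print(first_five)                     # → [0, 1, 1, 2, 3]
# Values never demanded (like the causally disconnected spaceship in the
# example above) are never calculated at all.
```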

If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated - or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be more efficiently stored as their initial state plus a counter of how many times the function needs to be run to evaluate them, if anyone were to talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter
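The "initial state plus a counter" storage scheme sketched above can be written down directly; `DeferredPerson` and its step function are hypothetical placeholders, not a real uploading API:

```python
class DeferredPerson:
    """Store a person as (initial state, pending step count) instead of
    simulating them eagerly."""

    def __init__(self, initial_state, step_fn):
        self.state = initial_state
        self.step_fn = step_fn
        self.pending = 0

    def tick(self):
        # O(1) per simulation step: no actual computation happens here
        self.pending += 1

    def observe(self):
        # Pay the full cost only if someone ever makes contact
        for _ in range(self.pending):
            self.state = self.step_fn(self.state)
        self.pending = 0
        return self.state

loner = DeferredPerson(initial_state=0, step_fn=lambda s: s + 1)
for _ in range(1000):
    loner.tick()          # a thousand ticks, no simulation work done
print(loner.observe())    # → 1000
```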

Practically every compiler and runtime performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination - usually without the programmer having to explicitly request it. Thus if your theory of anthropics says that an "optimized" simulation is counted differently from a "full" one, then there is little hope of constructing such a thing without developing a significant amount of new tools and programming techniques[4]

[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought

[4] This is worrying if one is in favour of uploading, particularly forcibly - it would be extremely problematic morally if uploads were in some sense "less real" than biological people

[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly - the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though

Comments (102)

Comment author: VincentYu 11 October 2013 11:01:22PM *  13 points
  1. Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

Biting the bullet here is roughly equivalent to accepting Tegmark's Ultimate Ensemble. This was discussed on LW in ata's post from 2010, The mathematical universe: the map that is the territory.

See Tegmark (2008). In particular, Section 6, "Implications for the simulation argument". A relevant extract:

For example, since every universe simulation corresponds to a mathematical structure, and therefore already exists in the Level IV multiverse [the multiverse of all mathematical structures], does it in some meaningful sense exist “more” if it is in addition run on a computer? This question is further complicated by the fact that eternal inflation predicts an infinite space with infinitely many planets, civilizations, and computers, and that the Level IV multiverse includes an infinite number of possible simulations. The above-mentioned fact that our universe (together with the entire Level III multiverse) may be simulatable by quite a short computer program (Sect. 6.2) calls into question whether it makes any ontological difference whether simulations are “run” or not. If, as argued above, the computer need only describe and not compute the history, then the complete description would probably fit on a single memory stick, and no CPU power would be required. It would appear absurd that the existence of this memory stick would have any impact whatsoever on whether the multiverse it describes exists “for real”. Even if the existence of the memory stick mattered, some elements of this multiverse will contain an identical memory stick that would “recursively” support its own physical existence. This would not involve any Catch-22 “chicken-and-egg” problem regarding whether the stick or the multiverse existed first, since the multiverse elements are 4-dimensional spacetimes, whereas “creation” is of course only a meaningful notion within a spacetime.


A while ago, I posted a LW discussion link to John Regehr's blog post about similar ideas: Does a simulation really need to be run?.

Comment author: brazil84 11 October 2013 10:36:24PM 10 points

My thought is that your hypothesis is pretty similar to the Dust Theory.

http://sciencefiction.com/2011/05/23/science-feature-dust-theory/

And Greg Egan's counter-argument to the Dust Theory is pretty decent:

However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

I think the same counter-argument applies to your hypothesis.

Comment author: VincentYu 12 October 2013 05:09:29PM *  4 points

A steelmanned version of Egan's counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation!

In Tegmark (2008) (see my other comment):

One such issue is the above-mentioned measure problem, which is in essence the problem of how to deal with annoying infinities and predict conditional probabilities for what an observer should perceive given past observations.

[...]

A second testable prediction of the MUH [Mathematical Universe Hypothesis] is that the Level IV multiverse [the multiverse of all mathematical structures] exists, so that out of all universes containing observers like us, we should expect to find ourselves in a rather typical one. Rigorously carrying out this test requires solving the measure problem, i.e., computing conditional probabilities for observable quantities given other observations (such as our existence) and an assumed theory (such as the MUH, or the hypothesis that only some specific mathematical structure like string theory or the Lie superalgebra mb(3|8) [142] exists). Further work on all aspects of the measure problem is urgently needed regardless of whether the MUH is correct, as this is necessary for observationally testing any theory that involves parallel universes at any level, including cosmological inflation and the string theory landscape [67–71]. Although we are still far from understanding selection effects linked to the requirements for life, we can start testing multiverse predictions by assessing how typical our universe is as regards dark matter, dark energy and neutrinos, because these substances affect only better understood processes like galaxy formation. Early such tests have suggested (albeit using questionable assumptions) that the observed abundance of these three substances is indeed rather typical of what you might measure from a random stable solar system in a multiverse where these abundances vary from universe to universe [42, 134–139].

Tegmark makes a few remarks on using algorithmic complexity as the measure:

It is unclear whether some sort of measure over the Level IV multiverse is required to fully resolve the measure problem, but if this is the case and the CUH [Computable Universe Hypothesis] is correct, then the measure could depend on the algorithmic complexity of the mathematical structures, which would be finite. Labeling them all by finite bit strings s interpreted as real numbers on the unit interval [0, 1) (with the bits giving the binary decimals), the most obvious measure for a given structure S would be the fraction of the unit interval covered by real numbers whose bit strings begin with strings s defining S. A string of length n bits thus gets weight 2^(−n), which means that the measure rewards simpler structures. The analogous measure for computer programs is advocated in [16]. A major concern about such measures is of course that they depend on the choice of representation of structures or computations as bit strings, and no obvious candidate currently exists for which representation to use.
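A quick sketch of the 2^(−n) weighting Tegmark describes, using placeholder bit strings rather than encodings of real mathematical structures:

```python
def weight(bitstring: str) -> float:
    """Tegmark's toy measure: an n-bit description gets weight 2^(-n)."""
    return 2.0 ** (-len(bitstring))

structures = ["01", "1101", "11010010"]   # placeholder descriptions
total = sum(weight(s) for s in structures)
for s in structures:
    # After normalization, shorter (simpler) descriptions get more mass
    print(s, weight(s) / total)
```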

Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua).

In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.

Comment author: brazil84 12 October 2013 10:45:35PM 0 points

Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest;

I don't understand this at all. Can you give an example of such an appropriate measure?

Comment author: VincentYu 12 October 2013 11:54:02PM *  1 point

An example of a measure in this context would be the complexity measure that Tegmark mentioned, as long as we agree on a way to encode mathematical structures (the nonuniqueness of representation is one of the issues that Tegmark brought up).

Whether this is an appropriate measure (i.e., whether it correctly "predicts conditional probabilities for what an observer should perceive given past observations") is unknown; if we knew how to find out, then we could directly resolve the measure problem!

An example of a context where we can give the explicit measure is in the many-worlds interpretation, where as I mentioned, the Born probabilities resolve the analogous measure problem.

Comment author: brazil84 13 October 2013 08:09:16AM 0 points

An example of a context where we can give the explicit measure is in the many-worlds interpretation, where as I mentioned, the Born probabilities resolve the analogous measure problem.

So you are saying that the "Born probabilities" are an example of an "appropriate measure" which, if "postulated," rebuts Egan's argument?

Is that correct?

Comment author: lmm 13 October 2013 03:12:42PM *  1 point

The Born probabilities apply to a different context - the multiple Everett branches of MWI, rather than the interpretative universes available under dust theory. If we had an equivalent of the Born probabilities - a measure - for dust theory, then we'd be able to resolve Egan's argument one way or another (depending on which way the numbers came out under this measure).

Since we don't yet know what the measure is, it's not clear whether Egan's argument holds - under the "Tegmark computational complexity measure" Egan would be wrong, under the "naive measure" Egan is right. But we need some external evidence to know which measure to use. (By contrast in the QM case we know the Born probabilities are the correct ones to use, because they correspond to experimental results (and also because e.g. they're preserved under a QM system's unitary evolution)).

Comment author: brazil84 15 October 2013 09:01:29AM 0 points

I would guess you are probably correct that Egan's argument hinges on this point. In essence, Egan seems to be making an informal claim about the relative likelihood of an orderly dust universe versus a chaotic one.

Boiled down to its essentials, VincentYu's argument seems to be that if Egan's informal claim is incorrect, then Egan's argument fails. Well duh.

Comment author: [deleted] 13 October 2013 07:11:24PM 1 point

Here's a visual representation of the dust theory by Randall Munroe: http://xkcd.com/505/

Comment author: lmm 11 October 2013 10:47:45PM 1 point

Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I'm glad Egan's at least looked at it a bit).

This gets us a contradiction, not a refutation, and one man's modus ponens is another man's modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?

Comment author: brazil84 12 October 2013 08:29:08AM 0 points

Can we use this to argue for a flaw in the original simulation argument?

I don't think so since it's reasonable to hypothesize that man-made simulations would, generally speaking, be more on the orderly side as opposed to being full of random nonsense.

But it's still an interesting question. One can imagine a room with 2 large computers. The first computer has been carefully programmed to simulate 1950s Los Angeles. There are people in the simulation who are completely convinced that they live in Los Angeles in the 1950s.

The second computer is just doing random computations. But arguably there is some cryptographic interpretation of those computations which also yields a simulation of 1950s Los Angeles.

Comment author: Baughn 12 October 2013 06:49:36PM 0 points

I'd like to see that argument. If you can find a mapping that doesn't end up encoding the simulation in the mapping, I'd be surprised.

Comment author: brazil84 12 October 2013 08:37:20PM 2 points

I'd like to see that argument. If you can find a mapping that doesn't end up encoding the simulation in the mapping, I'd be surprised.

Well why should it matter if the simulation is encoded in the mapping?

Comment author: Baughn 13 October 2013 04:39:00PM 1 point

If it is, that screens off any features of what it's mapping; you can no longer be surprised that 'random noise' produces such output.

Comment author: brazil84 15 October 2013 08:56:09AM 0 points

Again, so what?

Let me adjust the original thought experiment:

The operation of the first computer is encrypted using a very large one-time pad.

Comment author: falenas108 12 October 2013 01:54:02PM -1 points

I'm not sure I agree with that argument. The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I'd expect from a universe driven by a computer simulation. Discrete values are much easier than continuous sets.

On the other hand, superposition and entanglement seem suboptimal.

Comment author: brazil84 12 October 2013 09:50:58PM 0 points

The fact that quantum mechanics exists, and there are specifically allowed states, is exactly the type of thing I'd expect from a universe driven by a computer simulation.

I'm not sure I understand your point. Are you saying that a simulation which is just a mathematical construct would probably not result in a quantized universe?

Comment author: falenas108 12 October 2013 10:49:02PM -1 points

I was intending to say the opposite; that a quantized world would seem like it would take less computational power than a continuous one, therefore the fact that we live in a quantized world is evidence of being in a simulation.

Comment author: brazil84 12 October 2013 10:52:44PM 0 points

I was intending to say the opposite; that a quantized world would seem like it would take less computational power than a continuous one, therefore the fact that we live in a quantized world is evidence of being in a simulation.

That's not an unreasonable point, but I think it goes more to the issue of simulation versus non-simulation than the issue of computer-based simulation versus mathematical construct simulation.

Comment author: Baughn 12 October 2013 06:48:35PM 0 points

Well, I suppose we could postulate something like a continuous version of quantum mechanics for a host universe if we'd like.

Comment author: Emile 12 October 2013 12:59:29PM 2 points

Our present civilization is likely to reach the point where it can simulate a universe reasonably soon

I don't know about that, seems unlikely to me. A future civilization simulating us requires a) tons of information about us, that is likely to be irreversibly lost in the meantime, and b) enough computing power to simulate at a sufficiently fine level of detail (i.e. if it's a crude approximation, it will diverge from what actually happened pretty fast). Either of those alone looks like it makes simulating current-earth unfeasible.

But my main reaction to the simulation argument (even assuming it's possible) is "so what?". Are there any decisions I would change if I knew I might be being simulated?

Comment author: Baughn 12 October 2013 06:08:02PM *  5 points

A future civilization simulating their own ancestors would require a lot of information about them, possibly impossibly-hard-to-get amounts. You're right about that.

So what? They could still simulate some arbitrary, fictional pre-singularity civ. There is no guarantee whatsoever, if we're part of a simulation, that we were ever anything else.

Comment author: lmm 12 October 2013 10:56:51PM 1 point

But my main reaction to the simulation argument (even assuming it's possible) is "so what?". Are there any decisions I would change if I knew I might be being simulated?

Possible ethical position: I care about the continued survival of humanity in some form. I also care about human happiness in some way that avoids the repugnant conclusion (that is, I'm willing to sacrifice some proportion of unhappy lives in exchange for making the rest of them much happier). I am offered the option of releasing an AI that we believe with 99% probability to be Friendly; this has an expectation of greatly increasing human happiness, but carries a small risk of eliminating humanity in this universe. If I believe I am not simulated, I do not release it, because the small risk of eliminating all humanity in existence is not worth taking. If I believe I am simulated, I release it, because it is almost surely impossible for this to eliminate all humanity in existence, and the expected happiness gain is worth it.

Comment author: TheOtherDave 12 October 2013 12:24:24PM 2 points

Mostly, my thought is that "there probably exist real people out there somewhere, and we are probably not among them; we are probably mere simulations in their world" doesn't seem equivalent to "what it means to be a real person, or a real anything, is to be a well-defined abstract computation that need not necessarily be instantiated" (aka Dust theory, as has been said).

That said, I can't really imagine why I would ever care about the difference for longer than it takes to think about the question.

Sure, the former feels more compelling because it's framed as a status challenge, but if I do anything more than just superficially pattern-match it that pretty much dissolves... I have to be a lot more important than I am, relatively speaking, before the social status of my entire universe becomes a relevant consideration in my status calculations.

(To be clear, I am speaking solely for myself here. I do recognize that some folks here view themselves, individually, as important to the future development of our universe, and I can see how for those people the status of our universe as a whole might be an important consideration, and I'm not challenging that; I'm just asserting that I don't view myself as that important, and I believe I'm correct in that evaluation.)

Comment author: lukstafi 12 October 2013 11:48:55AM 2 points

Modern philosophy is just a set of notes on the margins of Descartes' "Meditations".

Comment author: wedrifid 12 October 2013 12:40:58PM 0 points

Modern philosophy is just a set of notes on the margins of Descartes' "Meditations".

That is the most damning criticism of philosophy I have ever seen.

Comment author: lukstafi 12 October 2013 12:45:26PM *  1 point

(1) It's totally tongue-in-cheek. (2) By "modern" I don't mean "contemporary", I mean "since Descartes onwards". (3) By "notes" I mean criticisms. (4) The point is that I see responses to the simulation aka. Daemon argument recurring in philosophy.

Comment author: wedrifid 12 October 2013 01:03:50PM 0 points

(3) By "notes" I mean criticisms.

Ahh, that one makes a difference in connotation. There certainly seems to be more of that than I would judge worthwhile.

Comment author: V_V 12 October 2013 09:11:52PM 2 points

Epistemology 101: Proper beliefs are (probabilistic) constraints over anticipated observations.
How does the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god constrain what we expect to observe?

Comment author: lmm 12 October 2013 10:45:44PM *  1 point

I don't think that can be right. We believe in the continued existence of stars that have moved so far away that we can't possibly observe them (due to the accelerating expansion of space).

Comment author: V_V 13 October 2013 12:48:29AM 1 point

Yet, that belief constrains our observations.

Comment author: lmm 13 October 2013 09:09:27AM 2 points

How does it? What would we observe differently if some mysterious god destroyed those stars as soon as they moved out of causal contact with humanity?

Comment author: V_V 13 October 2013 09:32:36AM 0 points

No, but the hypothesis of a mysterious god destroying stars exactly when our best cosmological models predict we should stop seeing them is unparsimonious.

And anyway, distant stars never appear to cross the cosmological event horizon from our reference frame. Their light becomes redshifted so much that we can't detect it anymore.

Comment author: lmm 13 October 2013 10:05:26AM 2 points

No, but the hypothesis of a mysterious god destroying stars exactly when our best cosmological models predict we should stop seeing them is unparsimonious.

Sure. But believing or not believing in it doesn't constrain what we expect to observe, just the same as "the belief that we are living in a computer simulation/a projection of the Platonic Hyperuranium/a dream of a god". What's different from the situation in your first post?

Comment author: Ishaan 13 October 2013 10:49:40PM *  0 points

Point of order:

computer simulation/a projection of the Platonic Hyperuranium/a dream of a god

I feel like we shouldn't be putting these two so close together.

"All mathematical statements are equally real"

and

"We are being simulated"

seem like two different claims that shouldn't be blurred together - the first is a matter of ontology and semantics, the second is a matter of fact. If all mathematical structures are equally real it might have weird moral implications, especially for simulations, but even if we successfully reject the idea that all mathematical structures are equally real it does not rule out the simulation hypothesis, and if we accept the idea that all mathematical structures are equally real it does not confirm the simulation hypothesis.

Comment author: V_V 13 October 2013 06:17:43PM 0 points

Epistemology 101, part two: choose the simplest hypothesis among those which are observationally indistinguishable from each other.

Comment author: lmm 13 October 2013 06:38:33PM 0 points

I think the hypothesis that human civilization will at some point derive the ultimate laws of physics, along with enough observations about the state of the early universe to construct a reasonable simulation thereof, is simpler than the alternative - to say that we won't seems to require some additional assumption that scientific progress would stop.

If we accept the existence of a large number of simulated universes, then while I don't have a good theory of anthropics, rationalists should win, and blindly assuming that one is not in a simulation seems like it leads to losing a lot of the time (e.g. my example of betting a cookie with Bob elsewhere in these comments).

Comment author: V_V 13 October 2013 09:55:43PM *  2 points

I think the hypothesis that human civilization will at some point derive the ultimate laws of physics, along with enough observations about the state of the early universe to construct a reasonable simulation thereof, is simpler than the alternative - to say that we won't seems to require some additional assumption that scientific progress would stop.

It is not possible, and it never will be possible, to simulate within our universe something as complex as our own universe itself, unless we discover a way to perform infinite computations using finite time, matter and energy, which would violate many known laws of physics.

We already are able to simulate "universes" simpler than our own (e.g. videogames), but this doesn't imply, even probabilistically, that our universe is itself a simulation. Analogy is not a sound argument.

Comment author: lmm 15 October 2013 12:03:08PM 0 points

We already are able to simulate "universes" simpler than our own (e.g. videogames), but this doesn't imply, even probabilistically, that our universe is itself a simulation.

Why not? Because you assign them a low anthropic weighting, or some other reason? (I also had an argument that the Dyson computation applies, but I think that's actually beside the point)

If the simplest possible explanation for our sensory observations includes a universe that contains simulations of other universes, it's a reasonable question which kind we are in, even if they don't all have the same physical laws or the same amount of matter. There's no a priori reason to privilege one hypothesis or the other.

Comment author: lukstafi 14 October 2013 12:08:55PM 0 points

Only in an objective modal sense. Beliefs are probabilistic constraints over observations anticipated given a context. So in the example with stars moving away, the stars are still observables because there is a counterfactual context where we observe them from nearby (by traveling with them etc.)

Comment author: Ishaan 12 October 2013 08:37:05AM *  2 points

I actually arrived at this belief myself when I was younger, and changed my mind when a roommate beat it out of me.

I've currently concluded it's not the same, because an "artificial universe" within a simulation can still interact with the parent universe: the simulation can influence stuff outside it, and stuff outside can influence the simulation.

Oddly, the thing that convinced me was thinking about morality. Thinking on it now, I guess framing it in terms of something to protect really is helpful. Ontological platonism can lead to some fucked up conclusions, morally. I'll share a fleshed-out version of the thought-chain that changed my mind.

Review the claim, briefly:

But numbers are just... numbers. If we have a computer calculating the fibonacci sequence, it's hard to see that running the calculating program makes this sequence any more real than if we had just conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.

1) So, if I set the initial conditions for a universe containing Suffering Humans, I'm not responsible - the initial conditions of the Hell-universe existed Platonically regardless of the fact that I defined it in the mathematical space.

2) Alright, so now what if I run the Hell Universe? Well, platonically speaking I already specified the entire universe when I laid out the initial conditions, so I don't see why running it is a big deal.

So we are currently running a Simulation of Hell, with a clean conscience. If you haven't already bailed from this ontology, let's continue...

3) Mathematically, the Hells which happen to have Anne inserted at time T were already in the platonic space of possible universes, so why not set the conditions and run that universe? Anne is a real person, by the way - we're just inserting a copy of her into the hell-verse.

4) Anne just uploaded her consciousness onto a hard drive. Hold on...Anne can now be thought of as a self contained system, with input and output. Anne's consciousness is defined in the platonic space, as are all possible inputs and outputs that she might experience. If every input we might subject Anne to is already defined in platonic space, it makes no difference which one we choose to actually represent on the computer...

...Anyway, you see where this leads. Now forget the morality part - that was just to illustrate the weaknesses of Platonic ontology. Considering all mathematical structures equally "real" makes the concept of "reality" lose all meaning. There is something very important which distinguishes reality from non-real mathematical universes - the fact that you can observe it. The fact that it can interact with you.

This might seem less obvious when you're unsure whether or not your universe is a simulation, but it's obvious to the parent universe. If we ever start simulating things, we're not going to think of it as simply a representation specifying a point in platonic space - we're going to think of the simulated world as a part of our reality.

Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]

That's not a bullet...I'd say you were biting a bullet if you didn't believe that. Reality has to be a mathematical construct - if it isn't, we've just thrown logic out the window. But that doesn't mean anyone was sitting around writing the equation.

Reality is also special. It's different from all those other mathematical constructs, because I will only ever observe reality.

Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge

I don't think we should be calculating likelihoods this way.

I go to good old Occam's razor (or, more modernly, Minimum Message Length). Does the simulation argument make for a simpler model? As in, can you actually suggest a universe in which we are a simulation which is simpler than the universe outlined by vanilla physics? (The answer isn't necessarily "no", but I'd say that the simpler the laws we observe, the more likely the answer is to be "no". If we live in a more complicated universe - especially if the laws of the universe seemed to care about agents (the fact that we are even here does up the probability of that) - the answer might be "yes". That said, I'd still bet on "no".)
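One crude way to make the comparison concrete, using compressed length as a computable stand-in for a true minimum message length (which is uncomputable), and placeholder strings rather than serious encodings:

```python
import zlib

def description_bits(hypothesis: str) -> int:
    # Compressed length in bits: a rough, computable proxy for the
    # minimum message length of a hypothesis description.
    return 8 * len(zlib.compress(hypothesis.encode("utf-8")))

vanilla = "laws of physics L, initial conditions I"
simulation = ("laws of physics L, initial conditions I, "
              "plus a host universe, its physics, and a simulator program")

# The simulation hypothesis wins only if its *total* description,
# host universe included, comes out shorter.
```

The point of the sketch is just that the comparison is between total description lengths, not between intuitions about plausibility.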

Comment author: lmm 12 October 2013 09:27:49AM 1 point [-]

There is something very important which distinguishes reality from non-real mathematical universes - the fact that you can observe it. The fact that it can interact with you.

I think this leads to unpleasant conclusions. If causality is all we care about, does that mean we shouldn't care about people who are too far away to interact with (e.g. people on an interstellar colony too far away to reach in our lifetime)? Heck, if someone dived into a rotating black hole with the intent to set up a civilization in the zone of "normal space" closer to the singularity, I think I'd care about whether they succeeded, even though it couldn't possibly affect me. Back on Earth, should we care more about people close to us and less about people further away, since we have more causal contact with the former? Should we care more about the rich and powerful than about the poor and weak, since their decisions are more likely to affect us?

I go to good-old Occam's razor (or more modernly, Mimimum Message Length). Does the simulation argument make for a simpler model? As in, can you actually suggest me a universe in which we are a simulation which is simpler than the universe outlined by vanilla physics?

If you don't consider the possibility of being simulated it seems like you would make wrong decisions. Suppose that you agree with Bob to create 1000 simulations of the universe tonight, and then tomorrow you'll place a black sphere in the simulated universes. Tomorrow morning Bob offers to bet you a cookie that you're in one of the simulated universes. If you take the bet on the grounds that the model of the universe in which you're not in the simulation is simpler, then it seems like you lose most of the time (at least under naive anthropics).
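Under the naive anthropics the example assumes, the arithmetic is just copy-counting:

```python
def p_simulated(real_copies: int, simulated_copies: int) -> float:
    """Naive anthropic probability of being one of the simulated copies."""
    return simulated_copies / (real_copies + simulated_copies)

# In the Bob example: one real universe, 1000 simulations, so refusing
# the bet loses in roughly 1000 out of every 1001 cases.
odds = p_simulated(1, 1000)
```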

Now obviously in real life we don't have this indication as to whether we're a simulation. But if we're trying to make a moral decision for which it matters whether we're in a simulation, it's important to get the right answer.

Comment author: Ishaan 12 October 2013 07:09:58PM *  0 points [-]

If you don't consider the possibility of being simulated

Didn't say that. We might be in a simulation. The question is, is that the more parsimonious hypothesis?

Observation is the king of epistemology, and Parsimony is queen. If parsimony says we're simulated, then we're probably simulated. In the counter-factual world where I have a memory of agreeing with Bob to create 1000 simulations, then parsimony says I'm likely in a simulation. We might be in a universe where the most parsimonious hypothesis given current evidence is simulation, or we might not. Would that I had a parsimony calculator, but for now I'm just guessing not.

There are observations that might lead a simulation hypothesis to be the most parsimonious hypothesis. I claim it as a question which is ultimately in the realm of science, although we still need philosophy to figure out a good way to judge parsimony.

unpleasant conclusions

These two statements sum my current stance.

Epistemic Rationality: Take every mathematical structure that isn't ruled out by the evidence. Rank them by parsimony.

CDT (which I'll take as "instrumental rationality" for now): If your actions have results, you can use actions to choose your favorite result.

so, applying that to the points you raised...

Should we care more about the rich and powerful than about the poor and weak, since their decisions are more likely to affect us?

I have sufficient evidence to believe that both the poor and the rich exist. I care about them both. In the counter-factual world where I was more certain concerning the existence of the rich and less certain concerning the existence of the poor, it would make sense to direct my efforts to the rich.

If I want to give people utils, and if I can give 10 utils to person R when I have 70% certainty that they exist to benefit from it, or 20 utils to person P when I have 10% certainty that they exist to benefit from it, I obviously choose person R.
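The calculation being gestured at is ordinary expected utility, with the numbers taken from the example:

```python
def expected_utils(utils: float, p_exists: float) -> float:
    # Discount the benefit by the probability that the recipient
    # exists to receive it.
    return utils * p_exists

# Person R: 10 utils at 70% certainty gives 7 expected utils.
# Person P: 20 utils at 10% certainty gives only 2 expected utils.
```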

Back to reality: I've got incredible levels of certainty that both the rich and the poor exist.

should we care more about people close to us and less about people further away, since we have more causal contact with the former?

Once again, it's a question of certainty that they exist. If I told you that donating $100 to the impoverished Lannisters would be efficient altruism, wouldn't you want to check whether such people truly exist and whether the claims I made about them are true?

if someone dived into a rotating black hole with the intent to set up a civilization in the zone of "normal space" closer to the singularity, I think I'd care about whether they succeeded, even though it couldn't possibly affect me

You'd put every effort into ensuring that they succeeded before they dived into the black hole and became causally disconnected from you. Afterwards, your memory of them would remain as evidence that they exist... you'd hope they were doing alright, but you have no way of knowing, and your actions will not affect them now.

If causality is all we care about, does that mean we shouldn't care about people who are too far away to interact with (e.g. people on an interstellar colony too far away to reach in our lifetime)?

taboo care...

Given your current observations, what likelihood can you assign to their existence? (emotional reactions like "care" will probably follow from this).

Can you help them or hurt them via your actions?

So of course you'd care ... in proportion to your certainty that they exist.

Comment author: lmm 12 October 2013 10:43:16PM 0 points [-]

Observation is the king of epistemology, and Parsimony is queen. If parsimony says we're simulated, then we're probably simulated. In the counter-factual world where I have a memory of agreeing with Bob to create 1000 simulations, then parsimony says I'm likely in a simulation.

It seems to me the most parsimonious hypothesis is that the human race will create many simulations in the future - that seems like the natural course of progress, and I think we need to introduce an additional assumption to claim that we won't. If we accept this then the same logic as if we'd made that agreement with Bob seems to hold.

I have sufficient evidence to believe that both the poor and the rich exist. I care about them both. In the counter-factual world where I was more certain concerning the existence of the rich and less certain concerning the existence of the poor, it would make sense to direct my efforts to the rich.

Hang on. You've gone from talking about "what I can interact with" to "what I know exists". If logic leads us to believe that non-real mathematical universes exist (i.e. under available evidence the most parsimonious assumption is that they do, even though we can't causally interact with them), is that or is that not sufficient reason to weigh them in our moral decisionmaking?

Comment author: Ishaan 13 October 2013 08:25:10AM -1 points [-]

You've gone from talking about "what I can interact with" to "what I know exists"

My mistake for using the word "interaction" then - it seems to have different connotations to you than it does to me.

Receiving evidence - AKA making an observation - is an interaction. You can't know something exists unless you can causally interact with it.

If logic leads us to believe that non-real mathematical universes exist

How can something non-real exist?

I dispute the idea that what does or does not exist is a question of logic.

I say that logic can tell you how parsimonious a model is, whether it contains contradiction, and stuff like that.

But only observation can tell you what exists / is real.

If we accept this then the same logic as if we'd made that agreement with Bob seems to hold.

I'd argue that any simulations that humanity makes must be contained within the entire universe. So adding lower simulations doesn't make the final description of the universe any more complex than it already was. Positing higher simulations, on the other hand, does increase the total number of axioms.

The story you reference contains the case where we make a simulation which is identical to the actual universe. I think that unless our universe has some really weird laws, we won't actually be able to do this.

Not all universes in which humanity creates simulations are universes in which it is parsimonious for us to believe that we are someone's simulation.

Comment author: lmm 13 October 2013 09:17:06AM *  0 points [-]

But only observation can tell you what exists / is real.

You're right, I was being sloppy. My point was: suppose the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with. Do we consider those people in our moral calculations?

I'd argue that any simulations that humanity makes must be contained within the entire universe. So adding lower simulations doesn't make the final description of the universe any more complex than it already was. Positing higher simulations, on the other hand, does increase the total number of axioms.

I can see the logic, but doesn't the same argument apply equally well in the "agreement with Bob" case?

The story you reference contains the case where we make a simulation which is identical to the actual universe. I think that unless our universe has some really weird laws, we won't actually be able to do this.

True, but only necessary so that the participants can remember being the people they were outside the simulation; I don't think it's fundamental to any of the arguments.

Comment author: Ishaan 13 October 2013 08:11:57PM *  -1 points [-]

My point was: suppose the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with. Do we consider those people in our moral calculations?

This is impossible. No causal interaction means no observations. A parsimonious model cannot posit any statements that have no implications for your observations.

But I understand the spirit of your question: if they had causal implications for us, but we had no causal implications for them (implying that we can observe them and they can affect us, but they can't observe us and we can't affect them), then I would certainly care about what happened to them.

But I still can't factor them into any moral calculations, because my actions cannot affect them. The laws of the universe have rendered me powerless.

I can see the logic, but doesn't the same argument apply equally well in the "agreement with Bob" case?

and

True, but only necessary so that the participants can remember being the people they were outside the simulation; I don't think it's fundamental to any of the arguments.

I'm not sure I follow these two statements- can you elaborate what you mean?

Comment author: TheOtherDave 13 October 2013 08:27:03PM 2 points [-]

This is impossible. No causal interaction means no observations. A parsimonious model cannot posit any statements that have no implications for your observations.

Wait, what?

So, I go about my life observing things, and one of the things I observe is that objects don't tend to spontaneously disappear... they persist, absent some force that acts on them to disrupt their persistence. I also observe things consistent with there being a lightspeed limit to causal interactions, and with the universe expanding at such a rate that the distance between two points a certain distance apart is increasing faster than lightspeed.

Then George gets into a spaceship and accelerates to near-lightspeed, such that in short order George has crossed that distance threshold.

Which theory is more parsimonious: that George has ceased to exist? that George persists, but I can't causally interact with him? that he persists and I can (somehow) interact with him? other?

I still can't factor them into any moral calculations because my actions cannot effect them

Suppose my current actions can affect the expected state of George after he crosses that threshold (e.g., I can put a time bomb on his ship). Does the state of George-beyond-the-threshold factor into my moral calculations about the future?

Comment author: Ishaan 13 October 2013 09:02:42PM *  -1 points [-]

Which theory is more parsimonious

That George persists, but I can't causally interact with him.

Suppose my current actions can affect the expected state of George after he crosses that threshold (e.g., I can put a time bomb on his ship). Does the state of George-beyond-the-threshold factor into my moral calculations about the future?

Yes.

My rule: "A parsimonious model cannot posit any statements that have no implications for your observations" has not been contradicted by my answers. The model must explain your observation that a memory of George getting into that spaceship resides in your mind.

As to whether or not George disappeared as soon as he crossed the distance threshold... it's possible, but the set of axioms necessary to describe the universe where George persists is more parsimonious than the set of axioms necessary to describe the universe where George vanishes. Therefore, you should assign a higher probability to the hypothesis that George persists.

This is the solution to the so called "Problem" of Induction. "Things don't generally disappear, so I'll assume they'll continue not disappearing" is just a special case of parsimony. Universes in which the future is similar to the past are more parsimonious.

Comment author: TheOtherDave 13 October 2013 09:51:31PM 1 point [-]

I basically agree with all of this.
So, when lmm invites us to suppose that the most parsimonious model that explains our observations also implies the existence of some people who we can't causally interact with, is George an example of what lmm is inviting us to suppose? If not, why not?

Comment author: lmm 13 October 2013 09:10:38PM 0 points [-]

This is impossible. No causal interaction means no observations. A parsimonious model cannot posit any statements that have no implications for your observations.

TheOtherDave's already covered this part

I'm not sure I follow these two statements- can you elaborate what you mean?

Second one first:

The only reason we need to assume the simulation is identical to the outer universe is so that our protagonists' memory is consistent with being in either. The only reason this is a difficulty at all is because the protagonists need to remember arranging a simulation in the outer universe for the sake of the story, as that's the only reason they suspect the existence of simulated universes like the one they are currently in.

If the protagonists have some other (magical, for the moment) reason to believe that a large number of universes exist and most of those are simulated in one of the others, it doesn't matter if the laws of physics differ between universes - I don't think that's essential to any of the other arguments (unless you want to make an anthropic argument that a particular universe is more or less likely to be simulated than average because of its physical laws).

Now for my first statement.

Your argument as I understood it is: Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still more parsimonious to assume that we are in the "outer" universe.

My response is: doesn't this same argument mean that we should accept Bob's bet in my example (and therefore lose in the vast majority of cases)?

Comment author: Ishaan 13 October 2013 09:19:19PM *  0 points [-]

See the response to TheOtherDave

Your argument as I understood it is: Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still more parsimonious to assume that we are in the "outer" universe.

Then there has been a miscommunication at some point. If you rephrase that as:

"Even if the most parsimonious explanation of our observations necessitates the existence of an "outer" universe and a large number of simulated universes inside it, it is still sometimes more parsimonious to assume that we are in the "outer" universe."

Then you'd be right. The fact that we have the capacity to simulate a bunch of universes ourselves doesn't in-and-of-itself count as evidence that we are being simulated. My argument is more or less identical to V_V's in the other thread.

(unless you want to make an anthropic argument that a particular universe is more or less likely to be simulated than average because of its physical laws)

I would agree with that statement. If our universe turns out to have a ridiculously complex set of laws, it might actually be more parsimonious to posit an Outer Universe with much simpler laws which gave rise to beings which are simulating us. (In the same way that describing the initial conditions of the universe is probably a shorter message than describing a human brain)

Comment author: torekp 18 October 2013 11:59:28PM 0 points [-]

Considering all mathematical structures equally "real" makes the concept of "reality" lose all meaning.

I agree, and I'd like to offer additional argument. Mathematical objects exist. Almost no one would deny that, for example, there is a number between 7,534,345,617 and 7,534,345,619. Or that there is a Lie group with such-and-such properties. What distinguishes Tegmark's claims from these unremarkable statements? Roughly this: Tegmark is saying that these mathematical objects are physically real. But on his own view, this just amounts to saying that mathematical objects are mathematical objects. Yeah yeah Tegmark, mathematical objects are mathematical objects, can't dispute that, but don't much care. Now I'll turn my attention back to tangible matters.

Tegmark steals his own thunder.

Comment author: Ishaan 19 October 2013 03:42:44AM *  -2 points [-]

I think Tegmark's level 1-4 taxonomy is useful. Strip it of physics and put it to philosophy:

Lv 1) What we can observe directly (qualia)

Lv 2) What we can't observe, but could be (Russell's teapot)

Lv 3) What we can't observe, but we know might have happened if chance played out differently. (many-worlds)

Lv 4) Mathematical universes.

These are distinct concepts. The question is, where and how do you draw a line and call it reality? (I say that we can't include 4, nor can we only include 1. We either include 1, 2 or 1, 2, 3...preferably the former.)

Comment author: torekp 21 October 2013 01:01:31AM 0 points [-]

I took the portion of your comment I quoted to be about level 4 only. Anyway, that is where my comment is aimed, at agreeing that we can't include 4.

Comment author: FeepingCreature 13 October 2013 05:15:49PM -1 points [-]

I'm currently at the conclusion it's not the same, because an "artificial universe" within a simulation can still interact with the universe. The simulation can influence stuff outside the simulation, and stuff outside the simulation can influence the simulation.

Yeah, but unmodified simulations are the same, whereas modified simulations diverge. The fact that something from the outside interacted with the simulation means that it's just one distinguishably-different one out of many. Purely statistically speaking, we'd expect not-screwed-with universes to form the biggest probability block by far.

Comment author: Ishaan 13 October 2013 08:16:18PM *  1 point [-]

I'm not quite sure what you mean. Would you mind rephrasing or elaborating?

Comment author: FeepingCreature 13 October 2013 09:19:40PM 0 points [-]

The evolution of a universe that's not being influenced by its host universe is determined by its initial state. However, any interaction of a host universe with the nested universe adds bits to its description. Therefore, even if we'd numerically expect most host universes to screw with their child universes somehow (which still isn't a given!), they'll all screw with them in different ways, whereas the unscrewed-with ones will all look the same. Thus, while most universes may be screwed-with (which isn't even a given!), the set of unscrewed-with universes is still the biggest subset.

Comment author: Ishaan 13 October 2013 09:32:34PM *  -1 points [-]

However, any interaction of a host universe with the nested universe adds bits to its description

No, you can subtract information from things. Edge case: what if the host just replaces every bit in the hard drive with all 0's?

the set of unscrewed-with universes is still the biggest subset.

In what? the platonic mathematical space? Or the subset of universes that a given host universe simulates?

I think I do get your meaning, but it doesn't seem very well defined...

Comment author: FeepingCreature 13 October 2013 11:45:08PM *  1 point [-]

No, you can subtract information from things.

Of course you can end up with a state that has a lower minimal description length. However, almost any interaction is gonna end up adding bits.

In what? the platonic mathematical space?

Yes, and yes this is very ill-defined, and yes it's not clear why the set size should matter, but the simulation argument rests on the very same assumption - some kind of equal anticipation prior over causes for our universe? So if you already accept the premise that universe counting should matter for the simulation argument, you can just reuse that for the "anticipate being in the unscrewed with universe" argument. (Shouldn't you anticipate being in a screwed with universe, even if you don't know in which way it'd be screwed with? Hm. Is this evidence that most hosts end up not screwing with their sims?)

Comment author: Ishaan 14 October 2013 12:20:30AM *  0 points [-]

If we're only talking about the platonic mathematical space, then why does it matter what hosts do or do not do to their simulations?

The entire thing (host and simulation) is one interacting mathematical unit. There might also be a mathematical unit that represents the simulation, independently of the host, but we can count that separately.

There are an infinite number of mathematical structures that could explain your observations. An infinite number of those involve simulations, and an infinite number of them don't involve simulations. Of the ones that involve simulations, an infinite number of them are "screwed" with and an infinite number are "unscrewed".

So, if we want to choose a model where everything in the platonic mathematical space is "real" (on one level I want to condemn this as literally the most un-parsimonious model of reality, and on another level I'll just say that you have defined reality in a funny way and it's just a semantic distinction), and then we want to figure out where within this structure we are, using the rule that "the likelihood of a statement concerning our location being true corresponds to the number of universes in which it is true and which also fit our other observations", then we have to find a way of comparing infinities.

And that's what you're doing - comparing infinities. So ... what mechanism are you proposing for doing so?

Comment author: FeepingCreature 14 October 2013 01:34:19AM *  1 point [-]

I don't know, but the fact that out of an infinity of possible universes we're practically in the single-digit integers has to mean something. Ask a genie for a random integer and you'd be surprised if it ever finished spitting out digits in the lifetime of the universe; for it to stop after a few minutes of talking would be absurd. So either we're vastly wrong about the information-theoretic complexity of our universe, or the seeming simplicity of its laws is due to sampling bias, or MU is wrong and this universe really just happens to exist for no good answerable reason, or there's a ludicrous coincidence at work, or there has to be some reason why we are more likely to find ourselves in a universe at the start of the chain, whose hosts are not visibly screwing with it. The point is to add up to normality, after all.

Comment author: FeepingCreature 13 October 2013 05:13:04PM 1 point [-]

The problem with mathematical realism (which, btw, see also), is that it's challenging to justify the simplicity of our initial state - Occam is not a fundamental law of physics, and almost all possible universe-generating laws are unfathomably large. You can sort of justify that by saying "even universes with complicated initial states will tend to simulate simple universes first", but that just leaves you asking why the number of simulations should matter at all. (I don't have a good answer to that; if you find one, I'd love if you could tell me)

Comment author: lmm 13 October 2013 06:32:21PM 0 points [-]

Like I say, I think a good theory of anthropics is the best hope for this. Under UDASSA it's "obvious" that one would be most likely to find oneself in a simple universe - though that may just be begging the question, as I'm not aware of a justification for using a complexity measure in UDASSA.

Comment author: V_V 12 October 2013 09:17:57PM 1 point [-]

Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels)

Why this fascination with Haskell?
It seems more like a toy, or an educational tool, or at the very best a tool for highly specialized research, but pretty surely not suitable for any large-scale programming.

Comment author: CronoDAS 13 October 2013 12:56:53AM 1 point [-]
Comment author: V_V 13 October 2013 12:59:13AM 0 points [-]

LoL!

Comment author: CronoDAS 13 October 2013 01:23:17AM 2 points [-]
Comment author: peterward 12 October 2013 04:56:29PM 1 point [-]

Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) follows lazy evaluation: a value is not calculated unless it is used.

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?
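For readers unfamiliar with lazy evaluation, here's a generator-based sketch of the idea (the `tick` step function is a toy stand-in, not anything resembling real physics):

```python
import itertools

def universe_states(initial, step):
    """Lazily yield successive states; each state is computed only when demanded."""
    state = initial
    while True:
        yield state
        state = step(state)

# Asking for the state at t = 5 forces only the first six states and
# nothing beyond them - render on demand, as the question suggests.
tick = lambda s: s + 1
state_at_5 = next(itertools.islice(universe_states(0, tick), 5, None))
```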

If on the other hand wholeuniversefromtimeimmemorial() needs to execute every time, which of course assumes a loophole gets found to infinitely add information to the host universe, then presumably every possible argument (which includes the program's own code--itself a constituent of the universe being simulated) would be needed by the function anyway, so why not strict evaluation?

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it seems likely givemethewholeuniverse() would have to execute everything at once--more precisely, would have to exist outside of time--to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.

Comment author: lmm 12 October 2013 10:52:29PM 2 points [-]

In that case, why does the simulation need to be running all the time? Wouldn't one just ask the fancy, lambda-derived software to render whatever specific event one wanted to see?

Indeed we would. If you believe we are such a simulation, that implies the simulator is interested in some event that causally depends on today's history. I don't think this matters though.

And both of these cases still assume we handle time in a common-sense fashion. According to relativity, time is intertwined with the other dimensions, and these dimensions in turn are an artifact of our particular universe, distinctive characteristics created at the Big Bang along with everything else. Therefore, it seems likely givemethewholeuniverse() would have to execute everything at once--more precisely, would have to exist outside of time--to accurately simulate the universe (or simulation thereof) we observe. Even functional programming has to carry out steps one after the other, requiring a universe with a time dimension, even if the logic to this order is different from that of traditional imperative paradigms.

Causality is preserved under relativity, AIUI. You may not necessarily be able to say absolutely whether one event happened before or after another, but you can say what the causal relation between them is (whether one could have caused the other, or they are spatially separated such that neither could have caused the other). So there is no problem with using naive time in one's simulations.

Are you arguing that a simulatable universe must have a time dimension? I don't think that's entirely true; all it means is that a simulatable universe must have a non-cyclic chain of causality. It would be exceedingly difficult to simulate e.g. the Godel rotating universe. But a universe like our own is no problem.
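The claim that only an acyclic causal structure is needed--not a distinguished time dimension--can be sketched with Python's standard-library `graphlib`. The event names and the causal graph are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical causal structure: each event maps to the set of events
# it causally depends on. No global "time" coordinate is assumed.
causes = {
    "big_bang": set(),
    "event_a": {"big_bang"},
    "event_b": {"big_bang"},
    "event_c": {"event_a", "event_b"},  # depends on two causally prior events
}

# Any topological order is a valid evaluation order for the simulation:
# causes always come before effects, but causally disconnected events
# (event_a and event_b) may be computed in either order.
order = list(TopologicalSorter(causes).static_order())
```

This is why a cyclic causal structure (like the Gödel rotating universe mentioned above) is the hard case: `TopologicalSorter` would raise a `CycleError`, and there would be no valid evaluation order at all.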

Comment author: Luke_A_Somers 11 October 2013 09:57:25PM 1 point [-]

The Numerical Platonist's construct is just the universe itself again. No problem there.

If you're not a numerical platonist, I don't see how unexecuted computations could be experienced.

And that leaves us with regular simulation.

(Incidentally, point 6 has a hidden assumption about the distribution of simulated universes)

Comment author: lmm 11 October 2013 10:48:45PM 0 points [-]

The Numerical Platonist's construct is just the universe itself again. No problem there.

Why? If it's just because the computations come out the same, doesn't that mean any simulation of the universe is also just the universe itself again?

Comment author: Decius 13 October 2013 08:21:05PM 0 points [-]

Technically we are already running a perfect simulation of a universe literally indistinguishable from our own.

The fact that such a simulation is indistinguishable means that we should be ambivalent about whether it is simulated or not--however, simulations which we run ARE distinguishable from our reality, in the same sense that a Godel statement is true, even if the difference is not apparent from within the simulation.

Comment author: lmm 14 October 2013 11:51:12AM 1 point [-]

The fact that such a simulation is indistinguishable means that we should be ambivalent about whether it is simulated or not-

Does that necessarily follow? Should we necessarily be ambivalent about e.g. events in any other inflationary bubble (i.e. in star systems that have become causally disconnected from our own)?

Comment author: Decius 14 October 2013 10:05:46PM *  -1 points [-]

To your first question: Yes. If something has one of two characteristics, but no information that we can (even theoretically) acquire allows us to determine which of those is true, then it is not meaningful to care about which one is true. Dropping to the object-level, it would be contradictory to have a simulation which accepted as input ONLY a set of initial conditions, but developed sentient life that was aware of you.

To your second question: "star systems that have become causally disconnected from our own" are distinguishable from our own. I'll answer the question "Should we necessarily be ambivalent about things which we cannot even theoretically interact with" as a general case.

Utilitarian: Yes. (It has no effect on us)
Consequentialist: Yes. (We have no effect on them)
Social Contract: Only if we don't have a deal with them.
Deist: Only if God says so.
Naive: Yes; I can't know what they are, so I can't change my decisions based on them.

What theory of ethics or decision has a non-trivial answer?

Comment author: lmm 15 October 2013 11:48:23AM 1 point [-]

It seems like we could reasonably have a utility function that assigns more or less value to certain actions depending on things we can't causally interact with. E.g. a small risk of wiping out all humanity within our future light cone would, I think, be less of a negative if I knew there was a human colony in a causally disconnected region of the universe.

Comment author: Decius 16 October 2013 07:41:36AM *  0 points [-]

How much less? What's the asymptote (of the ratio) as the number of human colony ships that have exited the light cone approaches infinity?

ETA: Also, that scenario moved the goalposts again. The question was "Should we consider those hypothetical colonists' opinions when deciding to risk destroying everything we can?"

Comment author: lmm 16 October 2013 11:34:53AM 1 point [-]

I don't have a ratio; it's more that I attach an additional (fixed) premium to killing off the entire human race, on top of the ordinary level of disutility I assign to killing each individual human.

(nb I'm trying to phrase this in utilitarian terms but I don't actually consider myself a utilitarian; my true position is more what seems to be described as deontological?)

Comment author: Decius 16 October 2013 10:37:30PM 1 point [-]

So you attach some measure of utility to the statement 'Humanity still exists', and then attach a probability to humanity existing outside of your light cone based on the information available; if humanity is 99% likely to exist outside of the cone, then the additional disutility of wiping out the last human in your light cone is reduced by 99%?

And the disutility of genocide and mass slaughters short of extinction remain unchanged?
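The bookkeeping being proposed in this exchange can be put in toy numbers. Both figures below are assumptions chosen purely for illustration, not anything either commenter endorses:

```python
# Hypothetical: a fixed extinction premium on top of the per-person
# disutility of deaths, discounted by the credence that humanity
# survives outside our light cone.
EXTINCTION_PREMIUM = 1000.0  # assumed disutility of 'humanity ends entirely'
p_outside = 0.99             # credence that a colony exists beyond the cone

# Wiping out the last human in our light cone only ends humanity with
# probability (1 - p_outside), so the premium is discounted accordingly.
effective_premium = (1 - p_outside) * EXTINCTION_PREMIUM

# The per-person disutility of deaths short of extinction is unchanged;
# only the extinction premium is sensitive to p_outside.
```

On these numbers the premium drops from 1000 to 10, which is the "reduced by 99%" reading confirmed in the reply below.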

Comment author: lmm 17 October 2013 11:49:35AM 0 points [-]

Yeah, that sounds like what I meant.

Comment author: DanielLC 11 October 2013 10:26:06PM *  0 points [-]

If just the conceptual possibility of the universe is enough to experience it, as some have suspected to be the case, you still have to consider the possibility that the part of the universe you're conceptually in is a simulation inside of another conceptual universe.

Looking at it from another angle, I'm pretty sure we all accept that our minds are running on computers known as human brains, and we don't just experience the conceptual possibility of those brains. Mind you, the entire universe might just be some kind of conceptual possibility, but there is a conceptual universe out there, and our minds are running on a tiny part of it. Once you accept this, it would seem hypocritical to reject out of hand the possibility of another layer of conceptual computation.

In short, just because we're in a mathematical construct doesn't mean that we're not part of a simulation within that mathematical construct. Simulation argument and the universe being a mathematical construct are not mutually exclusive.

But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe

Why would you even need that much? If we're just talking about the mathematical idea of this universe, it exists whether or not we know how to define it. It's not inconsistent to say that someone defining but not necessarily calculating the math is the necessary and sufficient condition for us to experience it, but I don't see why you'd draw the line there.

Comment author: lmm 11 October 2013 10:55:28PM 0 points [-]

In short, just because we're in a mathematical construct doesn't mean that we're not part of a simulation within that mathematical construct. Simulation argument and the universe being a mathematical construct are not mutually exclusive.

Sure, but if anything it seems like they both apply - we are overwhelmingly likely to be simulated humans in a mathematical-construct universe.

Why would you even need that much? If we're just talking about the mathematical idea of this universe, it exists whether or not we know how to define it. It's not inconsistent to say that someone defining but not necessarily calculating the math is the necessary and sufficient condition for us to experience it, but I don't see why you'd draw the line there.

I was trying to make it clear where the tradeoff with mathematical Platonism is. If you believe mathematical things exist eternally, or exist when defined, or exist when explicitly calculated, that affects what limit you have to place on human civilization's achievements (and if you're a straight-up Platonist then you can't make this objection at all, because as you say, the idea of the universe already exists).

Comment author: BaconServ 12 October 2013 12:35:11AM *  -2 points [-]

I think Can You Prove Two Particles Are Identical? explains the difference between the possibilities here very well: What is the difference? We cannot assume there is a difference simply for the sake of asking what the difference is. Though if you must, I should hope you're well aware of your assumption.