
Lumifer comments on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife - Less Wrong Discussion

7 Post author: crmflynn 02 November 2015 11:03PM




Comment author: Lumifer 04 November 2015 01:25:20AM 1 point [-]

We live in a very special time - right on the cusp of AGI

AGI has been 20 years away for the past 50 years or so. I see no reason to believe the pattern will break any time now :-/

Comment author: jacob_cannell 04 November 2015 07:12:21PM 1 point [-]

AGI has been 20 years away for the past 50 years or so.

No - AGI's arrival can be expected around the end of conventional Moore's Law, as that is naturally when we can expect to have brain-level hardware performance. Before that, AGI is impractical; shortly after, it becomes inevitable.

There are a large number of people making predictions, and almost all of them have no idea what they are talking about. It is the logic behind the predictions that matters.

Comment author: Lumifer 04 November 2015 07:34:12PM 0 points [-]

when we can expect to have brain level hardware performance

I don't think our progress in creating an AGI is constrained by hardware at this point. It's a software problem and you can't solve it by building larger and more densely packed supercomputers.

almost all of them have no idea what they are talking about

Yep :-)

Comment author: jacob_cannell 05 November 2015 06:27:11PM 0 points [-]

I don't think our progress in creating an AGI is constrained by hardware at this point

That is arguably just now becoming true for the first time - as we approach the end of Moore's Law and device geometries shrink to synapse-comparable sizes, densities, etc.

Still, current hardware/software is not all that efficient for the computations that intelligence requires - namely, enormous amounts of low-precision, noisy, approximate computing.

It's a software problem and you can't solve it by building larger and more densely packed supercomputers.

Of course you can - it just wouldn't be economical. AGI running on a billion-dollar supercomputer is not practical AGI, as AGI is AI that can do everything a human can do but better - and that naturally must include cost.

It isn't a problem of what math to implement - we have that figured out. It's a question of efficiency.

Comment author: Lumifer 05 November 2015 06:45:52PM *  1 point [-]

AGI running on a billion dollar super computer is not practical

Why not? AGI doesn't involve emulating Fred the janitor, the first AGI is likely to have a specific purpose and so will likely have huge advantages over meatbags in the particular domain it was made for.

If people were able to build an AGI on a billion-dollar chunk of hardware right now they would certainly do so, if only as a proof of concept. A billion isn't that much money to a certain class of organizations and people.

It isn't a problem of what math to implement - we have that figured out.

Oh, really? I'm afraid I find that hard to believe.

Comment author: jacob_cannell 05 November 2015 07:22:09PM 0 points [-]

AGI running on a billion dollar super computer is not practical

Why not?

Say you have the code/structure for an AGI all figured out, but it runs in real-time on a billion dollar/year supercomputer. You now have to wait decades to train/educate it up to an adult.

Furthermore, the probability that you get the seed code/structure right on the first try is essentially zero. So rather obviously, to even get AGI in the first place you need enough efficiency to run one AGI mind in real-time on something far, far less than a supercomputer.

It isn't a problem of what math to implement - we have that figured out.

Oh, really? I'm afraid I find that hard to believe.

Hard to believe only for those outside ML.

Comment author: V_V 09 November 2015 12:27:41PM *  0 points [-]

Hard to believe only for those outside ML

I don't think that even in ML the school of "let's just make a bigger neural network" is taken seriously.

Neural networks are prone to overfitting. All the modern big neural networks that are fashionable these days require large amounts of training data. Scale up these networks to the size of the human brain, and, even assuming that you have the hardware resources to run them, you will get something that just memorizes the training set and doesn't perform any useful generalization.
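
The memorization failure mode can be sketched with a toy example (purely illustrative: random data, and 1-nearest-neighbour as the simplest possible model that memorizes its training set). With features that carry no information about the labels, the model still achieves perfect training accuracy while generalizing at chance level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise data: features are independent of labels, so any apparent
# fit can only be memorization.
X_train = rng.normal(size=(50, 10))
y_train = rng.integers(0, 2, size=50)
X_test = rng.normal(size=(200, 10))
y_test = rng.integers(0, 2, size=200)

def predict_1nn(X_ref, y_ref, X):
    # 1-nearest-neighbour: predict the label of the closest stored example.
    d = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(-1)
    return y_ref[d.argmin(axis=1)]

train_acc = (predict_1nn(X_train, y_train, X_train) == y_train).mean()
test_acc = (predict_1nn(X_train, y_train, X_test) == y_test).mean()

print(train_acc)  # 1.0: each point's nearest neighbour is itself
print(test_acc)   # roughly 0.5: chance level, nothing was learned
```

This is the worry scaled down: more capacity makes perfect recall of the training set easier, not generalization.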

Humans can learn from comparatively small amounts of data, and in particular from very little and very indirectly supervised data: you don't have to show a child a thousand apples and press an "apple" button on their head each time for them to learn what an apple looks like.

There is currently lots of research in ML on how to make use of unsupervised data, which is cheaper and more abundant than supervised data, but this is still definitely an open problem - so much so that it isn't even clear what properties we want to model and how to evaluate these models (e.g. check out this recent paper).
Therefore, the math relevant to ML has definitely not been all worked out.

Comment author: jacob_cannell 09 November 2015 06:33:28PM *  0 points [-]

I don't think that even in ML the school of "let's just make a bigger neural network" is taken seriously.

That's not actually what I meant when I said we have the math figured out. The math behind general learning is just general Bayesian inference in its various forms. The difficulty is not so much in the math, it is in scaling up efficiently.

To a first approximation the recent surge in progress in AI is entirely due to just making bigger neural networks. As numerous DL researchers have admitted - the new wave of DL is basically just techniques from the 80's scaled up on modern GPUs.

Regarding unsupervised learning - I wholeheartedly agree. However, one should also keep in mind that UL and SL are just minor variations on the same theme in a Bayesian framework. If you have accurate labeled data, you might as well use it.

In order to recognize and verbally name apples, a child must first have years of visual experience. Supervised DL systems trained from scratch need to learn everything from scratch, even the lowest-level features. The objective in these systems is not to maximize learning from small amounts of training data.

In the limited training data domain and more generally for mixed datasets where there is a large amount of unlabeled data, transfer learning and mixed UL/SL can do better.

properties we want to model and how to evaluate these models (e.g. check out this recent paper).

Just discussing that here.

The only really surprising part of that paper is the "good model, poor sampling" section. It's not clear how often their particular pathological special case actually shows up in practice. In general, a Solomonoff learner will not have that problem.

I suspect that a more robust sampling procedure could fix the mismatch. A robust sampler would be one that outputs samples according to their total probability as measured by encoding cost. This corrects the mismatch between the encoder and the sampler. Naively implemented this makes the sampling far more expensive, perhaps exponentially so, but nonetheless it suggests the problem is not fundamental.

Comment author: V_V 09 November 2015 07:47:00PM *  1 point [-]

That's not actually what I meant when I said we have the math figured out. The math behind general learning is just general bayesian inference in it's various forms. The difficulty is not so much in the math, it is in scaling up efficiently.

OK, but then this is even more vague. At least neural networks are a coherent class of algorithms, with lots of architectural variations and hyperparameters to tune, but still functionally similar. General Bayesian inference, on the other hand, is a broad framework with dozens of types of algorithms for different tasks, based on different assumptions and with different functional structure.

You could as well say that once we formulated the theory of universal computation and we had the first digital computers up and running, then we had all the math figured out and it was just a matter of scaling up things. This was probably the sentiment at the famous Dartmouth conference in 1956 where they predicted that ten smart people brainstorming for two months could make significant advancements in multiple fundamental AI problems. I think that we know better now.

Regarding unsupervised learning - I wholeheartedly agree. However one should also keep in mind that UL and SL are just minor variations of the same theme in a bayesian framework. If you have accurate labeled data, you might as well use it.

Supervised learning may be a special case of unsupervised learning, but not the other way round. Currently we can only do supervised learning well, at least when big data is available. There have been attempts to reduce unsupervised learning to supervised learning, which had some practical success in textual NLP (with neural language models and word vectors) but not in other domains such as vision and speech.

The paper I linked, IMHO, may shed some light on why this happened: one of the most popular evaluation measures and training objectives, the negative log-likelihood (aka empirical cross-entropy), which captures well our intuition of what a good model must do in binary (or low-dimensional) classification tasks, may break down in the high-dimensional regime typical of some unsupervised tasks such as sampling.
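
For concreteness, the negative log-likelihood being discussed is just the average of -log p(correct label) under the model. A minimal sketch (purely illustrative numbers, not anything from the paper) shows how it rewards confident correct predictions and punishes confident wrong ones:

```python
import math

def nll(probs, labels):
    # Negative log-likelihood (empirical cross-entropy) for binary labels:
    # probs[i] is the model's probability that labels[i] == 1.
    return -sum(math.log(p if y == 1 else 1.0 - p)
                for p, y in zip(probs, labels)) / len(labels)

labels = [1, 0, 1, 1]
good = [0.9, 0.1, 0.8, 0.95]  # confident and correct
bad = [0.6, 0.4, 0.1, 0.5]    # hedging, and badly wrong on the third case

print(nll(good, labels))  # small: the model assigns high probability to the truth
print(nll(bad, labels))   # large: dominated by the confident mistake
```

A maximally ignorant model (p = 0.5 everywhere) scores exactly log 2 per example; V_V's point is that in low dimensions this measure tracks our intuitions well, while in high-dimensional generative settings it can come apart from sample quality.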

It's not clear how often their particular pathological special case actually shows up in practice.

I've never seen a modern generative model generate realistic samples of natural images or speech. Text generation fares somewhat better, but it's still far from anything able to pass a Turing test. By contrast, discriminative models for classification or regression trained on large supervised data can often achieve human-level or even super-human performances.

In general a Solomonoff learner will not have that problem.

Well, duh, but a Solomonoff learner is uncomputable. Inside a Solomonoff learner there would be a simulation of every possible human looking at the samples, among an infinite number of other things.

Comment author: jacob_cannell 09 November 2015 08:11:45PM 0 points [-]

At least neural networks are a coherent class of algorithms, with lots of architectural variations and hyperparameters to tune, but still functionally similar. General Bayesian inference, on the other hand, is a broad framework with dozens types of algorithms for different tasks, based on different assumptions and with different functional structure.

I don't agree with this memetic taxonomy. I consider neural networks to be mostly synonymous with algebraic tensor networks - general computational graphs over tensors. As such ANN describes a modeling language family, equivalent in expressibility to binary circuit models (and thus Turing universal) but considerably more computationally efficient. The tensor algebra abstraction more closely matches physical hardware reality.

So as a general computing paradigm or circuit model, ANNs can be combined with any approximate inference technique. Backprop on log-likelihood is just one obvious approximate method.
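
The "computational graph over tensors" view can be made concrete in a few lines (shapes and weights are purely illustrative): a feed-forward network is nothing but chained tensor operations - matrix multiplies, additions, and elementwise nonlinearities.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))    # a batch of 4 inputs, 8 features each
W1 = rng.normal(size=(8, 16))  # first layer's weight tensor
W2 = rng.normal(size=(16, 2))  # second layer's weight tensor

# Two graph nodes: tensor contraction + ReLU, then another contraction.
h = np.maximum(x @ W1, 0.0)
out = h @ W2

print(out.shape)  # (4, 2)
```

Any such graph of tensor ops is expressible this way; the disagreement in the thread is over whether that modeling language, plus an inference method, amounts to having "the math figured out."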

You could as well say that once we formulated the theory of universal computation and we had the first digital computers up and running, then we had all the math figured out

Not quite, because it took longer for the math for inference/learning to be worked out, and even somewhat longer for efficient approximations - and indeed that work is still ongoing.

Regardless, even if all the math had been available in 1956 it wouldn't have mattered, as they still would have had to wait 60 years or so for efficient implementations (hardware + software).

The paper I linked, IMHO, may shed some light on why this happened: one of the most popular evaluation measures and training objectives, the negative log-likelihood (aka empirical cross-entropy), which captures well our intuition of what a good model must do in binary (or low-dimensional) classification tasks, may break down in the high-dimensional regime typical of some unsupervised tasks such as sampling.

To the extent that this is a problem in practice, it's a problem with typical sampling, not the measure itself. As I mentioned earlier, I believe it can be solved by more advanced sampling techniques that respect total Kolmogorov/Solomonoff probability. Using these hypothetical correct samplers, good models should always produce good samples.

That being said I agree that generative modelling and realistic sampling in particular is an area ripe for innovation.

I've never seen a modern generative model generate realistic samples of natural images or speech.

You actually probably have seen this in the form of CG in realistic video games or films. Of course those models are hand crafted rather than learned probabilistic generative models. I believe that cross-fertilization of ideas/techniques from graphics and ML will transform both in the near future.

The current image generative models in ML are extremely weak when viewed as procedural graphics engines - for the most part they are just 2D image blenders.

Comment author: Lumifer 05 November 2015 07:34:18PM *  0 points [-]

Say you have the code/structure for an AGI all figured out

How would you know that you have it "all figured out"?

the probability that you get the seed code/structure right on the first try is essentially zero

Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?

Hard to believe only for those outside ML.

Right... build a NN a mile wide and a mile deep and let 'er rip X-/

Comment author: jacob_cannell 06 November 2015 12:31:00AM *  0 points [-]

Say you have the code/structure for an AGI all figured out

How would you know that you have it "all figured out"?

[Furthermore], the probability that you get the seed code/structure right on the first try is essentially zero

Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?

No, I never said it is not a software issue - because the distinction between software/hardware issues is murky at best, especially in the era of ML where most of the 'software' is learned automatically.

You are trolling now - cutting my quotes out of context.

Comment author: crmflynn 04 November 2015 02:46:12AM 0 points [-]

I am not sure it matters when it comes. Presumably, unless we find some other way to extinction, it will come at some point. When it comes, it is likely that the technology will not be a problem for it. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV) that will likely have a large effect on the odds of it happening.

Also, I know some people who sincerely think belief in god is based almost exclusively on fear of death. I am skeptical of this, but if it is true, or even partially true, then if even a fraction of the fervor/energy/dedication that is put into religion were put into pushing for this, I think it might be a serious force.

The point about credence is just a point about it being interesting, decision making aside, that something as fickle as collective human will, might determine if I “survive” death, and if all my dead loved ones will as well. So, for example, if this post, or someone building off of my post, but doing it better, were to explode on LW and pour out into reddit and the media, it should increase our credence in an afterlife. If its reception is lukewarm, decrease it. There is something really weird about that, and worth chewing on.

Also, I think that people’s motivation to have an afterlife seems like a more compelling reason to create simulations than experimentation/entertainment, so it helps shift credence around among the four disjuncts of the simulation argument.

Comment author: Lumifer 04 November 2015 04:02:38PM -1 points [-]

Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations.

Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it by people like you, that is, fresh people de novo and possibly people from your parents and grandparents generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?

it should increase our credence in an afterlife

No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.

Comment author: gjm 04 November 2015 04:50:25PM 2 points [-]

What you -- or everyone -- believes does not change the reality.

It can give evidence, though. Consider Hypothesis A: "Societies like ours will generally not decide, as their technological capabilities grow, to engage in massive simulation of their forebears" and Hypothesis B which omits the word "not". Then:

  • The decisions made by, and ideas widely held in, our society, can be evidence favouring A or B.
  • We are more likely simulations if B is right than if A is right.

Similarly if the hypotheses are "... to engage in massive simulation of their forebears, including blissful afterlives", in which case we are more likely to have blissful simulated afterlives if B is right than if A is right. (Not necessarily more likely to have blissful afterlives simpliciter, though -- perhaps, e.g., the truth of B would somehow make it less likely that we get blissful afterlives provided by gods.)

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.

Comment author: crmflynn 05 November 2015 01:37:35PM 0 points [-]

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.

I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.

I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have:

  1) Issues with the theory of identity
  2) Issues with the theory of mind
  3) Issues with the theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, the problem of evil)
  4) Self-interest (more resources for existing individuals to upload into and utilize)
  5) The existence of a convincing two-boxer “proof” of some sort

I also would like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly? I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?

Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.

Comment author: gjm 05 November 2015 08:50:57PM 1 point [-]

I'm not sure I have much to say that you won't have thought of already. But: First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.

Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world? Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?

Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's just no reconstructing our ancestors at this point.

Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?

Comment author: crmflynn 10 November 2015 12:31:55PM 0 points [-]

First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.

Absolutely. I think this is where this thing most likely fails - somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it leans me heavily in this direction.

Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world?

So I am not saying that they WOULD do it, but I can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included originally). Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence of an afterlife in which those already dead can join in. This is clearer, I think, in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt.

Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulation's afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual afterlife” of the simulation computer we are in.

As it is likely that the technology for simulations will come about at about the same time as for a “glorious virtual universe” we could even treat it as our last big hurrah before we upload ourselves. This makes sense as the people who exist when this technology becomes available will know a large number of loved ones who just missed it. They will also potentially be in especially imminent fear of the teletransportation paradox. I do not think there is any inherent conflict between doing both of these things.

Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?

Just to be clear, I am not talking about our actual individual ancestors. I actually avoided using the term intentionally as I think it is a bit confusing. I am pretty sure this is how Bostrom meant it as well in the original paper, with the word “ancestor” being used in the looser sense, like how we say “Homo erectus were our ancestors.” That might be my misinterpretation, but I do not think so. While I could be convinced, I am personally, currently, very skeptical that it would be possible to do any meaningful sort of replication of a person after they die. I think the only way that someone who has already died has any chance of an afterlife is if we are already in a simulation. This is also why my personal, atheistic mind could be susceptible to donating to such a cause when in grief. I wrote an update to my original post at the bottom where I clarify this.

The point of the simulation is to change our credence regarding our self-location. If the vast majority of “people like us” (which can be REALLY broadly construed) exist in simulations with afterlives, and do not know it, we have reason to think we might also exist in such a simulation. If this is still not clear after the update, please let me know, as I am trying to pin down something difficult and am not sure if I am continuing to privilege brevity to the detriment of clarity.

Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's just no reconstructing our ancestors at this point.

I agree with your point so strongly that I am a little surprised to have been interpreted as meaning this. I think that it seems theoretically feasible to simulate a world full of individual people as they advance their way up from simple stone tools onward, each with their own unique life and identity, each existing in a unique world with its own history. Trying to somehow make this the EXACT SAME as ours does not seem at all possible. I also do not see what the advantage of it would be, as it is not more informative or helpful for our purposes to know whether we are the same as the people above us, so why would we try to “send that down” below us? We do not care about that as a feature of our world, and so would have no reason to try to instill it in the worlds below us. There is sort of a “golden rule” aspect to this in that you do to the simulation below you the best feasible, reality-conforming version of what you want done to you.

Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?

Maybe? I think that one of the interesting parts about this is where we would choose to draw policy lines around it. Do dogs go to the afterlife? How about fetuses? How about AI? What is heaven like? Who gets to decide this? These are all live questions. It could be that they take a consequential hedonistic approach that is mostly neutral between “who” gets the heaven. It could be that they feel obligated to go back further in gratitude of all those (“types”) who worked for advancement as a species and made their lives possible. It could be that we are actually not too far from superintelligent AI, and that this is going to become a live question in the next century or so, in which case “we” are that class of people they want to simulate in order to increase their credence of others similar to us (their relatives, friends who missed the revolution) being simulated.

As far as how far back you bother to simulate people, it might actually be easier to start off with some very small bands of people in a very primitive setting than to try to go through and make a complex world for people to “start” in without the benefit of cultural knowledge or tradition. It might even be that the “first people” are based on some survivalist back-to-basics hobbyist types who volunteered to be emulated, copied, and placed in different combinations in primitive earth environments in order to live simple hunter-gatherer lives and have their children go on to populate an earth (possible date of start? https://en.wikipedia.org/wiki/Population_bottleneck). That said, this is deep into the weeds of extremely low-probability speculation. Fun to do, but increasingly meaningless.

Comment author: jacob_cannell 06 November 2015 12:52:53AM 0 points [-]

We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely).

Yes, but that isn't enough to defeat simulations. One successful future can create a huge number of sims. Observational selection effects thus make survival far more likely than otherwise expected.

It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing.

Even without quantum computing or reversible computing, even just using sustainable resources on earth (solar) - even with those limitations - there are plenty of resources to create large numbers of sims.

In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world?

The cost is about the same either way. So the question is one of economic preferences. When people can use their wealth to create either new children or bring back the dead, what will they do? You are thus assuming there will be very low demand for resurrecting the dead vs creating new children. This is rather obviously unlikely.

This technology probably isn't that far away - it is a 21st century tech, not 25th. It almost automatically follows AGI, as AGI is actually just the tech to create minds - nothing less. Many people alive today will still be alive when these sims are built. They will bring back their loved ones, who then will want to bring back theirs, and so on.

I see no good reason to think it's possible.

Most people won't understand or believe it until it happens. But likewise very few people actually understand how modern advanced rendering engines work - which would seem like magic to someone from just 50 years ago.

It's an approximate inference problem. The sim never needs anything even remotely close to atomic information. In terms of world detail levels it only requires a little more than current games. The main new tech required is just the large scale massive inference supercomputing infrastructure that AGI requires anyway.

It's easier to understand if you just think of a human brain sim growing up in something like the Matrix, where events are curiously staged and controlled behind the scenes by AIs.

Comment author: gjm 06 November 2015 02:33:41AM 0 points [-]

The opinion-to-reasons ratio is quite high in both your comment and mine to which it's replying, which is probably a sign that there's only limited value in exploring our disagreements, but I'll make a few comments.

One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as "why would it create any?".)

The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it's easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it's possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where's the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that's one more reason to think that resurrecting them is terribly expensive.)

(If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they'll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.)

You say with apparent confidence that "this technology probably isn't that far away". Of course that could be correct, but my guess is that you're wronger than a very wrong thing made of wrong. We can't even simulate C. elegans yet, even though that only has about 300 neurons and they're always wired up the same way (which we know).

Yes, it's an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual information to base the inferences on. I'm unconvinced that "the sim never needs anything even remotely close to atomic information" given that the (simulated or not) world we're in appears to contain particle accelerators and the like, but let's suppose you're right and that nothing finer-grained than simple neuron simulations is needed; you're still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person. But it's worse, because there are lots of people and they all interact with one another and those interactions are probably where our best hope of getting the information we need for the approximate inference problems comes from -- so now we have to do careful joint simulations of lots of people and optimize all their parameters together. And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it's all just a colossal challenge and I really don't think waving your hands and saying "just think of a human brain sim growing up in something like the Matrix" is on the same planet as the right ballpark for justifying a claim that it's anywhere near within reach.

Comment author: jacob_cannell 06 November 2015 05:53:40PM *  0 points [-]

One future civilization could perhaps create huge numbers of simulations. But why would it want to?

I've already answered this - because living people have a high interest in past dead people, and would like them to live again. It's that simple.

The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations.

True, but most of the additional cost boils down to a constant factor once you amortize at large scale. Recreating a single individual - very expensive. Recreating billions? That reduces to something closer to the scaling costs of simulating that many minds.

You have to figure out exactly what the dead were like

No, you don't. For example the amount of information remaining about my grandfather who died in the 1950's is pretty small. We could recover his DNA, and we have a few photos. We have some poetry he wrote, and letters. The total amount of information contained in the memories of living relatives is small, and will be even less by the time the tech is available.

So from my perspective the target is very wide. Personal identity is subjectively relative.

But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before?

You wouldn't. I think you misunderstand. You need the historical sims to recreate the dead in the first place. But once that is running, you can copy out their minds at any point. However you always need one copy to remain in the historical sim for consistency (until they die in the hist-sim).

We can't even simulate C. elegans yet, even though that only has about 300 neurons and they're always wired up the same way (which we know).

You could also say we can't simulate bacteria, but neither is relevant. I'm not familiar enough with C. elegans sims to evaluate your claim that the current sims are complete failures, but even if this is true it doesn't tell us much, because only a tiny amount of resources have been spent on that.

Just to be clear - the historical resurrection-sims under discussion will be created by large-scale AGI (superintelligence). When I say this tech isn't that far away, it's because AGI isn't that far away, and this follows shortly thereafter.

you're still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person

Hardly. You are assuming naive encoding without compression. Neural nets - especially large biological brains - are enormously redundant and highly compressible.

Look - it's really hard to accurately estimate the resources for things like this, unless you actually know how to build it. 10^15 is a reasonable upper bound, but the lower bound is much lower.

For the lower bound, consider compressing the inner monologue - which naturally includes everything a person has ever read, heard, and said (even to themselves).

200 wpm * ~500k minutes/year * 8 bits/word ≈ 100 MB/year

So that gives a lower bound of roughly 10^10 bytes for a 100 year old. This doesn't include visual information, but the visual cortex is also highly compressible due to translational invariance.
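The back-of-envelope above is easy to check mechanically. This sketch just redoes the arithmetic using the comment's own assumptions (200 wpm waking-and-sleeping-averaged, one byte per word, a 100-year lifespan - illustrative figures, not measurements):

```python
# Rough lower bound on lifetime "inner monologue" information,
# per the assumptions in the comment above.
WORDS_PER_MINUTE = 200
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 - the "~500k" figure
BYTES_PER_WORD = 1                 # 8 bits/word, a crude encoding assumption

bytes_per_year = WORDS_PER_MINUTE * MINUTES_PER_YEAR * BYTES_PER_WORD
lifetime_bytes = bytes_per_year * 100  # for a 100-year-old

print(f"{bytes_per_year / 1e6:.0f} MB/year")    # on the order of 100 MB/year
print(f"{lifetime_bytes:.1e} bytes lifetime")   # on the order of 10^10 bytes
```

Note the yearly figure only comes out near 100 MB if the "500k" in the formula is minutes per year, not words per year.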

And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it's all just a colossal challenge and I really don't think waving your hands and saying "just think of a human brain sim growing up in something like the Matrix"

No - again naysayers will always be able to claim "these aren't really the same people". But their opinions are worthless. The only opinions that matter are those who actually knew the relevant people, and the Turing test for resurrection is entirely subjective, relative to their limited knowledge of the resurrectee.

Comment author: gjm 06 November 2015 09:07:30PM 0 points [-]

I've already answered this

But the answer you go on to repeat is one I already explained wasn't relevant, in the sentence after the one you quoted.

most of the additional cost boils down to a constant factor once you amortize at large scale.

I'm not sure what you're arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead. (The factor might well be much more than 1000.) I don't understand how this is any sort of counterargument to my suggestion that it's a reason to simulate new minds rather than old.

the amount of information remaining about my grandfather who died in the 1950's is pretty small.

You say that like it's a good thing, but what it actually means is that almost certainly we can't bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that's all. Why would you prefer that over making new minds so much as to justify the large extra expense of getting the best approximation we can?

you always need one copy to remain in the historical sim for consistency

I'm not sure what that means. I'd expect that you use the historical simulation in the objective function for the (enormous) optimization problem of determining all the parameters that govern their brain, and then you throw it away and plug the resulting mind into your not-historical simulation. It will always have been the case that at one point you did the historical simulation, but the other simulation won't start going wrong just because you shut down the historical one.

Anyway: as I said before, if you expect lots of historical simulation just to figure out what to put into the non-historical simulation, then that's another reason to think that ancestor simulation is very expensive (because you have to do all that historical simulation). On the other hand, if you expect that a small amount of historical simulation will suffice then (1) I don't believe you (if you're estimating the parameters this way, you'll need to do a lot of it; any optimization procedure needs to evaluate the objective function many times) and (2) in that case surely there are anthropic reasons to find this scenario unlikely, because then we should be very surprised to find ourselves in the historical sim rather than the non-historical one that's the real purpose.

When I say this tech isn't that far away, it's because AGI isn't that far away, and this follows shortly thereafter.

Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you're outrageously overconfident about what's going to happen on what timescales. We don't know whether, or when, AGI will be achieved. We don't know whether when it is it will rapidly turn into way-superhuman intelligence, or whether that will happen much slower (e.g., depending on hardware technology development which may not be sped up much by slightly-superhuman AGI), or even whether actually the technological wins that would lead to very-superhuman AGI simply aren't possible for some kind of fundamental physical reason we haven't grasped. We don't know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don't value at all.

You are assuming naive encoding without compression

No, I am assuming that smarter encoding doesn't buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.

that gives a lower bound of 10^10 for a 100 year old

Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.

naysayers will always be able to claim "these aren't really the same people". But their opinions are worthless. The only opinions that matter are those who actually knew the relevant people

What makes you think those are different people's opinions? If you present me with a simulated person who purports to be my dead grandfather, and I learn that he's reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather. Perhaps I will have no way of telling the difference (since my own reactions on interacting with this simulated person can be available to the optimization process -- if I don't mind hundreds of years of simulated-me being used for that purpose) but there's a big difference between "I can't prove it's not him" and "I have good reason to think it's him".

Comment author: Lumifer 04 November 2015 04:57:55PM 0 points [-]

We are more likely simulations if B is right than if A is right.

And how does that follow?

Comment author: gjm 05 November 2015 12:13:29AM 0 points [-]

"Follow" is probably an exaggeration since this is pretty handwavy, but:

First of all, a clarification: I should really have written something like "We are more likely accurate ancestor-simulations ..." rather than "We are more likely simulations". I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.

Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.

So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.

So, if B then some instances of us are probably ancestor-sims, and if A then probably not.

So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).

Extreme case: if we somehow know not A but the much stronger A': "A society just like ours will never lead to any sort of ancestor-sims" then we can be confident of not being accurate ancestor-sims.

(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn't tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears') descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
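The inequality above can be made concrete with a toy self-location count. The sketch below assumes each non-simulated world spawns some number of accurate ancestor-simulations of itself; the specific numbers are illustrative assumptions, not estimates:

```python
def pr_ancestor_sim(n_sims_per_world: float) -> float:
    """Fraction of 'people like us' who are ancestor-sims, if each
    non-simulated world spawns n_sims_per_world accurate ancestor-sims
    of itself (toy counting model, uniform self-location)."""
    return n_sims_per_world / (n_sims_per_world + 1)

# Hypothesis B: societies like ours typically run many ancestor-sims.
p_given_B = pr_ancestor_sim(1000)   # ~0.999
# Hypothesis A: they typically run none, or almost none.
p_given_A = pr_ancestor_sim(0.01)   # ~0.0099

# Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A)
assert p_given_B > p_given_A
```

In the extreme case A' (zero sims per world), `pr_ancestor_sim(0)` is exactly 0, matching the "we can be confident of not being accurate ancestor-sims" conclusion.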

Comment author: Lumifer 05 November 2015 05:04:55AM -1 points [-]

Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we do can make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.

Comment author: gjm 05 November 2015 10:02:47AM 2 points [-]

That might be highly relevant[1] if I'd made any argument of the form "If we do X, we make it more likely that we are simulated". But I didn't make any such argument. I said "If societies like ours tend to do X, then it is more likely that we are simulated". That differs in two important ways.

[1] Leaving aside arguments based on exotic decision theories (which don't necessarily deserve to be left aside but are less obvious than the fact that you've completely misrepresented what I said).

Comment author: jacob_cannell 06 November 2015 12:55:53AM 1 point [-]

Knowledge of which decisions we actually make is information which we can update our worldviews on.

Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.

Comment author: Lumifer 06 November 2015 03:05:51AM 1 point [-]

Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.

What do you mean, "works in practice"?

Comment author: crmflynn 05 November 2015 01:12:08PM 1 point [-]

Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we do can make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.

I am guessing you two-box in the Newcomb paradox as well, right? If you don’t, then you might take a second to realize you are being inconsistent.

If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right, it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a type of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that “there’s nothing we can do to increase our chance of being simulated” like they ignore the second box.

If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but it will have evidential value still.

Comment author: Lumifer 05 November 2015 04:58:24PM 0 points [-]

I am guessing you two-box in the Newcomb paradox as well, right?

Yes, of course.

a lot of people do not

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Comment author: crmflynn 10 November 2015 01:29:17PM 1 point [-]

I don't think this is true. The correct version is your following sentence:

A lot of people on LW do not

People on LW, of course, are not terribly representative of people in general.

LW is not really my personal sample for this. I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2/3 of people two-box. Nozick, who popularized this, said he thought it was about 50/50. While it is again not representative, of the thousand people who answered the question in this survey, it was about equal (http://philpapers.org/surveys/results.pl). For people with PhD’s in Philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there was a Pew Survey, I suspect, especially given the success of Calvinism, magical thinking, etc. that there are a substantial minority of people who would one-box.

What matters, as an empirical matter, is that they exist.

I agree that such people exist.

Okay. Can you see how they might take the approach I have suggested they might? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?

If we want to belong to the type of species

Hold on, hold on. What is this "type of species" thing? What types are there, what are our options?

As a turn of phrase, I was referring to two types. One that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars, they are expressing a desire to be “that type of species.” Not sure what confused you here….

And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.

Nope, sorry, I don't find this reasoning valid.

If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?

I am not asking you to think that the actual odds have changed in real time, I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.

If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?
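The Monty Hall point - that new information should shift your credence even though nothing about the underlying setup changed - is easy to verify by simulation. This is just the standard illustration, nothing specific to the simulation argument:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall: prize behind a random door, player picks
    a random door, host opens a door hiding a goat, player stays or switches."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

n = 100_000
stay_rate = sum(monty_hall_trial(False) for _ in range(n)) / n    # ~1/3
switch_rate = sum(monty_hall_trial(True) for _ in range(n)) / n   # ~2/3
print(stay_rate, switch_rate)
```

The car never moves; only the player's information changes, and the correct credence moves with it - which is the sense of "updating" being argued for here.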

it will have evidential value still.

Still nope. If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?

Beliefs can cause people to do things, whether that be go to war or build expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?

One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) make it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.” Same as if you were currently living in Western Iraq, you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.

Comment author: jacob_cannell 07 November 2015 10:48:41PM 0 points [-]

Our world is either simulated or not.

Not quite. In the sim case, we along with our world exist as multiple copies - one original along with some number of sims. It's really important to make this distinction, it totally changes the relevant decision theory.

If our world is not simulated, there's nothing we do can make it simulated. We can work towards other simulations, but that's not us.

No - because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.

Comment author: Lumifer 08 November 2015 05:34:10AM 0 points [-]

Not quite.

Whether it's not quite or yes quite depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don't.

we exist as a set of copies

Do you state this as a fact?

Comment author: jacob_cannell 08 November 2015 06:07:09AM 0 points [-]

Actually the sim argument doesn't depend on fuzzy smeared out identity. The copy issue is orthogonal and it arises in any type of multiverse.

we exist as a set of copies

Do you state this as a fact?

It is given in the sim scenario. I said this in reply to your statement "there's nothing we do can make it simulated".

The statement is incorrect because we are uncertain on our true existential state. And moreover, we have the power to change that state. The first original version of ourselves can create many other copies.

Comment author: V_V 09 November 2015 03:33:21PM *  0 points [-]

I think that the problem with this sort of arguments is that it's like cooperating in prisoner's dilemma hoping that superrationality will make the other player cooperate: It doesn't work.

It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory.

Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, run after my real death as a form of afterlife. How would it be functionally different than screening a movie of my life?

Comment author: gjm 09 November 2015 08:23:14PM 1 point [-]

My understanding is that the proposal here isn't that an accurate simulation of your life should be counted as an afterlife; it's that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they'd be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful -- or for that matter torturous -- afterlife might be provided for you).

As for Newcomb versus prisoners' dilemma, see my comments elsewhere in the thread: I am not proposing that our decision whether to engage in large-scale ancestor simulation has any power to affect our past, only that it may provide some evidence bearing on what's likely to have been in our past.

Comment author: crmflynn 10 November 2015 11:21:44AM *  0 points [-]

the proposal here

I just want to clarify in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all a part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the same skeptical intuitions as other posters in this thread about this. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have died. I suspect that it would be a poor replica, and certainly would not contain the same internal life as the person. Again, I am open to being convinced, but nothing about that makes sense to me at the moment.

I think that I did a poor job of making this clear in my first post, and have added a short note at the end to clarify this. You might consider reading it as it should make my argument clearer.

My proposal is far less interesting, original, or involved than this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife just as we shift people in the sims over into an afterlife after they die. This also circumvents teletransportation concerns (which would still exist if we were uploading ourselves into a simulation of our own!) since everything we are now would just be brought over to the afterlife part of the simulation fully intact.

Comment author: V_V 09 November 2015 10:43:54PM 0 points [-]

My understanding is that the proposal here isn't that an accurate simulation of your life should be counted as an afterlife; it's that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they'd be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful -- or for that matter torturous -- afterlife might be provided for you).

Or they are just interested in the password needed to access the cute cat pictures on my phone. Seriously, we are in the realm of wild speculation, we can't say that evidence points any particular way.

Comment author: crmflynn 10 November 2015 10:59:27AM 0 points [-]

I hope I am not intercepting a series of questions when you were only interested in gjm’s response but I enjoyed your comment and wanted to add my thoughts.

I think that the problem with this sort of arguments is that it's like cooperating in prisoner's dilemma hoping that superrationality will make the other player cooperate: It doesn't work.

I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on the assumption that it does. The first part does not even rely on the assumption that one-boxing is reasonable, let alone correct. All it says is that, so long as some people play the game this way, as an empirical, descriptive matter of how they actually play, we are more likely to see certain outcomes in situations that look like Newcomb's problem. This looks like Newcomb's problem.
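To make the payoff arithmetic behind one-boxing concrete, here is a toy sketch. The standard $1,000 / $1,000,000 payoffs are assumed, and the predictor's accuracy `p` is a free parameter (both are my assumptions for illustration, not anything specific to the argument above):

```python
# Toy expected-value calculation for Newcomb's problem.
# Assumed payoffs: transparent box $1,000; opaque box $1,000,000 or empty.
# p = probability the predictor correctly predicts your choice.

def one_box_ev(p):
    # If the predictor is right (prob. p), the opaque box holds $1,000,000.
    return p * 1_000_000

def two_box_ev(p):
    # Predictor right (prob. p): opaque box is empty, you get only $1,000.
    # Predictor wrong (prob. 1 - p): you get both boxes, $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# One-boxing has the higher expected value whenever p > 0.5005
# (solving p * 1,000,000 > p * 1,000 + (1 - p) * 1,001,000).
for p in (0.5, 0.9, 0.99):
    print(p, one_box_ev(p), two_box_ev(p))
```

Even a modestly accurate predictor is enough to tilt the expected value toward one-boxing, which is why the empirical fact that some people one-box matters regardless of whether one-boxing is "correct."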

There is also a second argument further down that suggests that under some circumstances with really high reward, and relatively little cost, that it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more.

It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory.

I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.

Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, run after my real death as a form of afterlife. How would it be functionally different from screening a movie of my life?

So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations of non-specific, unique people within the simulation. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation.

Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we will have more reason to think it likely that one-boxer/altruist/acausal-trade types “above” us have similarly created many simulations, of which we are one. Our doing it here should increase our credence that people “like us” have done it “above” us.

Comment author: crmflynn 05 November 2015 12:55:57PM *  0 points [-]

Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it with people like you, that is, fresh people de novo and possibly people from your parents' and grandparents' generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?

What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, they should run simulations that conform with that.

No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.

What the people “above” us, if they exist, believe absolutely does change reality.

What Omega believes changes reality. People one-box anyway.

Whom the Calvinist God has allegedly predestined determines reality. People go to church, pray, etc. anyway.

If we are “the type of species” who builds simulations that we would like to be in, we are much more likely to be a species by-and-large who inhabits simulations which we want to be in.

Comment author: Lumifer 05 November 2015 04:50:21PM -2 points [-]

What the people “above” us, if they exist, believe absolutely does change reality.

And so we are back to the idea of gods.

Comment author: jacob_cannell 06 November 2015 12:56:11AM 1 point [-]

Sure - and nothing wrong with that.