Lumifer comments on Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife - Less Wrong Discussion
I am not sure what the take-away from this idea is. If it is
then, well, increasing credence from 0.0...001% to 0.0...01% is a jump of an order of magnitude, but it still doesn't move the needle, leaving the probability in the "vanishingly small" realm.
If it is that we should strive to build such simulations, there are a few issues with this call to action, starting with the observation that at our technological level there isn't much we can do right now, and ending with the warning that if many people want to build Heavens, some people will want to build Hells as well.
Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.
I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis: I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” shows some odd properties of evidence. It is a bit like if I were standing outside the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased by ~$1,000; if I see someone walk out with one unopened box, I expect it has increased by ~$1,000,000. That is sort of odd, isn’t it? If I see a small, dedicated group of people working on how they would structure simulations, and raising money and trusts to push things a certain political way in the future (laws requiring that all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information-gathering purposes without “retiring” the people to a heaven afterward, etc.), I have more reason to think I might get a heaven after I die.
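The "boxes outside the room" observation can be made concrete with a little arithmetic (a sketch using the standard Newcomb payoffs; Omega's predictive accuracy below is an assumed parameter, not from the thread): observing how many boxes someone carries out tells you what Omega almost certainly predicted, and hence what they almost certainly got.

```python
# Expected winnings conditional on observing how many boxes were taken.
# Standard Newcomb payoffs: the transparent box always holds $1,000; the
# opaque box holds $1,000,000 iff Omega predicted the person would one-box.
def expected_gain(boxes_taken, accuracy=0.99):
    if boxes_taken == 1:
        # the opaque box is full whenever Omega predicted correctly
        return accuracy * 1_000_000
    else:
        # $1,000 for sure, plus the opaque million only if Omega erred
        return 1_000 + (1 - accuracy) * 1_000_000
```

Under these assumed numbers, the one-box observation implies roughly $990,000 in expectation and the two-box observation roughly $11,000, which is the oddness being pointed at: the observation is evidence about a prediction made before the choice.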
As far as the “call to action” I hope that my post was not really read that way. I might have been clearer, and I apologize. I think that running simulations followed by an afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done, for many reasons. It is worth discussing. One could also imagine that it might be determined, if we overcome and survive the AI intelligence explosion with a good outcome, that it is a worthwhile goal to create more human lives, which are pleasant, throughout our cosmological endowment. Sending off von Neumann probes to build simulations like this might be a live option. Honestly, it is an important question to figure out what we might want from a superintelligent AI, and especially whether we might want to not just hand it the question. Coherent extrapolated volition sounds like the best tentative idea, but one we need to be careful with. For example, AI might only be able to produce such a “model” of what we want by running a large number of simulated worlds (to determine what we are all about). If we want simulated worlds to end with a “retirement” for the simulated people in a pleasant afterlife, we might want to specify it in advance; otherwise we are inadvertently reducing the credence we have of our own afterlife as well. Also, if there is an existent acausal trade regime on heaven simulations (this will be another post later) we might get in trouble for not conforming in advance.
As far as simulated hell, I think that fear of this as a possibility keeps the simulated heaven issue even more alive. Someone who would like a pleasant afterlife (which is probably almost all of us) might want to take efforts early to ensure that such an afterlife is the norm in cases of simulation, and “hell” absolutely not permitted. Also, the idea that some people might run bad afterlives should probably further motivate people to try to also create as many good simulations as possible, to increase credence that “we” are in one of the good ones. This is like pouring white marbles into the urn to reduce the odds of drawing the black one. You can see why the “loop” aspect of this can be kind of interesting, especially for one-boxer types, who try to “act out” the correct outcome after-the-fact. For one-boxers, this could be, from a purely and exclusively selfish perspective, the best thing they could possibly do with their life. Increasing the odds of a trillion-life-duration afterlife of extreme utility from 0.001 to 0.01 might be very selfishly rational.
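The marble analogy is just a ratio; a minimal sketch of the arithmetic:

```python
from fractions import Fraction

def p_bad_draw(black, white):
    """Chance of drawing the single 'bad' (black) marble from the urn."""
    return Fraction(black, black + white)

before = p_bad_draw(1, 9)    # one hell among ten simulations: 1/10
after = p_bad_draw(1, 99)    # pour in 90 more heavens: 1/100
```

Each good simulation added is a white marble: it cannot remove the bad one, but it shrinks the chance that "we" drew it.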
I am not trying to "sell" this, as I have not even bought it myself, I am just sort of playing with it as a live idea. If nothing else, this seems like it might have some importance on considerations going forward. I think that people’s attitudes and approaches to religion suggest that this might be a powerful force for human motivation, and the second disjunct of the simulation argument shows that human motivation might have significant bearing both on our current reality, and on our anticipated future.
Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".
I would treat it as a category error: ideas are not evidence. Even if they look "evidence-like".
Why would future superpowerful people be interested in increasing your credence?
Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead of leaving us in doubt?".
I think you and I might be missing one another. Or that I am at least missing your point. Accordingly, my responses below might be off point. Hopefully they are not.
I don’t think that necessarily follows. Creationism implies divinity, and “gods” implies something bigger than people who build a machine. Are your parents gods for creating you? In my own estimate, creating a simulation is like founding a sperm bank; you are not really “creating” anything, you are just moving pieces around in a way that facilitates more lives. You can mess around with the life and the world, but so can anyone in real life, especially if they have access to power, or guns, or a sperm bank, again, for that matter. It is different in scale, but not in type. Then again, I might be thinking too highly of “gods”?
Also, I get the impression, and apologies if I am wrong, that you are mostly trying to show “family resemblance” with something many of us are skeptical of or dislike. I am atheist myself, and from a very religious background which leaves me wary. However, I think it is worth avoiding a “clustering” way of thinking. If you don’t want to consider something because of who said it, or because it vaguely or analogously resembles something you dislike, you can miss out on some interesting stuff. I think I avoided AI, etc. too long because I thought I did not really like “computer things” which was a mistake that cost me some great time in some huge, wide open, intellectual spaces I now love to run around in.
I might be missing what you are saying, but I do not think I was saying that ideas were evidence. I was saying a group of people rallying around an idea could be a form of evidence. In this case, the “evidence” is that a lot of people might want something. What this is evidence of is that them wanting something makes it more likely that it will come about. I am not sure how this would fail as evidence.
Two things: 1) They are not interested in the credence of people in the simulations, they are interested in their own credence. So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation. If I know that 99% of all simulations are good ones, it makes me think I am more likely in a world with good simulations. If I know that 90% of simulations are terrible, I am more likely to think that I am in a terrible simulation. The odd thing is that people are sort of creating their own evidence. This is why I mentioned Calvinism and “irresistible grace” as analogy. Also Newcomb. Creating nice simulations in the hopes of being in one is like taking one box, or attending Calvinist church regularly and abiding by the doctrines. More to the point for people who two-box and roll their eyes at Calvinists, knowing that there are Calvinists means that we know that some people might try to make simulations in order to try to be in one.
2) I am not sure where “superpowerful” comes from here. I think you might be making assumptions about my assumptions. These simulations might be left unobserved. They might be made by von Neumann probes on distant Dyson spheres. I actually think that people motivated by one-boxing/Calvinist type interpretations are more likely to try to keep simulations unmolested.
I don’t think the question is the same. In particular, I am not solving for “why has god not revealed himself” or even “why haven’t I been told I am in a simulation.” I am just pulling at the second disjunct and its implications. In particular I am looking at what happens if one-boxer types decide they want a simulated afterlife.
Why would people run simulations? Maybe research or entertainment (suggested in the original article). Maybe to fulfill (potentially imaginary) acausal trade conditions (I will probably post on this later). Maybe altruism. Maybe because they want to believe they are in a simulation, and so they make the simulation look just like their world looks, but add an afterlife. They do this in the hopes that it was done “above” them the same way, and they are in such a simulation. They do it in the hopes of being self-fulfilling, or performative, or for whatever reason people one-box and believe in Calvinism.
Not for the sims who live inside the machine. Let me recount once again the relevant features:
These beings look very much like gods to me. The "not bound by our physics", in particular, decisively separates them from sims who, of course, do affect their world in many ways.
That it will come about, yes. That it is this way, no. But that's the whole causality/Newcomb issue.
Makes you think so, but doesn't make me think so. Again, this is the core issue here.
One-boxers want it today, right now? Um, nothing happens.
I think that this sort of risks being an argument about a definition of a word, as we can mostly agree on the potential features of the set-up. But because I have a sense that this claim comes with an implicit charge of fideism, I’ll take another round at clarifying my position. Also, I have written a short update to my original post to clarify some things that I think I was too vague on in the original post. There is a trade-off between being short enough to encourage people to read it, and being thorough enough to be clear, and I think I under-wrote it a bit initially.
They did not really “create” this world so much as organize certain aspects of the environment. Simulated people still exist in a physical world, albeit as things in a computer. The fact that the world as the simulated people conceive of it is not what it appears to be happens to us as well when we dig into physics and everything becomes weird and unfamiliar. If I am in the environment of a video game, I do not think that anyone has created a different world, I just think that they have created a different environment by arranging bits of the pre-existing world.
Is something a miracle if it is clear in physical terms how it happened? If there is a simulation, then the physics is a replica of physics, and “defying” it is not really any more miraculous than me breaking Mars off of a diorama of the solar system.
Everyone can do that. I do that by moving a cup of coffee from one place to another. In a more powerful sense, political philosophers have dramatically determined how humans have existed over the last 150 years. Human will shapes our existences a great deal already.
I think that for you, “gods” emerge as a being grows in power, whereas I tend to think that divinity implies something different not just in scale, but in type. This might just be a trivial difference in opinion or definition or approach to something with no real relevance.
I agree with you that this is the core issue. What I think you might be missing, though I could be wrong, is that I am agnostic on this point in the post, being careful to keep my own intuition out of it. I am not saying that one-boxers believing this necessarily have any effect on our current, existent, reality. What I am saying is two things: 1) Some one-boxers think that it does, and accordingly will be more likely to push for simulations and 2) Knowing that some people will be likely to push for simulations should make even two-boxers think that it is more likely we are in one. If the world was made up exclusively of two-boxers, it would be less likely that people would try to create simulations with heaven-like afterlives. If the world was all one-boxers, it would be more likely. As we are somewhere in between, our credence should be somewhere in between. This is just about making an educated guess about human nature based on how people interact with similar problems. Since human nature is potentially causal on whether or not there are simulations, information that changes our views on the likelihood of a decision one way or another on simulations is relevant to our credence.
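The "somewhere in between" claim can be written down as a toy model (entirely my construction, with made-up endpoint numbers): let the chance that a civilization builds heaven-simulations interpolate between the all-two-boxer world and the all-one-boxer world according to the population mix.

```python
# Toy interpolation. p_all_two and p_all_one are assumed endpoint chances
# that a civilization of pure two-boxers / pure one-boxers builds
# heaven-simulations; the real values are anyone's guess.
def p_simulations_built(frac_one_boxers, p_all_two=0.01, p_all_one=0.5):
    return p_all_two + frac_one_boxers * (p_all_one - p_all_two)
```

Whatever the endpoints, the point survives: credence is monotone in the fraction of one-boxer types, so evidence about human nature moves it.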
Whether one-boxers here, today, want it is not really the relevant consideration, especially to a two-boxer. However, if there are a lot of one-boxers, who make a lot of simulations, it should increase the two-boxer's credence that he or she is in a simulation created by a one-boxer “one level up.” As a two-boxer, the relevant thing is not that THESE one-boxers are causing anything, but that the existence of people who do this might suggest the existence of people who have done this before, “one level up.”
That's what creation is. The issue here is inside view / outside view. Take Pac-Man. From the outside, you arranged bits of existing world to make the Pac-Man world. From the inside, you have no idea that such things as clouds, or marmosets, or airplanes exist: your world consists of walls, dots, and ghosts.
Outside/inside view again. If I saw Mars arbitrarily breaking out of its orbit and go careening off to somewhere, that would look pretty miraculous to me.
I agree about the difference in type. It is here: these beings are not of this world. The difference between you and a character in an MMORPG is a difference in type.
Re one/two-boxers, see my answer to the other post...
I agree with you about the inside / outside view. I also think I agree with you about the characteristics of the simulators in relationship to the simulation.
I think I just have a vaguely different, and perhaps personal, sense of how I would define "divine" and "god." If we are in a simulation, I would not consider the simulators gods. Very powerful people, but not gods. If they tried to argue with me that they were gods because they were made of a lot of organic molecules whereas I was just information in a machine, I would suggest it was a distinction without a difference. Show me the uncaused cause or something outside of physics and we can talk.
There is a classic answer to this :-/
In the context of the simulated world uncaused causes and breaking physics are easy. Hack the simulation, write directly to the memory, and all things are possible.
It's just the inside/outside view again.
We live in a very special time - right on the cusp of AGI - so there is much that one can do right now. ;)
AGI has been 20 years away for the past 50 years or so. I see no reason to believe the pattern will break any time now :-/
No - AGI's arrival can be expected around the end of conventional Moore's Law, as that is naturally when we can expect to have brain level hardware performance. Before that AGI is impractical, shortly after that it becomes inevitable.
There are a large number of people making predictions, almost all of them have no idea what they are talking about. It is the logic behind the predictions that matter.
I don't think our progress in creating an AGI is constrained by hardware at this point. It's a software problem and you can't solve it by building larger and more densely packed supercomputers.
Yep :-)
That is arguably just now becoming true for the first time - as we approach the end of Moore's Law and device geometry shrinks to synapse-comparable sizes, densities, etc.
Still, current hardware/software is not all that efficient for the computations that intelligence requires - namely, enormous amounts of low-precision/noisy approximate computing.
Of course you can - it just wouldn't be economical. AGI running on a billion dollar super computer is not practical AGI, as AGI is AI that can do everything a human can do but better - which naturally must include cost.
It isn't a problem of what math to implement - we have that figured out. It's a question of efficiency.
Why not? AGI doesn't involve emulating Fred the janitor, the first AGI is likely to have a specific purpose and so will likely have huge advantages over meatbags in the particular domain it was made for.
If people were able to build an AGI on a billion-dollar chunk of hardware right now they would certainly do so, if only as a proof of concept. A billion isn't that much money to a certain class of organizations and people.
Oh, really? I'm afraid I find that hard to believe.
Say you have the code/structure for an AGI all figured out, but it runs in real-time on a billion dollar/year supercomputer. You now have to wait decades to train/educate it up to an adult.
Furthermore, the probability that you get the seed code/structure right on the first try is essentially zero. So rather obviously - to even get AGI in the first place you need enough efficiency to run one AGI mind in real-time on something far far less than a supercomputer.
Hard to believe only for those outside ML.
I don't think that even in ML the school of "let's just make a bigger neural network" is taken seriously.
Neural networks are prone to overfitting. All the modern big neural networks that are fashionable these days require large amounts of training data. Scale up these networks to the size of the human brain, and, even assuming that you have the hardware resources to run them, you will get something that just memorizes the training set and doesn't perform any useful generalization.
Humans can learn from comparatively small amounts of data, and in particular from very little and very indirectly supervised data: you don't have to show a child a thousand apples and push each time an "apple" button on their head for them to learn what an apple looks like.
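The overfitting point above is easy to demonstrate with a toy curve fit (my illustration, not from the thread, and no neural network involved): give a model enough capacity to memorize a handful of noisy samples and its training error collapses, while its error on fresh points from the same underlying curve does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 noisy samples of a sine curve: our tiny "training set"
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(8)

# A degree-7 polynomial has 8 coefficients: exactly enough capacity to pass
# through all 8 points, i.e. to memorize the training set.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)
train_err = np.mean((overfit(x_train) - y_train) ** 2)

# On fresh points from the underlying (noiseless) curve, the memorizer does
# worse: it has fit the noise, not the function.
x_test = np.linspace(0.05, 0.95, 50)
test_err = np.mean((overfit(x_test) - np.sin(2 * np.pi * x_test)) ** 2)
```

The child, by contrast, generalizes from a handful of apples; the worry in the comment is that naively scaling a memorizer scales the memorization, not the generalization.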
There is currently lots of research in ML on how to make use of unsupervised data, which is cheaper and more abundant than supervised data, but this is still definitely an open problem, so much so that it isn't even clear what properties we want to model and how to evaluate these models (e.g. check out this recent paper).
Therefore, the math relevant to ML has definitely not been all worked out.
That's not actually what I meant when I said we have the math figured out. The math behind general learning is just general Bayesian inference in its various forms. The difficulty is not so much in the math, it is in scaling up efficiently.
To a first approximation the recent surge in progress in AI is entirely due to just making bigger neural networks. As numerous DL researchers have admitted - the new wave of DL is basically just techniques from the 80's scaled up on modern GPUs.
Regarding unsupervised learning - I wholeheartedly agree. However one should also keep in mind that UL and SL are just minor variations on the same theme in a Bayesian framework. If you have accurate labeled data, you might as well use it.
In order to recognize and verbally name apples, a child must first have years of visual experience. Supervised DL systems trained from scratch need to learn everything from scratch, even the lowest-level features. The objective in these systems is not to maximize learning from small amounts of training data.
In the limited training data domain and more generally for mixed datasets where there is a large amount of unlabeled data, transfer learning and mixed UL/SL can do better.
Just discussing that here.
The only real surprising part of that paper is the "good model, poor sampling" section. It's not clear how often their particular pathological special case actually shows up in practice. In general a Solomonoff learner will not have that problem.
I suspect that a more robust sampling procedure could fix the mismatch. A robust sampler would be one that outputs samples according to their total probability as measured by encoding cost. This corrects the mismatch between the encoder and the sampler. Naively implemented this makes the sampling far more expensive, perhaps exponentially so, but nonetheless it suggests the problem is not fundamental.
How would you know that you have it "all figured out"?
Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem?
Right... build a NN a mile wide and a mile deep and let 'er rip X-/
No, I never said it is not a software issue - because the distinction between software/hardware issues is murky at best, especially in the era of ML where most of the 'software' is learned automatically.
You are trolling now - cutting my quotes out of context.
I am not sure it matters when it comes. Presumably, unless we find some other way to extinction, it will come at some point. When it comes, it is likely that the technology will not be a problem for it. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV) that will likely have a large effect on the odds of it happening. Also, I know some people who sincerely think belief in god is based almost exclusively on fear of death. I am skeptical of this, but if it is true, or even partially true, if even a fraction of the fervor/energy/dedication that is put into religion was put into pushing for this, I think it might be a serious force.
The point about credence is just a point about it being interesting, decision making aside, that something as fickle as collective human will, might determine if I “survive” death, and if all my dead loved ones will as well. So, for example, if this post, or someone building off of my post, but doing it better, were to explode on LW and pour out into reddit and the media, it should increase our credence in an afterlife. If its reception is lukewarm, decrease it. There is something really weird about that, and worth chewing on.
Also, I think that people’s motivation to have an afterlife seems like a more compelling reason to create simulations than experimentation/entertainment, so it helps shift credence around among the four disjuncts of the simulation argument.
Simulations of long-ago ancestors..?
Imagine that you have the ability to run a simulation now. Would you want to populate it by people like you, that is, fresh people de novo and possibly people from your parents and grandparents generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?
No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.
It can give evidence, though. Consider Hypothesis A: "Societies like ours will generally not decide, as their technological capabilities grow, to engage in massive simulation of their forebears" and Hypothesis B which omits the word "not". Then:
Similarly if the hypotheses are "... to engage in massive simulation of their forebears, including blissful afterlives", in which case we are more likely to have blissful simulated afterlives if B is right than if A is right. (Not necessarily more likely to have blissful afterlives simpliciter, though -- perhaps, e.g., the truth of B would somehow make it less likely that we get blissful afterlives provided by gods.)
My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.
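The "very weak evidence" claim can be put in numbers (the prior and likelihood ratio below are made up for illustration, not gjm's figures): in the odds form of Bayes' rule, a small likelihood ratio applied to very long prior odds barely moves the posterior.

```python
def posterior(prior, likelihood_ratio):
    """P(hypothesis | evidence), computed via the odds form of Bayes' rule."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Assume a prior of 1 in 10,000 for hypothesis B, and suppose enthusiastic
# takeup of the idea is twice as likely under B as under A (a generous ratio
# for "very weak evidence").
p = posterior(1e-4, 2.0)  # still only about 0.0002
```

A doubling of long odds is still long odds, which is why enthusiastic takeup would justify "at most a tiny increase" in credence.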
I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.
I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have: 1) Issues with the theory of identity 2) Issues with theory of mind 3) Issues with theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, problem of evil) 4) Self-interest (more resources for existing individuals to upload into and utilize) 5) The existence of a convincing two-boxer “proof” of some sort
I also would like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly? I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?
Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.
I'm not sure I have much to say that you won't have thought of already. But: First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.
Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world? Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?
Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's just no reconstructing our ancestors at this point.
Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?
Absolutely. I think this is where this thing most likely fails. Somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it is the case that it leans me heavily into this.
So I am not saying that they WOULD do it; I actually can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included originally). Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence of an afterlife in which those already dead can join in. This is clearer I think in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt.
Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulations afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two ones that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual afterlife” of the simulation computer we are in.
As it is likely that the technology for simulations will come about at about the same time as for a “glorious virtual universe” we could even treat it as our last big hurrah before we upload ourselves. This makes sense as the people who exist when this technology becomes available will know a large number of loved ones who just missed it. They will also potentially be in especially imminent fear of the teletransportation paradox. I do not think there is any inherent conflict between doing both of these things.
Just to be clear, I am not talking about our actual individual ancestors. I actually avoided using the term intentionally as I think it is a bit confusing. I am pretty sure this is how Bostrom meant it as well in the original paper, with the word “ancestor” being used in the looser sense, like how we say “Homo erectus were our ancestors.” That might be my misinterpretation, but I do not think so. While I could be convinced, I am personally, currently, very skeptical that it would be possible to do any meaningful sort of replication of a person after they die. I think the only way that someone who has already died has any chance of an afterlife is if we are already in a simulation. This is also why my personal, atheistic mind could be susceptible to donating to such a cause when in grief. I wrote an update to my original post at the bottom where I clarify this. The point of the simulation is to change our credence regarding our self-location. If the vast majority of “people like us” (which can be REALLY broadly construed) exist in simulations with afterlives, and do not know it, we have reason to think we might also exist in such a simulation. If this is still not clear after the update, please let me know, as I am trying to pin down something difficult and am not sure if I am continuing to privilege brevity to the detriment of clarity.
I agree with your point so strongly that I am a little surprised to have been interpreted as meaning this. It seems theoretically feasible to simulate a world full of individual people as they advance their way up from simple stone tools onward, each with their own unique life and identity, each existing in a unique world with its own history. Trying to somehow make this the EXACT SAME as ours does not seem at all possible. I also do not see what the advantage would be: it is not more informative or helpful for our purposes to know whether we are the same as the people above us, so why would we try to “send that down” below us? We do not care about that as a feature of our world, and so would have no reason to try to instill it in the worlds below us. There is a sort of “golden rule” aspect to this, in that you do to the simulation below you the best feasible, reality-conforming version of what you want done to you.
Maybe? I think that one of the interesting parts about this is where we would choose to draw policy lines around it. Do dogs go to the afterlife? How about fetuses? How about AI? What is heaven like? Who gets to decide this? These are all live questions. It could be that they take a consequentialist hedonistic approach that is mostly neutral about “who” gets the heaven. It could be that they feel obligated to go back further, in gratitude to all those (“types”) who worked for advancement as a species and made their lives possible. It could be that we are actually not too far from superintelligent AI, and that this is going to become a live question in the next century or so, in which case “we” are that class of people they want to simulate in order to increase their credence that others similar to us (their relatives, friends who missed the revolution) are being simulated.
As far as how far back you bother to simulate people, it might actually be easier to start off with some very small bands of people in a very primitive setting than to try to go through and make a complex world for people to “start” in without the benefit of cultural knowledge or tradition. It might even be that the “first people” are based on some survivalist, back-to-basics hobbyist types who volunteered to be emulated, copied, and placed in different combinations in primitive earth environments in order to live simple hunter-gatherer lives and have their children go on to populate an earth (possible date of start? https://en.wikipedia.org/wiki/Population_bottleneck). That said, this is deep into the weeds of extremely low-probability speculation. Fun to do, but increasingly meaningless.
Yes, but that isn't enough to defeat simulations. One successful future can create a huge number of sims. Observational selection effects thus make survival far more likely than otherwise expected.
Even without quantum computing or reversible computing, even just using sustainable resources on earth (solar) - even with those limitations - there are plenty of resources to create large numbers of sims.
The cost is about the same either way. So the question is one of economic preferences. When people can use their wealth to create either new children or bring back the dead, what will they do? You are thus assuming there will be very low demand for resurrecting the dead vs creating new children. This is rather obviously unlikely.
This technology probably isn't that far away - it is a 21st century tech, not 25th. It almost automatically follows AGI, as AGI is actually just the tech to create minds - nothing less. Many people alive today will still be alive when these sims are built. They will bring back their loved ones, who then will want to bring back theirs, and so on.
Most people won't understand or believe it until it happens. But likewise very few people actually understand how modern advanced rendering engines work - which would seem like magic to someone from just 50 years ago.
It's an approximate inference problem. The sim never needs anything even remotely close to atomic information. In terms of world detail levels it only requires a little more than current games. The main new tech required is just the large scale massive inference supercomputing infrastructure that AGI requires anyway.
It's easier to understand if you just think of a human brain sim growing up in something like the Matrix, where events are curiously staged and controlled behind the scenes by AIs.
The opinion-to-reasons ratio is quite high in both your comment and mine to which it's replying, which is probably a sign that there's only limited value in exploring our disagreements, but I'll make a few comments.
One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as "why would it create any?".)
The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it's easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it's possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where's the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that's one more reason to think that resurrecting them is terribly expensive.)
(If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they'll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.)
You say with apparent confidence that "this technology probably isn't that far away". Of course that could be correct, but my guess is that you're wronger than a very wrong thing made of wrong. We can't even simulate C. elegans yet, even though it has only about 300 neurons and they're always wired up the same way (which we know).
Yes, it's an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual information to base the inferences on. I'm unconvinced that "the sim never needs anything even remotely close to atomic information" given that the (simulated or not) world we're in appears to contain particle accelerators and the like, but let's suppose you're right and that nothing finer-grained than simple neuron simulations is needed; you're still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person. But it's worse, because there are lots of people and they all interact with one another and those interactions are probably where our best hope of getting the information we need for the approximate inference problems comes from -- so now we have to do careful joint simulations of lots of people and optimize all their parameters together. And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it's all just a colossal challenge and I really don't think waving your hands and saying "just think of a human brain sim growing up in something like the Matrix" is on the same planet as the right ballpark for justifying a claim that it's anywhere near within reach.
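To make the scale concrete, the parameter count can be sketched as a toy order-of-magnitude calculation. Only the ~10^15 synapses-per-person figure comes from the comment itself; the population size is an illustrative assumption of mine, not a claim from the thread:

```python
# Toy order-of-magnitude sketch. The synapse count echoes the ~10^15 figure
# in the comment above; the population size is an illustrative assumption.
SYNAPSES_PER_PERSON = 10**15   # rough parameter count for one simulated mind
SIMULATED_PEOPLE = 10**10      # assumed population whose interactions must be jointly inferred

joint_parameters = SYNAPSES_PER_PERSON * SIMULATED_PEOPLE
print(f"Joint inference problem: ~10^{len(str(joint_parameters)) - 1} parameters")
# prints: Joint inference problem: ~10^25 parameters
```

Even shaving several orders of magnitude off either assumption leaves a joint optimization problem vastly beyond anything attempted today.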
And how does that follow?
"Follow" is probably an exaggeration since this is pretty handwavy, but:
First of all, a clarification: I should really have written something like "We are more likely accurate ancestor-simulations ..." rather than "We are more likely simulations". I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.
Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.
So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.
So, if B then some instances of us are probably ancestor-sims, and if A then probably not.
So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).
Extreme case: if we somehow know not A but the much stronger A': "A society just like ours will never lead to any sort of ancestor-sims" then we can be confident of not being accurate ancestor-sims.
(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn't tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears') descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
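The step from A versus B to the inequality in Pr(we are ancestor-sims | ·) can be illustrated with a minimal Monte Carlo sketch. All the numbers here are made-up toy assumptions: the chance a base-level world like W runs ancestor-sims under each hypothesis, and how many sims such a world runs.

```python
import random

def frac_sims(p_runs_sims, sims_per_world, worlds=100_000, seed=0):
    """Fraction of observers who are ancestor-sims, in a toy model where each
    base-level world runs `sims_per_world` accurate ancestor-sims with
    probability `p_runs_sims` (counting one observer per world or sim)."""
    rng = random.Random(seed)
    base = sims = 0
    for _ in range(worlds):
        base += 1                       # the non-simulated world itself
        if rng.random() < p_runs_sims:
            sims += sims_per_world      # its ancestor-sims, if it runs any
    return sims / (base + sims)

# B: worlds like W generally lead to ancestor-sims; A: they generally do not.
p_if_b = frac_sims(p_runs_sims=0.9, sims_per_world=100)
p_if_a = frac_sims(p_runs_sims=0.001, sims_per_world=100)
print(p_if_b > p_if_a)  # prints: True
```

The toy numbers are arbitrary; the only point is structural, namely that conditioning on B rather than A raises the fraction of observers who are sims, which is all the inequality asserts.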
Consider a different argument.
Our world is either simulated or not.
If our world is not simulated, there is nothing we can do to make it simulated. We can work towards other simulations, but that's not us.
If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.
That might be highly relevant[1] if I'd made any argument of the form "If we do X, we make it more likely that we are simulated". But I didn't make any such argument. I said "If societies like ours tend to do X, then it is more likely that we are simulated". That differs in two important ways.
[1] Leaving aside arguments based on exotic decision theories (which don't necessarily deserve to be left aside but are less obvious than the fact that you've completely misrepresented what I said).
Knowledge of which decisions we actually make is information which we can update our worldviews on.
Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.
I am guessing you two-box in the Newcomb paradox as well, right? If you don’t, then you might take a second to ask whether you are being inconsistent.
If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right; it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a kind of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that “there’s nothing we can do to increase our chance of being simulated,” just as they ignore the second box.
If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but the actions will have evidential value still.
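The empirical point about one-boxers can be framed with the standard Newcomb payoffs. The $1,000 and $1,000,000 amounts are the usual toy values from the problem; the 99% predictor accuracy is an assumption for illustration:

```python
def expected_payoff(one_box, predictor_accuracy):
    """Expected Newcomb payoff for a fixed strategy, using the standard toy
    amounts: $1,000 in the transparent box, $1,000,000 in the opaque one."""
    p = predictor_accuracy
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled
        # the opaque box; otherwise it is empty.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing and left the
    # opaque box empty; otherwise the two-boxer gets both amounts.
    return p * 1_000 + (1 - p) * 1_001_000

# With a 99%-accurate predictor, one-boxing dominates in expectation
# (~$990,000 vs ~$11,000):
print(expected_payoff(True, 0.99) > expected_payoff(False, 0.99))  # prints: True
```

One-boxers treat this evidential expectation as decision-relevant even though their choice is not causally determinative, which is exactly the disposition the argument above relies on existing.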
Not quite. In the sim case, we along with our world exist as multiple copies - one original along with some number of sims. It's really important to make this distinction, it totally changes the relevant decision theory.
No - because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.
I think that the problem with this sort of argument is that it is like cooperating in the prisoner's dilemma in the hope that superrationality will make the other player cooperate: it doesn't work.
It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory.
Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, run after my real death as a form of afterlife. How would it be functionally different from screening a movie of my life?
My understanding is that the proposal here isn't that an accurate simulation of your life should be counted as an afterlife; it's that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they'd be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful -- or for that matter torturous -- afterlife might be provided for you).
As for Newcomb versus prisoners' dilemma, see my comments elsewhere in the thread: I am not proposing that our decision whether to engage in large-scale ancestor simulation has any power to affect our past, only that it may provide some evidence bearing on what's likely to have been in our past.
I just want to clarify in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all a part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the same skeptical intuitions as other posters in this thread about this. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have died. I suspect that it would be a poor replica, and certainly would not contain the same internal life as the person. Again, I am open to being convinced, but nothing about that makes sense to me at the moment.
I think that I did a poor job of making this clear in my first post, and have added a short note at the end to clarify this. You might consider reading it as it should make my argument clearer.
My proposal is far less interesting, original, or involved than this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying, in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular, or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife, just as we shift people in the sims over into an afterlife after they die. This also circumvents teletransportation concerns (which would still exist if we were uploading ourselves into a simulation of our own!) since everything we are now would just be brought over to the afterlife part of the simulation fully intact.
Or they are just interested in the password needed to access the cute cat pictures on my phone. Seriously, we are in the realm of wild speculation, we can't say that evidence points any particular way.
I hope I am not intercepting a series of questions when you were only interested in gjm’s response but I enjoyed your comment and wanted to add my thoughts.
I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that so long as some people play the game this way, as an empirical, descriptive reality of how they actually play, that we are more likely to see certain outcomes in situations that look like Newcomb. This looks like Newcomb.
There is also a second argument further down that suggests that under some circumstances with really high reward, and relatively little cost, that it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more.
I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.
So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations of non-specific, unique, people in the simulation. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we will have more reason to think that it is likely one-boxer/altruist/acausal trade types “above” us have similarly created many simulations, of which we are one. Us doing it here should increase our sense that people “like us” have done it “above” us.
What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, they should run simulations that conform with that.
What the people “above” us, if they exist, believe absolutely does change reality.
What Omega believes changes reality. People one-box anyway.
Who the Calvinist God has allegedly predestined determines reality. People go to church, pray, etc. anyway.
If we are “the type of species” who builds simulations that we would like to be in, we are much more likely to be a species by-and-large who inhabits simulations which we want to be in.
And so we are back to the idea of gods.
Sure - and nothing wrong with that.