This is a bit rough, but I think it is an interesting and potentially compelling idea. To keep it short, and so get more eyes on it, I have sketched only the bare bones of the idea.

     1)      Empirically, people have varying intuitions and beliefs about causality, particularly in Newcomb-like problems (http://wiki.lesswrong.com/wiki/Newcomb's_problem, http://philpapers.org/surveys/results.pl, and https://en.wikipedia.org/wiki/Irresistible_grace).

     2)      Also, as an empirical matter, some people believe in taking actions after the fact, such as one-boxing, or Calvinist “irresistible grace”, to try to ensure or conform to a seemingly already-determined outcome. This might be out of a sense of retrocausality, performance, moral honesty, etc. What matters is that we know they will act it out, despite it violating common-sense causality. There has been some great work on decision theory on LW about trying to thread this needle well.

     3)      The second disjunct of the simulation argument (http://wiki.lesswrong.com/wiki/Simulation_argument) shows that humanity's decision-making is evidentially relevant to what our subjective credence should be that we are in a simulation. That is to say, if we are actively headed toward making simulations, we should increase our credence that we are in a simulation; if we are actively headed away from making simulations, whether through existential risk or through law/policy against them, we should decrease our credence.

     4)      Many, if not most, people would like there to be a pleasant afterlife after death, especially one in which we could be reunited with loved ones.

     5)      There is no reason to believe that simulations which are otherwise nearly identical copies of our world could not contain, after the simulated bodily death of the participants, an extremely long-duration, though finite, "heaven"-like afterlife shared by simulation participants.

     6)      Our heading towards creating such simulations, especially if they were capable of nesting simulations, should increase our credence that we exist in such a simulation, and that we should perhaps expect a heaven-like afterlife of long, though finite, duration.

     7)      Those who believe in alternative causality, or retrocausality, in Newcomb-like situations should be especially excited about the opportunity to push the world towards surviving, allowing these types of simulations, and creating them, as it would potentially suggest, analogously, that if they work towards creating simulations with heaven-like afterlives, they might in some sense be “causing” such a heaven to exist for themselves, and even for friends and family who have already died. Such an idea of life after death, and especially of being reunited with loved ones, can be extremely compelling.

     8)      I believe that people matching the above description, that is, holding an intuition of alternative causality and finding such a heaven-like afterlife compelling, exist. Further, the existence of such people, and their associated motivation to try to create such simulations, should increase the credence even of two-boxing types that we already live in such a world with a heaven-like afterlife. This is because knowledge of a motivated minority desiring simulations should increase credence in the likely success of simulations. This is essentially showing that “this probably happened before, one level up” from the two-box perspective.

     9)      As an empirical matter, I also think there are people who would find the idea of creating simulations with heaven-like afterlives compelling, even if they are not one-boxers, from a simply altruistic perspective: it is a nice thing to do for the future sim people, who can, for example, probabilistically have a much better existence than biological children on earth can, and it is a nice thing to do to increase the credence (and emotional comfort) of both one-boxers and two-boxers in our world thinking that there might be a life after death.

     10)   This creates the opportunity for a secular movement in which people work towards creating these simulations, and use this work and its potential success to derive comfort and meaning from their lives. For example, one might make a donation to a think-tank that creates or promotes such simulations, or that works against existential threats, after a loved one’s death, partly symbolically, partly hopefully.

     11)   There is at least some room for Pascalian considerations even for two-boxers who allow for some humility in their beliefs. Nozick believed one-boxers will become two-boxers if the amount in Box A is raised to $900,000, and two-boxers will become one-boxers if the amount in Box A is lowered to $1. Similarly, trying to work towards these simulations, even if you do not find it altruistically compelling, and even if you think the odds of alternative or retrocausality are infinitesimally small, might make sense in that the reward could be extremely large, including potentially trillions of lifetimes’ worth of time spent in an afterlife “heaven” with friends and family. (A rough expected-value sketch follows below.)
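To make the Pascalian point concrete, here is a minimal expected-value sketch. All numbers are invented for illustration (as with the probabilities elsewhere in this post), and the single-number payoff glosses over every standard objection to Pascalian reasoning:

```python
# Illustrative only: a Pascalian expected-value comparison with made-up numbers.
p = 1e-9                  # tiny credence that the effort actually "buys" an afterlife
afterlife_years = 1e12    # made-up, astronomically large payoff in subjective years
cost_years = 10           # career-years spent pushing for such simulations

expected_gain = p * afterlife_years - cost_years
print(expected_gain)      # 990.0: positive in expectation despite the tiny probability
```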

Finally, this idea might be one worth filling in (I have been, in my private notes, for over a year, but am a bit shy to debut all of that just yet; even working up the courage to post this was difficult), if only because it is interesting, and could be used as a hook to get more people interested in existential risk, including the AI control problem. This is because existential catastrophe is probably the greatest enemy of credence in the future of such simulations, and accordingly of our reasonable credence that we have such a heaven awaiting us after death now. A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more like-minded folks to get involved in the work of MIRI, FHI, CEA, etc. There are also some really interesting ideas about acausal trade, and game theory between higher and lower worlds, as a form of “compulsion” in which they punish worlds for not creating heaven-containing simulations (thereby affecting their credence as observers of the simulation), in order to reach an equilibrium in which simulations with heaven-like afterlives are universal, or nearly universal. More on that later if this is received well.

Also, if anyone would like to join with me in researching, bull-sessioning, or writing about this stuff, please feel free to IM me. Also, if anyone has a really good, non-obvious pin with which to pop my balloon, preferably applied in a gentle way, it would be really appreciated. I would hate to keep spending a lot of energy and time on this if it is fundamentally flawed in some way.

Thank you.

*******************************

November 11 Updates and Edits for Clarification

     1)      There seems to be confusion about what I mean by self-location and credence. A good way to think about this is the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem).

If I imagine myself as Sleeping Beauty (and who doesn’t?), and I am asked on Sunday what my credence is that the coin will be tails, I will say 1/2. If I am awakened during the experiment without being told which day it is and am asked what my credence is that the coin was tails, I will say 2/3. If I am then told it is Monday, I will update my credence to 1/2. If I am told it is Tuesday, I will update my credence to 1. If someone asks me two days after the experiment about my credence of it being tails, and I somehow still do not know what day of the week it is, I will say 1/2. Credence changes with where you are, and with what information you have. As we might be in a simulation, we are somewhere in the “experiment days” and information can help orient our credence. As humanity potentially has some say in whether or not we are in a simulation, information about how humans make decisions about these types of things can and should affect our credence.
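A minimal Monte Carlo sketch of the credences described above (the "thirder" answers), counting over awakenings rather than over experiments; purely illustrative:

```python
import random

# Sleeping Beauty: heads -> one awakening (Monday); tails -> two (Monday, Tuesday).
awakenings = []
for _ in range(100_000):
    coin = random.choice(["heads", "tails"])
    days = ["Monday"] if coin == "heads" else ["Monday", "Tuesday"]
    for day in days:
        awakenings.append((coin, day))

tails = [a for a in awakenings if a[0] == "tails"]
mondays = [a for a in awakenings if a[1] == "Monday"]
tails_mondays = [a for a in tails if a[1] == "Monday"]

print(len(tails) / len(awakenings))        # ~2/3: credence in tails on waking
print(len(tails_mondays) / len(mondays))   # ~1/2: credence in tails once told it is Monday
```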

Imagine Sleeping Beauty is a lesswrong reader. If Sleeping Beauty is unfamiliar with the simulation argument, and someone asks her about her credence of being in a simulation, she probably answers something like 0.0000000001% (all numbers for illustrative purposes only). If someone shows her the simulation argument, her credence increases to 1%. If she stumbles across this blog entry, she increases her credence to 2%, and adds some credence to the additional hypothesis that it may be a simulation with an afterlife. If she sees that a ton of people get really interested in this idea, and start raising funds to build simulations in the future and to lobby governments both for greater AI safeguards and for regulation of future simulations, she raises her credence to 4%. If she lives through the AI superintelligence explosion and simulations are being built, but not yet turned on, her credence increases to 20%. If humanity turns them on, her credence increases to 50%. If there are trillions of them, she increases her credence to 60%. If 99% of simulations survive their own run-ins with artificial superintelligence and produce their own simulations, she increases her credence to 95%.

2)  This set of simulations does not need to recreate the current world or any specific people in it. That is a different idea that is not necessary to this argument. As written, the argument is premised on the idea of creating fully unique people. The point would be to increase our credence that we are functionally identical in type to the unique individuals in the simulation. This is done by creating ignorance or uncertainty in the simulations, so that the majority of people similarly situated, in a world which may or may not be in a simulation, are in fact in a simulation. This should, in our ignorance, increase our credence that we are in a simulation. The point is about how we self-locate, as discussed in the original article by Bostrom. It is a short 12-page read, and if you have not read it yet, I would encourage it:  http://simulation-argument.com/simulation.html. The point I was making about past loved ones was to raise the possibility that the simulations could be designed to transfer people to a separate afterlife simulation where they could be reunited after dying in the first part of the simulation. This was not about trying to create something for us to upload ourselves into, along with attempted replicas of dead loved ones. Staying in one simulation through two phases, a short life and a relatively long afterlife, also has the advantage of circumventing the teletransportation paradox, as “all of the person” can be moved into the afterlife part of the simulation.
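For reference, the self-location point leans on the central quantity in Bostrom's paper: the fraction of observers with human-type experiences who live in simulations. A rough restatement in code (my paraphrase of the paper's equation, with simplified symbols; the numbers plugged in are invented):

```python
# f_p : fraction of human-level civilizations that reach a "posthuman" stage
# f_i : fraction of posthuman civilizations interested in running ancestor-simulations
# n   : average number of such simulations run by an interested civilization
def fraction_simulated(f_p, f_i, n):
    return (f_p * f_i * n) / (f_p * f_i * n + 1)

# Toy numbers only: even modest interest, multiplied by many simulations per
# civilization, pushes the fraction of simulated observers close to 1.
print(fraction_simulated(0.01, 0.01, 1_000_000))  # ~0.99
```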

 

Comments (84; some comments below are truncated)

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more like-minded folks to get involved in the work of MIRI, FHI, CEA, etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have interesting, solid arguments ... please think carefully about whether such publicity helps the existential risk movement more than it... (read more)

4crmflynn8y
I would not worry about that for three reasons:

1) I am very shy online. Even posting this took several days and I did not look at the comments for almost a day after.

2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further.

3) I would never identify myself with MIRI, etc., not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed should that somehow become a problem.

That said, thank you for bringing this up as a concern. I had already thought about it, which is one of the reasons I was mentioning it as a tentative consideration for more deliberation by other people. That said, had I not, it could have been a problem. A lot of stuff in this area is really sensitive, and needs to be handled carefully. That is also why I am nervous to even post it.

All of that said, I think I might make another tentative proposal for further consideration. I think that some of these ideas ARE worth getting out there to more people. I have been involved in international NGO work for over a decade, studied it at university, and have lived and worked in half a dozen countries doing this work, and had no exposure to Effective Altruism, FHI, Existential Risk, etc. I hang out in policy/law/NGO circles, and none of my friends in these circles talk about it either. These ideas are not really getting out to those who should be exposed to them. I found EA/MIRI/Existential Risk through the simulation argument, which I read about on a blog I found off of reddit while clicking around on the internet about a year ago. That is kind of messed up. I really wish I had stumbled onto it earlier, and I tentatively think there is a lot of value in making it easier for others to stumble onto it in the future. Especially policy/law types, who are going to be needed at some point in

This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.

To boil it down to a simple thought experiment. Suppose I am in the future where we have a ton of computing power and I know something bad will happen tomorrow (say I'll be fired) barring some 1/1000 likelihood quantum event. No problem, I'll just make millions of simulations of the world with me in my current state so that tomorrow the 1/1000 event happens and I'm saved since I'm almost certainly in one of these simulations I'm about to make!
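A quick self-location calculation for this thought experiment, under the contested assumption that you should treat yourself as a random draw from all copies in your situation (whether that assumption, or making the copies, actually helps anything is exactly the one-box/two-box dispute discussed below):

```python
# One "real" copy faces a 1/1000 chance of rescue; n_sims simulated copies are
# built so that, in the simulations, the rescue event simply happens.
def p_rescued(n_sims, p_event=1 / 1000):
    total = n_sims + 1
    p_sim = n_sims / total   # credence of being one of the simulated copies
    p_real = 1 / total       # credence of being the original
    return p_sim * 1.0 + p_real * p_event

print(p_rescued(0))          # 0.001: no simulations, just the base rate
print(p_rescued(1_000_000))  # ~0.999999: "almost certainly saved", by this reasoning
```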

0crmflynn8y
Maybe? We can increase our credence, but I think whether or not it increases the likelihood is an open question. The intuitions seem to split between two-boxers and a subset of one-boxers. That said, thank you for the secondary thought experiment, which is really interesting.

I agree with your logic; I have been thinking about a simulated afterlife and have put it in my simulations map. The main problem here is the Copernican principle: if heavenly simulations dominate the simulation landscape, I should more probably find myself already in heaven, not in real life. But maybe I am already in heaven and am just playing a role-playing game about the Singularity.

2jacob_cannell8y
That's a good thing. There is one copy of us in the basement universe which actually creates the heaven for the rest of us. As we can never know which version we are, it doesn't really matter which version is in the basement universe.

This has basically been my belief system for a while - we could call it simulism perhaps. These memes are also old. Tipler proposed the whole 'simulation implementing afterlife' idea a few decades ago, although his particular implementation ideas involved emulations at the end of time and questionable physics. Despite that, the general idea of mind uploading into a virtual afterlife appears to be pretty mainstream now in transhumanist thought (e.g. the Turing Church).

I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to ... (read more)

0crmflynn8y
When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility? I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his. Also, I am not really trying to promote it so much as to feel it out. I have not yet found any reason to think I am wrong about it being a possibility, though I myself do not “feel” it to be likely. That said, with stuff like this, I have no sense that intuitions would tell me anything useful either. “Despite that, the general idea of mind uploading into virtual afterlife appears to be pretty mainstream now in transhumanist thought (ie Turing Church).” Yeah, it comes up in “Superintelligence” and some other things I have read too. The small difference, if there is one, is that this looks backwards, and could be a way to collect those who have already died, and also could be a way to hedge bets for those of us who may not live long enough for transhumanism. It also circumvents the teletransportation paradox and other issues in the philosophy of identity. Also, even when not being treated as a goal, it seems to have evidential value. Finally, there are some acausal trade considerations, and considerations with “watering down” simulations through AI “thought crimes,” that can be considered once this is brought in. I will probably post more of my tentative thoughts on that later. “I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators. 'saving' souls, etc)” The idea of posting about something that is unpopular on such an open-minded site is one of the things that
0jacob_cannell8y
Well naturally I believe the latter, but I also believe the former in the sense of being more likely true than not. Tipler isn't a full crackpot. His earlier book with Barrow - the Anthropic Cosmological Principle - was important in a number of respects and influenced later thinkers such as Kurzweil and Bostrom. Tipler committed to his particular physical cosmology which is now out of date in light of new observations. Cosmological artificial selection (evolution of physics over deep time via creation of new 'baby' universes by superintelligences) is far more likely. In any kind of multiverse, universes which reproduce will dominate in terms of observer measures. Not sure what you mean by this. Don't let that stop you. You can post it on your blog then discuss it here and elsewhere. LW discussion is more open-minded these days. I was an atheist until I heard the sim argument and I then updated immediately. It is interesting to look at the various world religions in light of Simulism and the Singularity. Some of the beliefs end up being inadvertently correct or even prescient. For example, consider beliefs concerning burial vs cremation. It's roughly a 50/50 split across cultures/religions over time. Both are effective from a health/sanitation point of view, but burial is somewhat more expensive. Judeo-Christian religions all strongly believe in burial (cremation was actually outlawed in medieval Europe). Hinduism on the other hand strongly supports cremation. In the standard (pre-singularity) atheist worldview, these are just arbitrary rituals. However, we now know that this couldn't be farther from the truth. Burial preserves DNA for thousands, if not tens of thousands of years. So at some point in the near future robots can extract all of that DNA and use it to help in resurrection simulations. Obviously having someone's DNA is just the beginning of the information that you need for mind reconstruction, but it's a very important first step. There are n

I am not sure what the take-away from this idea is. If it is

should increase our credence that we exist in such a simulation, and that we should perhaps expect a heaven-like afterlife of long, though finite, duration

then, well, increasing credence from 0.0...001% to 0.0...01% is a jump by an order of magnitude, but it still doesn't move the needle, leaving the probability in the "vanishingly small" realm.

If it is that we should strive to build such simulations, there are a few issues with this call to action, starting with the observation that at our techno... (read more)

2crmflynn8y
Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself. I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis, where I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” shows some odd properties of evidence. It is a bit like if I was standing outside of the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased ~$1000, if I see someone walk out with one unopened box, I expect them to have increased their wealth ~$1,000,000. That is sort of odd isn’t it? If I see a small, dedicated group of people working on how they would structure simulations, and raising money and trusts to push it a certain political way in the future (laws requiring all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information gathering purposes without “retiring” the people to a heaven afterward, etc.) I have more reason to think I might get a heaven after I die. As far as the “call to action” I hope that my post was not really read that way. I might have been clearer, and apologize. I think that running simulations followed by afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done for many reasons. It is worth discussing. One could also imagine that it might be determined, if we overcome and survive the AI intelligence explosion with a good outcome, that it is a worthwhile goal to create more human lives, which are pleasant, throughout o
2Lumifer8y
Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods". I would treat it as a category error: ideas are not evidence. Even if they look "evidence-like". Why would future superpowerful people be interested in increasing your credence? Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead of leaving us in doubt?".
1crmflynn8y
I think you and I might be missing one another. Or that I am at least missing your point. Accordingly, my responses below might be off point. Hopefully they are not. I don't think that necessarily follows. Creationism implies divinity, and gods imply something bigger than people who build a machine. Are your parents gods for creating you? In my own estimate, creating a simulation is like founding a sperm bank; you are not really "creating" anything, you are just moving pieces around in a way that facilitates more lives. You can mess around with the life and the world, but so can anyone in real life, especially if they have access to power, or guns, or a sperm bank, again, for that matter. It is different in scale, but not in type. Then again, I might be thinking too highly of "gods"? Also, I get the impression, and apologies if I am wrong, that you are mostly trying to show "family resemblance" with something many of us are skeptical of or dislike. I am an atheist myself, and from a very religious background which leaves me wary. However, I think it is worth avoiding a "clustering" way of thinking. If you don't want to consider something because of who said it, or because it vaguely or analogously resembles something you dislike, you can miss out on some interesting stuff. I think I avoided AI, etc. too long because I thought I did not really like "computer things" which was a mistake that cost me some great time in some huge, wide open, intellectual spaces I now love to run around in. I might be missing what you are saying, but I do not think I was saying that ideas were evidence. I was saying a group of people rallying around an idea could be a form of evidence. In this case, the "evidence" is that a lot of people might want something. What this is evidence of is that them wanting something makes it more likely that it will come about. I am not sure how this would fail as evidence. Two things: 1) They are not interested in the credence of people in the simulati
3Lumifer8y
Not for the sims who live inside the machine. Let me recount once again the relevant features:

* Beings who created the world and are not of this world
* Beings who are not bound by the rules of this world (from the inside view they are not bound by physics and can do bona fide miracles)
* Beings who can change this world at will.

These beings look very much like gods to me. The "not bound by our physics", in particular, decisively separates them from sims who, of course, do affect their world in many ways. That it will come about, yes. That it is this way, no. But that's the whole causality/Newcomb issue. Makes you think so, but doesn't make me think so. Again, this is the core issue here. One-boxers want it today, right now? Um, nothing happens.
0crmflynn8y
I think that this sort of risks being an argument about a definition of a word, as we can mostly agree on the potential features of the set-up. But because I have a sense that this claim comes with an implicit charge of fideism, I'll take another round at clarifying my position. Also, I have written a short update to my original post to clarify some things that I think I was too vague on in the original post. There is a trade-off between being short enough to encourage people to read it, and being thorough enough to be clear, and I think I under-wrote it a bit initially. They did not really "create" this world so much as organize certain aspects of the environment. Simulated people are still existent in a physical world, albeit as things in a computer. The fact that the world as the simulated people conceive of it is not what it appears to be happens to us as well when we dig into physics and everything becomes weird and unfamiliar. If I am in the environment of a video game, I do not think that anyone has created a different world, I just think that they have created a different environment by arranging bits of pre-existing world. Is something a miracle if it can be clear in physical terms how it happened? If there is a simulation, then the physics is a replica of physics, and "defying" it is not really any more miraculous than me breaking the Mars off of a diorama of the solar system. Everyone can do that. I do that by moving a cup of coffee from one place to another. In a more powerful sense, political philosophers have dramatically determined how humans have existed over the last 150 years. Human will shapes our existences a great deal already. I think that for you, "gods" emerge as a being grows in power, whereas I tend to think that divinity implies something different not just in scale, but in type. This might just be a trivial difference in opinion or definition or approach to something with no real relevance. I agree with you that this is the c
0Lumifer8y
That's what creation is. The issue here is inside view / outside view. Take Pac-Man. From the outside, you arranged bits of existing world to make the Pac-Man world. From the inside, you have no idea that such things as clouds, or marmosets, or airplanes exist: your world consists of walls, dots, and ghosts. Outside/inside view again. If I saw Mars arbitrarily breaking out of its orbit and go careening off to somewhere, that would look pretty miraculous to me. I agree about the difference in type. It is here: these beings are not of this world. The difference between you and a character in a MMORG is a difference in type. Re one/two-boxers, see my answer to the other post...
0crmflynn8y
I agree with you about the inside / outside view. I also think I agree with you about the characteristics of the simulators in relationship to the simulation. I think I just have a vaguely different, and perhaps personal, sense of how I would define "divine" and "god." If we are in a simulation, I would not consider the simulators gods. Very powerful people, but not gods. If they tried to argue with me that they were gods because they were made of a lot of organic molecules whereas I was just information in a machine, I would suggested it was a distinction without a difference. Show me the uncaused cause or something outside of physics and we can talk
0Lumifer8y
There is a classic answer to this :-/ In the context of the simulated world uncaused causes and breaking physics are easy. Hack the simulation, write directly to the memory, and all things are possible. It's just the inside/outside view again.
1jacob_cannell8y
We live in a very special time - right on the cusp of AGI - so there is much that one can do right now. ;)
1Lumifer8y
AGI has been 20 years away for the past 50 years or so. I see no reason to believe the pattern will break any time now :-/
2jacob_cannell8y
No - AGI's arrival can be expected around the end of conventional Moore's Law, as that is naturally when we can expect to have brain-level hardware performance. Before that, AGI is impractical; shortly after that, it becomes inevitable. There are a large number of people making predictions, and almost all of them have no idea what they are talking about. It is the logic behind the predictions that matters.
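For what "brain-level hardware performance" might mean, here is a rough, purely illustrative Fermi sketch using commonly cited ballpark figures (orders of magnitude only; none of these numbers come from this thread):

```python
# Common back-of-the-envelope figures, rough orders of magnitude only.
synapses = 1e14          # often quoted as ~10^14-10^15 synapses in a human brain
signal_rate_hz = 1e2     # ~100 signaling events per synapse per second (upper-end figure)
ops_per_event = 1        # treat each synaptic event as ~1 low-precision operation

brain_ops_per_sec = synapses * signal_rate_hz * ops_per_event
print(f"{brain_ops_per_sec:.0e}")  # ~1e16 ops/s, the usual rough target
```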
0Lumifer8y
I don't think our progress in creating an AGI is constrained by hardware at this point. It's a software problem and you can't solve it by building larger and more densely packed supercomputers. Yep :-)
0jacob_cannell8y
That is now possibly arguably just becoming true for the first time - as we approach the end of Moore's Law and device geometry shrinks to synapse comparable sizes, densities, etc. Still, current hardware/software is not all that efficient for the computations that intelligence requires - which is namely enormous amounts of low precision/noisy approximate computing. Of course you can - it just wouldn't be economical. AGI running on a billion dollar super computer is not practical AGI, as AGI is AI that can do everything a human can do but better - which naturally must include cost. It isn't a problem of what math to implement - we have that figured out. It's a question of efficiency.
-1Lumifer8y
Why not? AGI doesn't involve emulating Fred the janitor, the first AGI is likely to have a specific purpose and so will likely have huge advantages over meatbags in the particular domain it was made for. If people were able to build an AGI on a billion-dollar chunk of hardware right now they would certainly do so, if only as a proof of concept. A billion isn't that much money to a certain class of organizations and people. Oh, really? I'm afraid I find that hard to believe.
0jacob_cannell8y
Say you have the code/structure for an AGI all figured out, but it runs in real-time on a billion dollar/year supercomputer. You now have to wait decades to train/educate it up to an adult. Furthermore, the probability that you get the seed code/structure right on the first try is essentially zero. So rather obviously - to even get AGI in the first place you need enough efficiency to run one AGI mind in real-time on something far far less than a supercomputer. Hard to believe only for those outside ML.
0V_V8y
I don't think that even in ML the school of "let's just make a bigger neural network" is taken seriously. Neural networks are prone to overfitting. All the modern big neural networks that are fashionable these days require large amounts of training data. Scale up these networks to the size of the human brain, and, even assuming that you have the hardware resources to run them, you will get something that just memorizes the training set and doesn't perform any useful generalization. Humans can learn from comparatively small amounts of data, and in particular from very little and very indirectly supervised data: you don't have to show a child a thousand apples and push each time an "apple" button on their head for them to learn what an apple looks like. There is currently lots of research in ML in how to make use of unsupervised data, which is cheaper and more abundant than supervised data, but this is still definitely an open problem, so much that it isn't even clear what properties we want to model and how to evaluate these models (e.g. check out this recent paper). Therefore, the math relevant to ML has definitely not been all worked out.
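A minimal sketch of the overfitting point, using high-degree polynomial regression as a stand-in for "a bigger network" (illustrative only; numpy polyfit rather than an actual neural net):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: noisy samples of a simple underlying function.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):  # modest capacity vs. enough capacity to memorize all 10 points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_mse, 4), round(test_mse, 4))

# The degree-9 fit drives training error toward zero (near-memorization) while
# test error typically gets worse: more capacity, same data, poorer generalization.
```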
0jacob_cannell8y
That's not actually what I meant when I said we have the math figured out. The math behind general learning is just general Bayesian inference in its various forms. The difficulty is not so much in the math; it is in scaling up efficiently. To a first approximation the recent surge in progress in AI is entirely due to just making bigger neural networks. As numerous DL researchers have admitted - the new wave of DL is basically just techniques from the 80's scaled up on modern GPUs. Regarding unsupervised learning - I wholeheartedly agree. However one should also keep in mind that UL and SL are just minor variations of the same theme in a Bayesian framework. If you have accurate labeled data, you might as well use it. In order to recognize and verbally name apples, a child must first have years of visual experience. Supervised DL systems trained from scratch need to learn everything from scratch, even the lowest level features. The object in these systems is not to maximize learning from small amounts of training data. In the limited training data domain and more generally for mixed datasets where there is a large amount of unlabeled data, transfer learning and mixed UL/SL can do better. Just discussing that here. The only real surprising part of that paper is the "good model, poor sampling" section. It's not clear how often their particular pathological special case actually shows up in practice. In general a Solomonoff learner will not have that problem. I suspect that a more robust sampling procedure could fix the mismatch. A robust sampler would be one that outputs samples according to their total probability as measured by encoding cost. This corrects the mismatch between the encoder and the sampler. Naively implemented this makes the sampling far more expensive, perhaps exponentially so, but nonetheless it suggests the problem is not fundamental.
2V_V8y
Ok, but this is even more vague then. At least neural networks are a coherent class of algorithms, with lots of architectural variations and hyperparameters to tune, but still functionally similar. General Bayesian inference, on the other hand, is a broad framework with dozens types of algorithms for different tasks, based on different assumptions and with different functional structure. You could as well say that once we formulated the theory of universal computation and we had the first digital computers up and running, then we had all the math figured out and it was just a matter of scaling up things. This was probably the sentiment at the famous Dartmouth conference in 1956 where they predicted that ten smart people brainstorming for two months could make significant advancements in multiple fundamental AI problems. I think that we know better now. Supervised learning may be a special case of unsupervised learning but not the other way round. Currently we can only do supervised learning well, at least when when big data is available. There have been attempts to reduce unsupervised learning to supervised learning, which had some practical success in textual NLP (with neural language models and word vectors) but not in other domains such as vision and speech. The paper I linked, IMHO, may shed some light on why this happened: one of the most popular evaluation measure and training objective, the negative log-likelihood (aka empirical cross-entropy), which captures well our intuition of what a good model must do in binary (or low-dimensional) classification tasks, may break down in the high-dimensional regime, typical of some unsupervised tasks such as sampling. I've never seen a modern generative model generate realistic samples of natural images or speech. Text generation fares somewhat better, but it's still far from anything able to pass a Turing test. By contrast, discriminative models for classification or regression trained on large supervised data can o
0jacob_cannell8y
I don't agree with this memetic taxonomy. I consider neural networks to be mostly synonymous with algebraic tensor networks - general computational graphs over tensors. As such ANN describes a modeling language family, equivalent in expressibility to binary circuit models (and thus Turing universal) but considerably more computationally efficient. The tensor algebra abstraction more closely matches physical hardware reality. So as a general computing paradigm or circuit model, ANNs can be combined with any approximate inference technique. Backprop on log-likelihood is just one obvious approx method. Not quite, because it took longer for the math for inference/learning to be worked out, and even somewhat longer for efficient approximations - and indeed that work is still ongoing. Regardless, even if all the math had been available in 1956 it wouldn't have mattered, as they still would have had to wait 60 years or so for efficient implementations (hardware + software). To the extent that this is a problem in practice, it's a problem with typical sampling, not the measure itself. As I mentioned earlier, I believe it can be solved by more advanced sampling techniques that respect total KC/Solomonoff probability. Using these hypothetical correct samplers, good models should always produce good samples. That being said I agree that generative modelling and realistic sampling in particular is an area ripe for innovation. You actually probably have seen this in the form of CG in realistic video games or films. Of course those models are hand-crafted rather than learned probabilistic generative models. I believe that cross-fertilization of ideas/techniques from graphics and ML will transform both in the near future. The current image generative models in ML are extremely weak when viewed as procedural graphics engines - for the most part they are just 2D image blenders.
0Lumifer8y
How would you know that you have it "all figured out"? Err... didn't you just say that it's not a software issue and we have already figured out what math to implement? What's the problem? Right... build a NN a mile wide and a mile deep and let 'er rip X-/
0jacob_cannell8y
No, I never said it is not a software issue - because the distinction between software/hardware issues is murky at best, especially in the era of ML where most of the 'software' is learned automatically. You are trolling now - cutting my quotes out of context.
0crmflynn8y
I am not sure it matters when it comes. Presumably, unless we find some other way to extinction, it will come at some point. When it comes, it is likely that the technology will not be a problem for it. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV) that will likely have a large effect on the odds of it happening. Also, I know some people who sincerely think belief in god is based almost exclusively on fear of death. I am skeptical of this, but if it is true, or even partially true, if even a fraction of the fervor/energy/dedication that is put into religion was put into pushing for this, I think it might be a serious force. The point about credence is just a point about it being interesting, decision making aside, that something as fickle as collective human will, might determine if I “survive” death, and if all my dead loved ones will as well. So, for example, if this post, or someone building off of my post, but doing it better, were to explode on LW and pour out into reddit and the media, it should increase our credence in an afterlife. If its reception is lukewarm, decrease it. There is something really weird about that, and worth chewing on. Also, I think that people’s motivation to have an afterlife seems like a more compelling reason to create simulations than experimentation/entertainment, so it helps shift credence around among the four disjuncts of the simulation argument.
-2Lumifer8y
Simulations of long-ago ancestors..? Imagine that you have the ability to run a simulation now. Would you want to populate it by people like you, that is, fresh people de novo and possibly people from your parents and grandparents generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go? No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.
3gjm8y
It can give evidence, though. Consider Hypothesis A: "Societies like ours will generally not decide, as their technological capabilities grow, to engage in massive simulation of their forebears" and Hypothesis B, which omits the word "not". Then:

* The decisions made by, and ideas widely held in, our society can be evidence favouring A or B.
* We are more likely simulations if B is right than if A is right.

Similarly if the hypotheses are "... to engage in massive simulation of their forebears, including blissful afterlives", in which case we are more likely to have blissful simulated afterlives if B is right than if A is right. (Not necessarily more likely to have blissful afterlives simpliciter, though -- perhaps, e.g., the truth of B would somehow make it less likely that we get blissful afterlives provided by gods.)

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for B over A. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.
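One way to make "only very weak evidence" concrete is a toy Bayes update (all numbers invented for illustration):

```python
# Toy Bayes update: how much should enthusiasm for this idea move credence in B
# (societies like ours do go on to run afterlife-containing simulations)?
prior_b = 0.001                # invented prior for hypothesis B
p_enthusiasm_given_b = 0.6     # enthusiasm slightly more likely if B is true...
p_enthusiasm_given_a = 0.5     # ...than if A is true: weak evidence either way

numerator = p_enthusiasm_given_b * prior_b
posterior_b = numerator / (numerator + p_enthusiasm_given_a * (1 - prior_b))
print(posterior_b)             # ~0.0012: a real but tiny shift in credence
```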
0V_V8y
I think that the problem with this sort of arguments is that it's like cooperating in prisoner's dilemma hoping that superrationality will make the other player cooperate: It doesn't work. It seems that lots of people here conflate Newcomb's problem, which is a very unusual single-player decision problem, with prisoner's dilemma, which is the prototypical competitive game from game theory. Also, I don't see why I should consider an accurate simulation of me, from my birth to my death, ran after my real death as a form of afterlife. How would it be functionally different than screening a movie of my life?
1gjm8y
My understanding is that the proposal here isn't that an accurate simulation of your life should be counted as an afterlife; it's that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they'd be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful -- or for that matter torturous -- afterlife might be provided for you). As for Newcomb versus prisoners' dilemma, see my comments elsewhere in the thread: I am not proposing that our decision whether to engage in large-scale ancestor simulation has any power to affect our past, only that it may provide some evidence bearing on what's likely to have been in our past.
0crmflynn8y
I just want to clarify in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all a part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the same skeptical intuitions as other posters in this thread about this. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have died. I suspect that it would be a poor replica, and certainly would not contain the same internal life as the person. Again, I am open to being convinced, but nothing about that makes sense to me at the moment. I think that I did a poor job of making this clear in my first post, and have added a short note at the end to clarify this. You might consider reading it as it should make my argument clearer. My proposal is far less interesting, original, or involved then this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife just as we shift people in the s
0V_V8y
Or they are just interested in the password needed to access the cute cat pictures on my phone. Seriously, we are in the realm of wild speculation, we can't say that evidence points any particular way.
0crmflynn8y
I hope I am not intercepting a series of questions when you were only interested in gjm’s response but I enjoyed your comment and wanted to add my thoughts. I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that so long as some people play the game this way, as an empirical, descriptive reality of how they actually play, that we are more likely to see certain outcomes in situations that look like Newcomb. This looks like Newcomb. There is also a second argument further down that suggests that under some circumstances with really high reward, and relatively little cost, that it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more. I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic. So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations of non-specific, unique, people in the simulation. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The ide
0crmflynn8y
I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will. I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have:

1) Issues with the theory of identity
2) Issues with theory of mind
3) Issues with theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, problem of evil)
4) Self-interest (more resources for existing individuals to upload into and utilize)
5) The existence of a convincing two-boxer "proof" of some sort

I would also like to know why an "enthusiastic takeup of the ideas in this post" would not increase your credence significantly. I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it? Thank you in advance for any insight; I have spent too long chewing on this without much detailed input, and I would really value it.
1gjm8y
I'm not sure I have much to say that you won't have thought of already. But:

First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren't really available -- that there's only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren't smart enough to find our way to them.

Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we're all living in some kind of virtual universe; wouldn't it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world? Someone else -- entirelyuseless? -- observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors' minds to simulate them anywhere else, so it's just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What's so special about them, compared with all the other possible minds?

Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it's possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar's last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there's
0crmflynn8y
Absolutely. I think this is where this thing most likely fails. Somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it is the case that it leans me heavily into this. So I am not saying that they WOULD do it, I actually can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included in the post originally.) Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence of an afterlife in which those already dead can join in. This is clearer I think in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt. Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulations afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two ones that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual
0jacob_cannell8y
Yes, but that isn't enough to defeat simulations. One successful future can create a huge number of sims. Observational selection effects thus make survival far more likely than otherwise expected. Even without quantum computing or reversible computing, even just using sustainable resources on earth (solar) - even with those limitations - there are plenty of resources to create large numbers of sims. The cost is about the same either way. So the question is one of economic preferences. When people can use their wealth to create either new children or bring back the dead, what will they do? You are thus assuming there will be very low demand for resurrecting the dead vs creating new children. This is rather obviously unlikely. This technology probably isn't that far away - it is a 21st century tech, not 25th. It almost automatically follows AGI, as AGI is actually just the tech to create minds - nothing less. Many people alive today will still be alive when these sims are built. They will bring back their loved ones, who then will want to bring back theirs, and so on. Most people won't understand or believe it until it happens. But likewise very few people actually understand how modern advanced rendering engines work - which would seem like magic to someone from just 50 years ago. It's an approximate inference problem. The sim never needs anything even remotely close to atomic information. In terms of world detail levels it only requires a little more than current games. The main new tech required is just the large scale massive inference supercomputing infrastructure that AGI requires anyway. It's easier to understand if you just think of a human brain sim growing up in something like the Matrix, where events are curiously staged and controlled behind the scenes by AIs.
1gjm8y
The opinion-to-reasons ratio is quite high in both your comment and mine to which it's replying, which is probably a sign that there's only limited value in exploring our disagreements, but I'll make a few comments. One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as "why would it create any?".) The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it's easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it's possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where's the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that's one more reason to think that resurrecting them is terribly expensive.) (If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they'll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.) You say with apparent confidence that "this technology probably isn't that far away". Of course that could be correct, but my guess is that you're wronger than a very wrong thing made of wrong. We can't even simulate C. elegans yet, even though that only has about 300 neurons and they're always wired up the same way (which we know). Yes, it's an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual
0jacob_cannell8y
I've already answered this - because living people have a high interest in past dead people, and would like them to live again. It's that simple.

True, but most of the additional cost boils down to a constant factor once you amortize at large scale. Recreating a single individual - very expensive. Recreating billions? That reduces down to something closer to the scaling cost of simulating that many minds.

No, you don't. For example, the amount of information remaining about my grandfather, who died in the 1950s, is pretty small. We could recover his DNA, and we have a few photos. We have some poetry he wrote, and letters. The total amount of information contained in the memories of living relatives is small, and will be even less by the time the tech is available. So from my perspective the target is very wide. Personal identity is subjectively relative.

You wouldn't. I think you misunderstand. You need the historical sims to recreate the dead in the first place. But once that is running, you can copy out their minds at any point. However, you always need one copy to remain in the historical sim for consistency (until they die in the hist-sim).

You could also say we can't simulate bacteria, but neither is relevant. I'm not familiar enough with the C. elegans sims to evaluate your claim that the current sims are complete failures, but even if that is true it doesn't tell us much, because only a tiny amount of resources have been spent on it. Just to be clear - the historical ress-sims under discussion will be created by large-scale AGI (superintelligence). When I say this tech isn't that far away, it's because AGI isn't that far away, and this follows shortly thereafter.

Hardly. You are assuming naive encoding without compression. Neural nets - especially large biological brains - are enormously redundant and highly compressible. Look - it's really hard to accurately estimate the resources for things like this unless you actually know how to build it. 10^15 is a reasonable upper bound.
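(A toy version of the amortization claim above. All numbers are invented purely for illustration - a one-off research/setup cost spread over many resurrected minds quickly stops dominating the per-mind cost.)

    # Toy illustration of the amortization claim above. The numbers are
    # invented for illustration only; nothing here is an actual cost estimate.
    fixed_setup = 1e12   # one-off historical research / setup cost (arbitrary units)
    per_mind = 1e3       # marginal cost of simulating one additional mind

    for n in (1, 1_000_000, 1_000_000_000):
        cost_per_mind = (fixed_setup + per_mind * n) / n
        print(f"{n:>13,} minds -> {cost_per_mind:,.0f} per mind")
    # 1 mind -> ~1e12 per mind; 1e6 minds -> ~1.0e6; 1e9 minds -> ~2e3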
0gjm8y
But the answer you go on to repeat is one I already explained wasn't relevant, in the sentence after the one you quoted.

I'm not sure what you're arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead. (The factor might well be much more than 1000.) I don't understand how this is any sort of counterargument to my suggestion that it's a reason to simulate new minds rather than old.

You say that like it's a good thing, but what it actually means is that almost certainly we can't bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that's all. Why would you prefer that over making new minds so much as to justify the large extra expense of getting the best approximation we can?

I'm not sure what that means. I'd expect that you use the historical simulation in the objective function for the (enormous) optimization problem of determining all the parameters that govern their brain, and then you throw it away and plug the resulting mind into your not-historical simulation. It will always have been the case that at one point you did the historical simulation, but the other simulation won't start going wrong just because you shut down the historical one.

Anyway: as I said before, if you expect lots of historical simulation just to figure out what to put into the non-historical simulation, then that's another reason to think that ancestor simulation is very expensive (because you have to do all that historical simulation). On the other hand, if you expect that a small amount of historical simulation will suffice then (1) I don't believe you (if you're estimating the parameters this way, you'll need to do a lot of it; any optimization procedure needs to evaluate the objective function many times) and (2) in t
1jacob_cannell8y
I don't really have a great deal of time to explain this, so I'll be brief. Basically this is something I've thought a great deal about, and I have a rather detailed technical vision for how to achieve it (at least to the extent that anyone can today - I'm an expert in the relevant fields, computer simulation/graphics and machine learning, and this is my long-term life goal). Fully explaining a rough roadmap would require a small book or a long paper, so keep that in mind.

Sorry - I meant a large additive constant, not a constant multiplier. Simulating a mind costs the same regardless of whether it's in a historical sim world, a modern-day sim, a futuristic sim, or a fantasy sim: the cost of simulating the world to (our very crude) sensory perception limits is always about the same. The extra cost for an h-sim versus the others is the initial historical research/setup (a constant) plus consistency guidance. The consistency enforcement can be achieved by replacing standard forward inference with goal-directed hierarchical bidirectional inference; the cost ends up asymptotically about the same.

Rather than just a physical sim, it's more like a very deep hierarchy where, at the highest levels of abstraction, historical events are compressed down to text-like form in some enormous evolving database written and rewritten by an army of historian AIs. Lower, more detailed levels in the graph eventually resolve down into 3D objects and physical simulation, sparsely, as needed.

As I said earlier - you do not determine who is or is not my grandfather. Your beliefs have zero weight on that matter. This is such an enormously different perspective that it isn't worth discussing more until you actually understand what I mean when I say personal identity is relative and subjective. Do you grok it?

Perhaps, but I'm not a random sample - not part of your 'we'. I've spent a great deal of time researching the road to AGI. I've written a little about related issues in the past. AGI will
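(A very loose sketch of the "resolve detail sparsely as needed" idea described above. This is my own illustrative structure under stated assumptions, not the commenter's actual design: high-level events live as cheap text-like records, and finer detail is only generated when something in the sim actually observes it. All names here are hypothetical.)

    # Illustrative-only sketch of lazy level-of-detail refinement, loosely
    # following the hierarchy described in the comment above.
    class EventNode:
        def __init__(self, summary):
            self.summary = summary   # compressed, text-like historical record
            self.detail = None       # detailed (e.g. 3D/physical) state, absent by default

        def observe(self, generate_detail):
            # Expand detail only when observed ("sparsely as needed"), then cache it.
            if self.detail is None:
                self.detail = generate_detail(self.summary)
            return self.detail

    # usage sketch
    node = EventNode("1950s: grandfather writes a letter at the farmhouse")
    detail = node.observe(lambda s: {"scene": s, "objects": ["desk", "letter", "lamp"]})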
0Lumifer8y
Can you provide some links to your publications on the topic of machine learning?
0jacob_cannell8y
Not yet. :) I meant "expert" only in the "read up on the field" sense, not a recognized academic expert. Besides, much industrial work is not published in academic journals for various reasons (the time isn't justified, secrecy, etc.).
0gjm8y
Historical versus other sims: I agree that if the simulation runs for infinitely long then the relevant difference is an additive rather than a multiplicative constant. But in practice it won't.

Yes, of course I understand your point that I don't get to decide what counts as your grandfather; neither do you get to decide what counts as mine. You apparently expect that our successors will attach a lot of value to simulating people who, for all they know (on the basis of a perhaps tiny amount of information), might as well be copies of their ancestors. I do not expect that - not because I think I get to decide what counts as your grandfather, but because I don't expect our successors to think in the way you apparently expect them to think.

Yes, you'll have terrible overfitting problems if you have too many parameters. But the relevant comparison isn't between the number of parameters in the model and the number of synapses; it's between the number of parameters in the model and the amount of information we have to nail the model down. If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately. I appreciate that you think something far cruder will suffice. I hope you appreciate that I disagree. (I also hope you don't think I disagree because I'm an idiot.)

Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y), and if Y << X then the correct conclusion isn't "excellent, our number of parameters[1] will be relatively small to avoid overfitting, so we don't need to worry that the fitting process will take forever", it's "damn, it turns out we can't reconstruct this person".

[1] It would be better to say something like "number of independent parameters".
1jacob_cannell8y
AGI will change our world in many ways, one of which concerns our views on personal identity. After AGI, people will become accustomed to many different versions or branches of the same mind, mind forking, merging, etc. "Copy" implies a version that is somehow lesser, which is not the case. Indeed, in a successful sim scenario almost everyone is technically a copy.

The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses. Right - again, we know that it can't be much more than 10^14 (the number of synapses in an adult human brain; it's not 10^15, BTW), and it could be as low as 10^10. The average synapse stores only a bit or two at most (you can look it up; it's been measured - the typical median synapse is tiny and has an extremely low SNR corresponding to a small number of bits). We can argue about numbers in between, but it doesn't really matter, because either way it isn't that much.

No - it just doesn't work that way, because identity is not binary. It is infinite shades of grey. Different levels of success require only getting close enough in mindspace, and that is highly relative to one's subjective knowledge of the person. What matters most is consistency. It's not as if the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous; our memory is actually fairly poor.

There will be multiple versions of past people - just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn't nearly as important as you seem to think - and it is far less important than historical consistency with the rest of the world. We are in the same situation today. For all I know, all of my past life is a fantasy created on the fly. What actually matters is consistency - that my memories match the memories of others and recorded history. And in fact due to the malleabilit
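(A quick back-of-envelope check of the figures being traded here, using the numbers asserted in the comment above - 10^10 to 10^14 synapses' worth of information at roughly 1-2 bits each. These are the commenter's assumptions, not settled facts.)

    # Illustrative arithmetic only, using the figures claimed in the comment above.
    bits_per_synapse = 2                   # upper end of the "a bit or two" claim
    for synapses in (1e10, 1e14):
        total_bytes = synapses * bits_per_synapse / 8
        print(f"{synapses:.0e} synapses -> {total_bytes:.1e} bytes "
              f"({total_bytes / 1e9:,.0f} GB) before cross-mind compression")
    # 1e10 -> 2.5e9 bytes (~2.5 GB); 1e14 -> 2.5e13 bytes (~25,000 GB)

On these assumptions the low end lands near gjm's "gigabyte of maximally-compressed information" comparison; the disagreement is over where in (or below) this range a person-specifying description actually falls.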
1gjm8y
I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.

That's not how I was intending to use the word. You've been arguing that we need substantially less information than "exactly the amount of compressed information encoded in the synapses".

I promise, I do understand this, and I don't see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what's required is the exact same neurons, or anything like that.)

These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn't one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you've offered no grounds for thinking that they will feel the same way about this as you do; you've just stated your own position as if it's a matter of objective fact (albeit about matters of not-objective-fact).

Only if you don't distinguish between what's possible and what's likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along. I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation p
0jacob_cannell8y
That was misworded - I meant the amount of information actually encoded in the synapses, after advanced compression. As I said before, synapses in NNs are enormously redundant, such that trivial compression dramatically reduces the storage requirements. For the amount of memory/storage needed to represent a human-mind-level sim, we get the estimate range of 10^10 to 10^14 discussed earlier. However, a great deal of this will be redundant across minds, so the amount required to specify the differences of one individual will be even less.

Right. Well, I have these values, and I am not alone. Most people's values will also change in the era of AGI, as most people haven't thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above-average influence and wealth.

Your side discussion about your distant relatives suggests you don't foresee how this is likely to come about in practice (which really is my fault, as I haven't explained it in this thread, although I have discussed bits of it previously). It isn't about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds - some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large-scale inference. This demand just continues to grow, and it ties into the pervasive virtual-world heaven tech that uploads want for other reasons. In short order everyone in the world has proof that virtual heaven is real and that uploading works. The world changes, and uploading becomes the norm. We become an em society. Someone creates a real Harry Potter sim, and when Harry enters the 'real' world above he then wants to bring back his fictional parents. So it goes.

Then the next step is insurance for the living. Accidents can destroy or damage your brain - why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridi
0Lumifer8y
And how does that follow?
1gjm8y
"Follow" is probably an exaggeration since this is pretty handwavy, but: First of all, a clarification: I should really have written something like "We are more likely accurate ancestor-simulations ..." rather than "We are more likely simulations". I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show. Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims. So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not. So, if B then some instances of us are probably ancestor-sims, and if A then probably not. So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A). Extreme case: if we somehow know not A but the much stronger A': "A society just like ours will never lead to any sort of ancestor-sims" then we can be confident of not being accurate ancestor-sims. (I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn't tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears') descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
-1Lumifer8y
Consider a different argument. Our world is either simulated or not. If our world is not simulated, there's nothing we can do to make it simulated; we can work towards other simulations, but those aren't us. If our world is simulated, we are already simulated, and there's nothing we can do to increase our chance of being simulated because it's already so.
3gjm8y
That might be highly relevant[1] if I'd made any argument of the form "If we do X, we make it more likely that we are simulated". But I didn't make any such argument. I said "If societies like ours tend to do X, then it is more likely that we are simulated". That differs in two important ways. [1] Leaving aside arguments based on exotic decision theories (which don't necessarily deserve to be left aside but are less obvious than the fact that you've completely misrepresented what I said).
-5Lumifer8y
2crmflynn8y
I am guessing you two-box in the Newcomb paradox as well, right? If you don't, then take a second to realize you are being inconsistent. If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. That does not mean they are right; it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a kind of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist.

Their existence means that they will ignore or disbelieve the claim that "there's nothing we can do to increase our chance of being simulated", just as they ignore the second box. If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the "type of species" that builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but those actions will still have evidential value.
-1Lumifer8y
Yes, of course. I don't think this is true; the correct version is your following sentence. People on LW, of course, are not terribly representative of people in general. I agree that such people exist.

Hold on, hold on. What is this "type of species" thing? What types are there, and what are our options? Nope, sorry, I don't find this reasoning valid. Still nope.

If you think that people wishing to be in a simulation has "evidential value" for the proposition that we are in a simulation, for what proposition does belief in, say, Jesus or astrology have "evidential value"? Are you going to cherry-pick "right" beliefs and "wrong" beliefs?
2crmflynn8y
LW is not really my personal sample for this. I have spent about a year working this into conversations, and my sense is that something like 2/3 of people two-box. Nozick, who popularized the problem, said he thought it was about 50/50. While it is again not representative, among the roughly thousand people who answered the question in this survey (http://philpapers.org/surveys/results.pl) the split was about equal; for people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there were a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that a substantial minority of people would one-box.

Okay. Can you see how they might take the approach I have suggested? And if yes, can you concede that there might be people who want to build simulations in the hope of being in one, even if you think it is foolish?

As a turn of phrase, I was referring to two types: one that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars; they are expressing a desire to be "that type of species." Not sure what confused you here.

If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem) and are woken up during the week, what is your credence that the coin came up tails? How do you decide between the doors in the Monty Hall problem? I am not asking you to think that the actual odds have changed in real time; I am asking you to adjust your credence based on new information. The order of cards in the deck has not changed, but now you know which ones have been discarded. If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that yo
0Lumifer8y
Interesting. Not what I expected, but I can always be convinced by data. I wonder to what degree religiosity plays a part -- Omega is basically God, so do you try to contest His knowledge?

Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster -- so what?

My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of an intelligent species. That is very much insufficient to start talking about "types" -- one can certainly imagine them, but that has nothing to do with reality.

Which new information? Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?

You are conflating two very important concepts here, namely "present" and "future". People believing in Islam are very relevant to the chances of a future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.

Correct. My belief is that it IS possible that we live in a simulation, but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it's not affecting any decisions I'm making. I still don't see why the number of one-boxers around should cause me to update this probability to anything more significant.
0crmflynn8y
By analogy, what are some things that decrease my credence that humans will survive to a "post-human stage"? For me, some are: 1) we seem terrible at coordination problems at a policy level, 2) we are not terribly cautious in developing new, potentially dangerous, technology, and 3) some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular - since they are literally trying to end the world - it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It's Bayesianism.

For another analogy, my credence in the idea that "NYC will be hit by a dirty bomb in the next 20 years" was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people's ideas did not change anything; however, their ideas are causally relevant, and my knowledge of this factor increases my credence in that possibility.

For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is the queen of hearts? 1/52. Now let's say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is the queen of hearts? 1/50. Now let's say I go through the next 25 cards and none of them is the queen of hearts. What is my credence now? 1/25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location.

I do want to clarify, though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like t
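(The card example above, written out as explicit conditioning - nothing beyond the arithmetic already in the comment.)

    # The bottom card is fixed; only our information about it changes.
    unknown = 52
    print(1 / unknown)   # ~0.019 = 1/52: credence the bottom card is the queen of hearts

    unknown -= 2         # two top cards revealed, neither is the queen of hearts
    print(1 / unknown)   # 0.02 = 1/50

    unknown -= 25        # 25 more cards revealed, still no queen of hearts
    print(1 / unknown)   # 0.04 = 1/25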
0Lumifer8y
Why do you talk in terms of credence? In Bayesianism your belief about how likely something is is just a probability, so we're talking about probabilities, right? Sure, OK.

Aren't you doing some rather severe privileging of the hypothesis? The world has all kinds of people. Some want to destroy the world (and that should increase my credence that the world will get destroyed); some want electronic heavens (and that should increase my credence that there will be simulated heavens); some want to break out of the circle of samsara (and that should increase my credence that any death will be truly final); some want a lot of beer (and that should increase my credence that the future will be full of SuperExtraSpecialBudLight), etc. etc. And as Egan's Law says, "It all adds up to normality". I think you're being very Christianity-centric, and Christians are only, what, about a third of the world's population?

I still don't know why people would create imprecise simulations of those who lived and died long ago. Locate this statement on a timeline: let's go back a couple of hundred years - do humans want to make simulations of humans? No, they don't. Things change and eternal truths are rare. The future is uncertain, and judgements about what people of the far future might or might not want to do are not reliable.

Easily enough. You assume -- for no good reason known to me -- that a simulation must mimic the real world to the best of its ability. I don't see why this should be so. A petri dish, in a way, is a controlled simulation of, say, the growth and competition between different strains of bacteria (or yeast, or mold, etc.). Imagine an advanced (post-human or, say, alien) civilization doing historical research through simulations, running A/B tests on 21st-century human history: if we change X, will history go in the Y direction? Let's see. That's a petri dish -- or a video game, take your pick.

That's not a comforting thought. From what I know about human nature, people wi
0jacob_cannell8y
Not quite. In the sim case, we along with our world exist as multiple copies - one original along with some number of sims. It's really important to make this distinction; it totally changes the relevant decision theory. No - because we exist as a set of copies which always take the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.
0Lumifer8y
Whether it's "not quite" or "yes, quite" depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don't. Do you state this as a fact?
0jacob_cannell8y
Actually the sim argument doesn't depend on fuzzy, smeared-out identity. The copy issue is orthogonal, and it arises in any type of multiverse; it is a given in the sim scenario. I said this in reply to your statement that "there's nothing we can do to make it simulated". The statement is incorrect because we are uncertain about our true existential state - and moreover, we have the power to change that state. The first, original version of ourselves can create many other copies.
0Lumifer8y
If identity isn't smeared, then our world -- our specific world -- is either simulated or not. Uncertainty doesn't grant the power to change the status from not-simulated to simulated.
1jacob_cannell8y
Sure. But we don't know which copy we are, and all copies make the same decisions. Each individual copy is either simulated or not, and nothing each individual copy does can change that - true. However, all of the copies output the same decisions, and each copy cannot determine its true existential status. So the uncertainty is critically important, because the distribution itself can be manipulated by producing more copies. By creating simulations in the future, you alter the distribution by creating more sim copies, such that it is more likely that one has been a sim the whole time. Draw out the graph and perhaps it will make more sense. It doesn't actually violate physical causality - the acausality is only relative, an (intentional) illusion due to lack of knowledge.
0Lumifer8y
All copies might make the same decisions, but the originals make different decisions. Remember how upthread you talked about copies being relative and imperfect images of the originals? This means that the set of copies and the singleton of originals are different.
0jacob_cannell8y
As individual variants they may have slight differences (less so for more advanced sims constructed later), but that doesn't matter. The 'decision' we are talking about here is an abstract, high-level decision or belief concerning whether one will support the construction of historical sims (financially, politically, etc.). The numerous versions of a person might occasionally make different decisions here and there - exactly which word to use, and so on - but they will (necessarily, by design) agree on major life decisions. I never said "imperfect images" - that's your beef.
0Lumifer8y
Let me quote you: Given all this I can't see how you insist that copies make the same decisions as originals. In fact, in your quote you even have different copies making different decisions ("multiple versions").
0jacob_cannell8y
The different versions arise from multiverse considerations. The obvious basic route to sim capture is recreating very close copies that experience everything we remember having experienced - a recreation of our exact specific historical timeline/branch. But even recreating other versions corresponding to other nearby branches in the multiverse could work, and is potentially more computationally efficient. The net effect is the same: it raises the probability that we exist in a sim created by some other version/branch.

So there are two notions of historical 'accuracy': accuracy in terms of an exact match with a specific timeline, and accuracy in terms of matching only samples from the overall multiverse distribution. Success only requires a high total probability that we are in a sim; it doesn't matter much which specific historical timeline creates the sim. The idea of decision agreement still applies across different versions in the multiverse. It doesn't require exact agreement on every micro-decision, only general agreement on the key decisions involving sim creation.
0jacob_cannell8y
Knowledge of which decisions we actually make is information on which we can update our worldviews. Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.
1Lumifer8y
What do you mean, "works in practice"?
0crmflynn8y
What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, they should run simulations that conform to that. What the people "above" us, if they exist, believe absolutely does change reality. What Omega believes changes reality; people one-box anyway. Who the Calvinist God has allegedly predestined determines reality; people go to church, pray, etc. anyway. If we are "the type of species" that builds simulations we would like to be in, we are much more likely to be a species that, by and large, inhabits simulations we want to be in.
-2Lumifer8y
And so we are back to the idea of gods.
1jacob_cannell8y
Sure - and nothing wrong with that.

This thread did much to clarify to me why some people consider LW a cult.

2DanArmak8y
That observation isn't useful to others unless you share your insights.
0Lumifer8y
In this case I prefer to wave my hand in the general direction of this thread and let readers find (or not) their own evidence.