Comment author: jacob_cannell 04 November 2015 12:45:40AM *  1 point [-]

This has basically been my belief system for a while - we could call it simulism perhaps. These memes are also old. Tipler proposed the whole 'simulation implementing afterlife' idea a few decades ago, although his particular implementation ideas involved emulations at the end of time and questionable physics. Despite that, the general idea of mind uploading into a virtual afterlife appears to be pretty mainstream now in transhumanist thought (i.e. the Turing Church).

I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators, 'saving' souls, etc.)

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going.

This could be a good conversational hook, but technically I am not so certain this is true. In general, the key to the afterlife is more likely something like "do that which your future descendants/simulators would most reward you for", which has much in common with "do god's will". If you believe that global x-risks are large and you could have a large impact there, then sure, that has very high value. But assessing global x-risks is difficult.

Also, minimizing x-risk is not the same as maximizing future utility. For example, there are many potential scenarios where very little of the potential sim capacity is used, even though they aren't x-risk style disasters. There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.

Comment author: crmflynn 04 November 2015 02:19:09AM 0 points [-]

When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility?

I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his. Also, I am not really trying to promote it so much as to feel it out. I have not yet found any reason to think I am wrong about it being a possibility, though I myself do not “feel” it to be likely. That said, with stuff like this, I have no sense that intuitions would tell me anything useful either.

“Despite that, the general idea of mind uploading into virtual afterlife appears to be pretty mainstream now in transhumanist thought (ie Turing Church).”

Yeah, it comes up in “Superintelligence” and some other things I have read too. The small difference, if there is one, is that this looks backwards, and could be a way to collect those who have already died, and also could be a way to hedge bets for those of us who may not live long enough for transhumanism. It also circumvents the teletransportation paradox and other issues in the philosophy of identity. Also, even when not being treated as a goal, it seems to have evidential value. Finally, there are some acausal trade considerations, and considerations with “watering down” simulations through AI “thought crimes,” that can be considered once this is brought in. I will probably post more of my tentative thoughts on that later.

“I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators, 'saving' souls, etc.)”

The idea of posting about something that is unpopular on such an open-minded site is one of the things that makes me scared to post online. Whether it is transhumanism, AI risk (“like the Terminator?”), or one-boxing Newcomb's Paradox, LW seems pretty good at getting past some initial discomfort to dig deeper. I had actually once heard a really short thing about “The Singularity” on the radio, which could have been a much earlier introduction to all this, but I sort of blew it off. Stuff like my past flippancy makes me inclined to try to avoid trusting my gut, and superficial reasons to ignore something, and to try to take a really careful approach to deconstructing arguments. I am also atheist, and grew up very religiously Christian, so I think I also have a strong suspicion of, and aversion to, its approach. But again, I try not to let superficial or familial similarity to things interrupt a systematic approach to reality. I am currently trying to transition from doing on-the-ground NGO work in developing countries in order to work on this stuff. My gut hates this, and my availability bias is doing backflips, but I think that this stuff might be too important to take the easy way out of it.

Also, your point about the hook is absolutely correct. I was sort of trying to imitate the “catchy” salon/huffpost/buzzfeed headline that would try to draw people in. “Ten Ways Atheists Go to Heaven, You Won’t Believe #6!” It was also meant a bit self-deprecatingly.

“There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.”

Yeah, there is a lot here. What is so weird about the second disjunct is that it means that we sort of do this or fail at this as a group. And it means that, while lying on my deathbed, my evaluation of how well we are doing as a species is going to play directly into my credence of what, if anything, comes next. It’s strange, isn’t it? That said, it is also interesting that, even if we somehow knew that existential risk would not be a problem in our lifetime, with this, there is a purely selfish reason to donate to FHI/MIRI. In fact, with the correct sense of scale, with high enough odds and marginal benefit to donations, it could be the economically rational thing to do.

Comment author: Kyre 03 November 2015 05:12:01AM 6 points [-]

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more similar minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have interesting, solid arguments ... please think carefully about whether such publicity helps the existential risk movement more than it harms. On the plus side, you might get people thinking about existential risk who otherwise would not have. On the minus side, most people aren't going to understand what you write, and some of the ones that half-understand it are going to loudly proclaim it as more evidence that MIRI etc. are full of insane apocalyptic cultists.

Comment author: crmflynn 04 November 2015 01:41:29AM 4 points [-]

I would not worry about that for three reasons: 1) I am very shy online. Even posting this took several days and I did not look at the comments for almost a day after. 2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further. 3) I would never identify myself with MIRI, etc. not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed should that somehow become a problem. That said, thank you for bringing this up as a concern. I had already thought about it, which is one of the reasons I was mentioning it as a tentative consideration for more deliberation by other people. That said, had I not, it could have been a problem. A lot of stuff in this area is really sensitive, and needs to be handled carefully. That is also why I am nervous to even post it.

All of that said, I think I might make another tentative proposal for further consideration. I think that some of these ideas ARE worth getting out there to more people. I have been involved in international NGO work for over a decade, studied it at university, and have lived and worked in half a dozen countries doing this work, and had no exposure to Effective Altruism, FHI, Existential Risk, etc. I hang out in policy/law/NGO circles, and none of my friends in these circles talk about it either. These ideas are not really getting out to those who should be exposed to them. I found EA/MIRI/Existential Risk through the simulation argument, which I read about on a blog I found off of reddit while clicking around on the internet about a year ago. That is kind of messed up. I really wish I had stumbled onto it earlier, and I tentatively think there is a lot of value in making it easier for others to stumble onto it in the future. Especially policy/law types, who are going to be needed at some point in the near future anyway.

I also feel that the costs of people thinking that people have “weird ideas” should probably be weighed against the benefits of flying the flag for other like-minded people to see. For the most part, people not liking other people is not much different from them not knowing about them, but having allies and fellow-travelers adds value. It is more minds to attack difficult problems from more angles, more policy makers listening when it is time to make some proposals, and more money finding its way into MIRI/FHI/etc. It might be worth trying to make existential risk a more widely known concern, a bit like climate change. It would not necessarily even have to water down LW, as it could be that those interested in the LW approach will come here, and those from other backgrounds, especially less technical backgrounds, find lateral groups. In climate change now, there are core scientists, scientists who dabble, and a huge group of activist types/policy people/regulators with little to no interest in the science who are sort of doing their own thing laterally to the main guys.

Comment author: Lumifer 03 November 2015 06:34:13PM *  1 point [-]

I am not sure what is the take-away from this idea. If it is

should increase credence that we exist in such a simulation and should perhaps expect a heaven-like afterlife of long, though finite, duration

then, well, increasing credence from 0.0...001% to 0.0...01% is a jump by an order of magnitude, but it still doesn't move the needle leaving the probability in the "vanishingly small" realm.

If it is that we should strive to build such simulations, there are a few issues with this call to action, starting with the observation that at our technological level there isn't much we can do right now, and ending with the warning that if many people want to build Heavens, some people will want to build Hells as well.

Comment author: crmflynn 04 November 2015 01:16:42AM 2 points [-]

Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.

I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis, where I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” shows some odd properties of evidence. It is a bit like if I were standing outside of the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased by ~$1,000; if I see someone walk out with one unopened box, I expect it has increased by ~$1,000,000. That is sort of odd, isn’t it? If I see a small, dedicated group of people working on how they would structure simulations, and raising money and trusts to push it a certain political way in the future (laws requiring all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information-gathering purposes without “retiring” the people to a heaven afterward, etc.), I have more reason to think I might get a heaven after I die.
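
For concreteness, the Omega-box observation can be put in a few lines of Python. This is only an illustrative sketch: the standard Newcomb payoffs are from the thought experiment, but the 99% predictor accuracy is an assumption I am adding for the sake of the example.

```python
# Evidential reading of the Omega scenario: seeing how many boxes someone
# carries out of the room tells you what to expect is inside them.
# Standard Newcomb payoffs; the 99% predictor accuracy is illustrative.

def expected_wealth(boxes_taken, predictor_accuracy=0.99):
    """Expected winnings given only the observed choice (evidential view)."""
    if boxes_taken == 1:
        # The predictor very likely foresaw one-boxing and filled the opaque box.
        return predictor_accuracy * 1_000_000
    else:
        # Two-boxers get the transparent $1,000 plus a slim chance the
        # opaque box was filled anyway.
        return 1_000 + (1 - predictor_accuracy) * 1_000_000

print(expected_wealth(1))  # ~990,000
print(expected_wealth(2))  # ~11,000
```

The oddity the comment points at is exactly this: merely observing the number of boxes shifts the expectation by almost three orders of magnitude.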

As far as the “call to action” goes, I hope that my post was not really read that way. I might have been clearer, and I apologize. I think that running simulations followed by an afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done, for many reasons. It is worth discussing. One could also imagine that it might be determined, if we overcome and survive the AI intelligence explosion with a good outcome, that it is a worthwhile goal to create more human lives, which are pleasant, throughout our cosmological endowment. Sending off von Neumann probes to build simulations like this might be a live option. Honestly, it is an important question to figure out what we might want from a superintelligent AI, and especially whether we might want to not just hand it the question. Coherent extrapolated volition sounds like a best tentative idea, but one we need to be careful with. For example, AI might only be able to produce such a “model” of what we want by running a large number of simulated worlds (to determine what we are all about). If we want simulated worlds to end with a “retirement” for the simulated people in a pleasant afterlife, we might want to specify it in advance; otherwise we are inadvertently reducing the credence we have of our own afterlife as well. Also, if there is an existent acausal trade regime on heaven simulations (this will be another post later) we might get in trouble for not conforming in advance.

As for simulated hells, I think that fear of this possibility keeps the simulated heaven issue even more alive. Someone who would like a pleasant afterlife (which is probably almost all of us) might want to take efforts early to secure that such an afterlife is the norm in cases of simulation, and “hell” absolutely not permitted. Also, the idea that some people might run bad afterlives should probably further motivate people to try to also create as many good simulations as possible, to increase credence that “we” are in one of the good ones. This is like pouring white marbles into the urn to reduce the odds of drawing the black one. You see why the “loop” aspect of this can be kind of interesting, especially for one-boxer types, who try to “act out” the correct outcome after the fact. For one-boxers, this could be, from a purely and exclusively selfish perspective, the best thing they could possibly do with their life. Increasing the odds of a trillion-life-duration afterlife of extreme utility from 0.001 to 0.01 might be very selfishly rational.
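
The urn analogy can be made concrete with a toy calculation. All the counts below are illustrative assumptions, not claims about actual simulation demographics; the point is just that adding "white marbles" (good simulations) dilutes the self-location probability of being in a bad one.

```python
# Urn model of self-location: each benevolent ("heaven") simulation is a
# white marble, each malicious ("hell") simulation a black one. An observer
# ignorant of which world they are in is a random draw from the urn.
# Counts are purely illustrative.

def p_bad_draw(good_sims, bad_sims):
    """Chance that a randomly self-located observer is in a bad simulation."""
    return bad_sims / (good_sims + bad_sims)

print(p_bad_draw(10, 1))     # ~0.09
print(p_bad_draw(1000, 1))   # ~0.001 -- same single hell, far more diluted
```

Under this (very simplified) model, creating more good simulations strictly lowers the probability assigned to the bad draw, which is the motivation the paragraph describes.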

I am not trying to "sell" this, as I have not even bought it myself, I am just sort of playing with it as a live idea. If nothing else, this seems like it might have some importance on considerations going forward. I think that people’s attitudes and approaches to religion suggest that this might be a powerful force for human motivation, and the second disjunct of the simulation argument shows that human motivation might have significant bearing both on our current reality, and on our anticipated future.

Newcomb, Bostrom, Calvin: Credence and the strange path to a finite afterlife

7 crmflynn 02 November 2015 11:03PM

This is a bit rough, but I think that it is an interesting and potentially compelling idea. To keep this short, and accordingly increase the number of eyes over it, I have only sketched the bare bones of the idea. 

     1)      Empirically, people have varying intuitions and beliefs about causality, particularly in Newcomb-like problems (http://wiki.lesswrong.com/wiki/Newcomb's_problem, http://philpapers.org/surveys/results.pl, and https://en.wikipedia.org/wiki/Irresistible_grace).

     2)      Also, as an empirical matter, some people believe in taking actions after the fact, such as one-boxing, or Calvinist “irresistible grace”, to try to ensure or conform with a seemingly already determined outcome. This might be out of a sense of retrocausality, performance, moral honesty, etc. What matters is that we know that they will act it out, despite it violating common sense causality. There has been some great work on decision theory on LW about trying to thread this needle well.

     3)      The second disjunct of the simulation argument (http://wiki.lesswrong.com/wiki/Simulation_argument) shows that the decision making of humanity is evidentially relevant to what our subjective credence should be that we are in a simulation. That is to say, if we are actively headed toward making simulations, we should increase our credence of being in a simulation; if we are actively headed away from making simulations, through either existential risk or law/policy against it, we should decrease our credence.

      4)      Many, if not most, people would like for there to be a pleasant afterlife after death, especially if we could be reunited with loved ones.

     5)      There is no reason to believe that simulations that are otherwise nearly identical copies of our world could not contain, after the simulated bodily death of the participants, an extremely long-duration, though finite, "heaven"-like afterlife shared by simulation participants.

     6)      Our heading towards creating such simulations, especially if they were capable of nesting simulations, should increase credence that we exist in such a simulation and should perhaps expect a heaven-like afterlife of long, though finite, duration.

     7)      Those who believe in alternative causality, or retrocausality, in Newcomb-like situations should be especially excited about the opportunity to push the world towards surviving, allowing these types of simulations, and creating them, as it would potentially suggest, analogously, that if they work towards creating simulations with heaven-like afterlives, they might in some sense be “causing” such a heaven to exist for themselves, and even for friends and family who have already died. Such an idea of life after death, and especially of being reunited with loved ones, can be extremely compelling.

     8)      I believe that people matching the above description, that is, holding both an intuition in alternative causality, and finding such a heaven-like-afterlife compelling, exist. Further, the existence of such people, and their associated motivation to try to create such simulations, should increase the credence even of two-boxing types, that we already live in such a world with a heaven-like afterlife. This is because knowledge of a motivated minority desiring simulations should increase credence in the likely success of simulations. This is essentially showing that “this probably happened before, one level up” from the two-box perspective.

     9)      As an empirical matter, I also think that there are people who would find the idea of creating simulations with heaven-like afterlives compelling, even if they are not one-boxers, from a simply altruistic perspective: both because it is a nice thing to do for the future sim people, who can, for example, probabilistically have a much better existence than biological children on earth can, and because it is a nice thing to do to increase the credence (and emotional comfort) of both one-boxers and two-boxers in our world thinking that there might be a life after death.

     10)   This creates the opportunity for a secular movement in which people work towards creating these simulations, and use this work and potential success to derive comfort and meaning from their lives. For example, making donations to a simulation-creating, simulation-promoting, or existential-threat-avoiding think tank after a loved one’s death, partially symbolically, partially hopefully.

     11)   There is at least some room for Pascalian considerations even for two-boxers who allow for some humility in their beliefs. Nozick believed one-boxers would become two-boxers if the amount in Box A were raised to $900,000, and two-boxers would become one-boxers if it were lowered to $1. Similarly, trying to work towards these simulations, even if you do not find it altruistically compelling, and even if you think that the odds of alternative or retrocausality are infinitesimally small, might make sense in that the reward could be extremely large, including potentially trillions of lifetimes worth of time spent in an afterlife “heaven” with friends and family.
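
The Pascalian arithmetic in point 11 can be sketched in a few lines. Every number here is an illustrative assumption (the probabilities are the ones floated earlier in the thread; the trillion-lifetime payoff is the "trillions of lifetimes" figure taken at face value), so this is a back-of-the-envelope sketch, not an endorsement of the estimates.

```python
# Rough Pascalian expected-utility sketch, with utility measured in
# ordinary-lifetime equivalents. All inputs are illustrative assumptions.

LIFETIME_UTILITY = 1.0        # one ordinary lifetime, normalized
AFTERLIFE_LIFETIMES = 1e12    # "trillions of lifetimes" in the simulated heaven

def expected_gain(p_before, p_after):
    """Expected utility gained by raising the afterlife probability."""
    return (p_after - p_before) * AFTERLIFE_LIFETIMES * LIFETIME_UTILITY

# Moving the probability from 0.001 to 0.01, as in the earlier comment,
# is worth roughly nine billion lifetime-equivalents in expectation.
print(expected_gain(0.001, 0.01))
```

The usual caveats about Pascalian reasoning (tiny probabilities multiplied by enormous payoffs) apply with full force; the sketch only shows why the argument has pull if one accepts the inputs.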

Finally, this idea might be one worth filling in (I have been, in my private notes for over a year, but am a bit shy to debut that all just yet; even working up the courage to post this was difficult) if only because it is interesting, and could be used as a hook to get more people interested in existential risk, including the AI control problem. This is because existential catastrophe is probably the best enemy of credence in the future of such simulations, and accordingly in our reasonable credence in thinking that we have such a heaven awaiting us after death now. A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more like-minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade, and game theory between higher and lower worlds, as a form of “compulsion” in which they punish worlds for not creating heaven-containing simulations (thereby affecting their credence as observers of the simulation), in order to reach an equilibrium in which simulations with heaven-like afterlives are universal, or nearly universal. More on that later if this is received well.

Also, if anyone would like to join with me in researching, bull sessioning, or writing about this stuff, please feel free to IM me. Also, if anyone has a really good, non-obvious pin with which to pop my balloon, preferably in a gentle way, it would be really appreciated. I am spending a lot of energy and time on this if it is fundamentally flawed in some way.

Thank you.

*******************************

November 11 Updates and Edits for Clarification

     1)      There seems to be confusion about what I mean by self-location and credence. A good way to think of this is the Sleeping Beauty Problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem)

If I imagine myself as Sleeping Beauty (and who doesn’t?), and I am asked on Sunday what my credence is that the coin will be tails, I will say 1/2. If I am awakened during the experiment without being told which day it is and am asked what my credence is that the coin was tails, I will say 2/3. If I am then told it is Monday, I will update my credence to 1/2. If I am told it is Tuesday, I update my credence to 1. If someone asks me two days after the experiment about my credence of it being tails, and I somehow still do not know the days of the week, I will say 1/2. Credence changes with where you are, and with what information you have. As we might be in a simulation, we are somewhere in the “experiment days” and information can help orient our credence. As humanity potentially has some say in whether or not we are in a simulation, information about how humans make decisions about these types of things can and should affect our credence.
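
The 2/3 "awakening" credence above can be checked with a quick Monte Carlo sketch (assuming the standard setup: heads gives one awakening, tails gives two, and the thirder's counting of awakenings rather than coin flips):

```python
import random

# Monte Carlo check of the thirder credence in the Sleeping Beauty problem.
# Each trial: flip a fair coin; heads -> one awakening (Monday only),
# tails -> two awakenings (Monday and Tuesday). We then ask: across all
# awakenings, what fraction occur in tails-worlds?

def sleeping_beauty_credence(trials=100_000, seed=0):
    rng = random.Random(seed)
    tails_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        tails = rng.random() < 0.5
        awakenings = 2 if tails else 1
        total_awakenings += awakenings
        if tails:
            tails_awakenings += awakenings
    return tails_awakenings / total_awakenings

print(sleeping_beauty_credence())  # ~0.667, i.e. the 2/3 answer above
```

The same counting move is what the simulation version leans on: if most observer-moments "like ours" occur inside simulations, a self-locating observer should weight credence by observer-moments, not by worlds.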

Imagine Sleeping Beauty is a lesswrong reader. If Sleeping Beauty is unfamiliar with the simulation argument, and someone asks her about her credence of being in a simulation, she probably answers something like 0.0000000001% (all numbers for illustrative purposes only). If someone shows her the simulation argument, she increases to 1%. If she stumbles across this blog entry, she increases her credence to 2%, and adds some credence to the additional hypothesis that it may be a simulation with an afterlife. If she sees that a ton of people get really interested in this idea, and start raising funds to build simulations in the future and to lobby governments both for great AI safeguards and for regulation of future simulations, she raises her credence to 4%. If she lives through the AI superintelligence explosion and simulations are being built, but not yet turned on, her credence increases to 20%. If humanity turns them on, it increases to 50%. If there are trillions of them, she increases her credence to 60%. If 99% of simulations survive their own run-ins with artificial superintelligence and produce their own simulations, she increases her credence to 95%. 

2)  This set of simulations does not need to recreate the current world or any specific people in it. That is a different idea that is not necessary to this argument. As written, the argument is premised on the idea of creating fully unique people. The point would be to increase our credence that we are functionally identical in type to the unique individuals in the simulation. This is done by creating ignorance or uncertainty in simulations, so that the majority of people similarly situated, in a world which may or may not be in a simulation, are in fact in a simulation. This should, in our ignorance, increase our credence that we are in a simulation. The point is about how we self-locate, as discussed in the original article by Bostrom. It is a short 12-page read, and if you have not read it yet, I would encourage it: http://simulation-argument.com/simulation.html. The point about past loved ones I was making was to bring up the possibility that the simulations could be designed to transfer people to a separate afterlife simulation where they could be reunited after dying in the first part of the simulation. This was not about trying to create something for us to upload ourselves into, along with attempted replicas of dead loved ones. This staying in one simulation through two phases, a short life and a relatively long afterlife, also has the advantage of circumventing the teletransportation paradox, as “all of the person” can be moved into the afterlife part of the simulation.

 

Comment author: crmflynn 02 November 2015 02:30:20AM 4 points [-]

I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.

I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to find opportunities to do high-impact work. This was based around a vague and undertheorized consequentialism that has been pretty substantially rethought after finding FHI/MIRI/EA/LW etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost effectiveness comparisons between NGOs, etc.) I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think that this stuff is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and into thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is sort of difficult, as I have a lot of experiential “availability” from working on the ground in poor countries which pulls on my biases, especially when faced with a lot of abstraction as the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming intuitions and the easily available answer.

I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing in human decision making (or our coherent extrapolated volition, etc.) into the process, sort of indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning if we clear the existential hurdle, this is seemingly the next thing between us and the likely truth of being in a simulation. I actually have a very short write-up on this which I will post in the discussion area when I have sufficient karma (2 points, so probably soon…) I also have much longer notes on a lot of related stuff which I might turn into posts in the future if, after my first short post, this is interesting to anyone.

I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.

Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.
