Comment author: SodaPopinski 06 November 2015 05:30:42PM 3 points

This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.

To boil it down to a simple thought experiment: suppose I am in the future, where we have a ton of computing power, and I know something bad will happen tomorrow (say I'll be fired) barring some 1/1000-likelihood quantum event. No problem: I'll just make millions of simulations of the world with me in my current state, in each of which the 1/1000 event happens tomorrow, and I'm saved, since I'm almost certainly in one of these simulations I'm about to make!
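Under a self-sampling assumption (you are equally likely to be any observer in your exact epistemic state), the arithmetic behind that confidence is a one-line ratio. A rough sketch -- the function and the numbers are illustrative, and the anthropics itself is contested:

```python
# One "base-level" you, saved only if the 1/1000 quantum event fires,
# plus n_sims simulated copies of your current state, all of which are saved.
def p_saved(n_sims, p_event=1/1000):
    # Chance that a randomly sampled observer in your epistemic state is saved.
    saved = n_sims + p_event   # every sim is saved; the base you with prob p_event
    total = n_sims + 1
    return saved / total

print(p_saved(0))          # 0.001 -- no simulations, just the raw quantum odds
print(p_saved(1_000_000))  # ~0.999999 -- almost certainly a saved copy
```

Whether making the simulations changes the likelihood, or merely your credence, is of course exactly the two-boxer/one-boxer split.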

Comment author: crmflynn 10 November 2015 09:27:38AM 0 points

“This is a really fascinating idea, particularly the aspect that we can influence the likelihood we are in a simulation by making it more likely that simulations happen.”

Maybe? We can increase our credence, but I think whether or not it increases the likelihood is an open question. The intuitions seem to split between two-boxers and a subset of one-boxers.

That said, thank you for the secondary thought experiment, which is really interesting.

Comment author: gjm 04 November 2015 04:50:25PM 2 points

“What you -- or everyone -- believes does not change the reality.”

It can give evidence, though. Consider Hypothesis A: "Societies like ours will generally not decide, as their technological capabilities grow, to engage in massive simulation of their forebears" and Hypothesis B which omits the word "not". Then:

  • The decisions made by, and ideas widely held in, our society, can be evidence favouring A or B.
  • We are more likely simulations if B is right than if A is right.

Similarly if the hypotheses are "... to engage in massive simulation of their forebears, including blissful afterlives", in which case we are more likely to have blissful simulated afterlives if B is right than if A is right. (Not necessarily more likely to have blissful afterlives simpliciter, though -- perhaps, e.g., the truth of B would somehow make it less likely that we get blissful afterlives provided by gods.)

My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.

Comment author: crmflynn 05 November 2015 01:37:35PM 0 points

“My opinion, for what it's worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.”

I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.

I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have:

  1) Issues with the theory of identity
  2) Issues with the theory of mind
  3) Issues with the theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, the problem of evil)
  4) Self-interest (more resources for existing individuals to upload into and utilize)
  5) The existence of a convincing two-boxer “proof” of some sort

I also would like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly? I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?

Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.

Comment author: Lumifer 05 November 2015 05:04:55AM -1 points

Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we can do to make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.

Comment author: crmflynn 05 November 2015 01:12:08PM 1 point

“Consider a different argument.

Our world is either simulated or not.

If our world is not simulated, there's nothing we can do to make it simulated. We can work towards other simulations, but that's not us.

If our world is simulated, we are already simulated and there's nothing we can do to increase our chance of being simulated because it's already so.”

I am guessing you two-box in the Newcomb paradox as well, right? If you don’t, then you might take a second to realize you are being inconsistent.

If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right, it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a type of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that “there’s nothing we can do to increase our chance of being simulated” like they ignore the second box.

If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but those actions will have evidential value still.

Comment author: Lumifer 04 November 2015 04:02:38PM -1 points

“Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations.”

Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it with people like you, that is, fresh people de novo and possibly people from your parents' and grandparents' generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?

“it should increase our credence in an afterlife”

No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.

Comment author: crmflynn 05 November 2015 12:55:57PM *  0 points

“Simulations of long-ago ancestors..?

Imagine that you have the ability to run a simulation now. Would you want to populate it with people like you, that is, fresh people de novo and possibly people from your parents' and grandparents' generations -- or would you want to populate it with Egyptian peasants from 3000 B.C.? Homo habilis, maybe? How far back do you want to go?”

What the simulation would be like depends entirely on the motivation for running it. That is actually sort of the point of the post. If people want to be in a certain kind of simulation, they should run simulations that conform with that.

“No, I don't think so. You're engaging in magical thinking. What you -- or everyone -- believes does not change the reality.”

What the people “above” us, if they exist, believe absolutely does change reality.

What Omega believes changes reality. People one-box anyway.

Who the Calvinist God has allegedly predestined determines reality. People go to church, pray, etc. anyway.

If we are “the type of species” who builds simulations that we would like to be in, we are much more likely to be a species by-and-large who inhabits simulations which we want to be in.

Comment author: Lumifer 04 November 2015 03:55:04PM *  1 point

“I do not think it is likely, but I think it is interesting”

Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".

“idea ... shows some odd properties of evidence.”

I would treat it as a category error: ideas are not evidence. Even if they look "evidence-like".

“Also, the idea that some people might run bad afterlives should probably further motivate people to try to also create as many good simulations as possible, to increase credence that “we” are in one of the good ones.”

Why would future superpowerful people be interested in increasing your credence?

Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead of leaving us in doubt?".

Comment author: crmflynn 05 November 2015 12:46:33PM 1 point

I think you and I might be missing one another. Or that I am at least missing your point. Accordingly, my responses below might be off point. Hopefully they are not.

“Keep in mind that the "simulation hypothesis" is also known as "creationism". In particular it implies that there are beings who constructed the simulation, who are not bound by its rules, and who can change it at will. The conventional name for such beings is "gods".”

I don’t think that necessarily follows. Creationism implies divinity, and gods implies something bigger than people who build a machine. Are your parents gods for creating you? In my own estimate, creating a simulation is like founding a sperm bank; you are not really “creating” anything, you are just moving pieces around in a way that facilitates more lives. You can mess around with the life and the world, but so can anyone in real life, especially if they have access to power, or guns, or a sperm bank, again, for that matter. It is different in scale, but not in type. Then again, I might be thinking too highly of “gods”?

Also, I get the impression, and apologies if I am wrong, that you are mostly trying to show “family resemblance” with something many of us are skeptical of or dislike. I am atheist myself, and from a very religious background which leaves me wary. However, I think it is worth avoiding a “clustering” way of thinking. If you don’t want to consider something because of who said it, or because it vaguely or analogously resembles something you dislike, you can miss out on some interesting stuff. I think I avoided AI, etc. too long because I thought I did not really like “computer things” which was a mistake that cost me some great time in some huge, wide open, intellectual spaces I now love to run around in.

“I would treat it as a category error: ideas are not evidence. Even if they look "evidence-like".”

I might be missing what you are saying, but I do not think I was saying that ideas were evidence. I was saying a group of people rallying around an idea could be a form of evidence. In this case, the “evidence” is that a lot of people might want something. What this is evidence of is that them wanting something makes it more likely that it will come about. I am not sure how this would fail as evidence.

“Why would future superpowerful people be interested in increasing your credence?”

Two things: 1) They are not interested in the credence of people in the simulations, they are interested in their own credence. So if I live in a world that creates simulations, it makes me think it is more likely that I am in a simulation. If I know that 99% of all simulations are good ones, it makes me think I am more likely in a world with good simulations. If I know that 90% of simulations are terrible, I am more likely to think that I am in a terrible simulation. The odd thing is that people are sort of creating their own evidence. This is why I mentioned Calvinism and “irresistible grace” as an analogy. Also Newcomb. Creating nice simulations in the hopes of being in one is like taking one box, or attending Calvinist church regularly and abiding by the doctrines. More to the point, for people who two-box and roll their eyes at Calvinists, knowing that there are Calvinists means that we know that some people might try to make simulations in order to try to be in one.

2) I am not sure where “superpowerful” comes from here. I think you might be making assumptions about my assumptions. These simulations might be left unobserved. They might be made by von Neumann probes on distant Dyson spheres. I actually think that people motivated by one-boxing/Calvinist type interpretations are more likely to try to keep simulations unmolested.

“Remember, this is ground well-trodden by theology. There the question is formulated as "Why doesn't God just reveal Himself to us instead of leaving us in doubt?".”

I don’t think the question is the same. In particular, I am not solving for “why has god not revealed himself” or even “why haven’t I been told I am in a simulation.” I am just pulling at the second disjunct and its implications. In particular, I am looking at what happens if one-boxer types decide they want a simulated afterlife.

Why would people run simulations? Maybe research or entertainment (suggested in the original article). Maybe to fulfill (potentially imaginary) acausal trade conditions (I will probably post on this later). Maybe altruism. Maybe because they want to believe they are in a simulation, and so they make the simulation look just like their world looks, but add an afterlife. They do this in the hopes that it was done “above” them the same way, and they are in such a simulation. They do it in the hopes of being self-fulfilling, or performative, or for whatever reason people one-box and believe in Calvinism.

Comment author: Lumifer 04 November 2015 01:25:20AM 1 point

“We live in a very special time - right on the cusp of AGI”

AGI has been 20 years away for the past 50 years or so. I see no reason to believe the pattern will break any time now :-/

Comment author: crmflynn 04 November 2015 02:46:12AM 0 points

I am not sure it matters when it comes. Presumably, unless we go extinct some other way first, it will come at some point. When it comes, it is likely that the technology will not be a problem for it. Once the technology exists, and probably before, we may need to figure out if and how we want to do simulations. If people have a clear, well developed, and strong preference going into it (including potentially putting it into the AI as a requirement for its modeling of humanity, or it being a big enough “movement” to show up in our CEV) that will likely have a large effect on the odds of it happening. Also, I know some people who sincerely think belief in god is based almost exclusively on fear of death. I am skeptical of this, but if it is true, or even partially true, and if even a fraction of the fervor/energy/dedication that is put into religion were put into pushing for this, I think it might be a serious force.

The point about credence is just a point about it being interesting, decision making aside, that something as fickle as collective human will, might determine if I “survive” death, and if all my dead loved ones will as well. So, for example, if this post, or someone building off of my post, but doing it better, were to explode on LW and pour out into reddit and the media, it should increase our credence in an afterlife. If its reception is lukewarm, decrease it. There is something really weird about that, and worth chewing on.

Also, I think that people’s motivation to have an afterlife seems like a more compelling reason to create simulations than experimentation/entertainment, so it helps shift credence around among the four disjuncts of the simulation argument.

Comment author: jacob_cannell 04 November 2015 12:45:40AM *  1 point

This has basically been my belief system for a while - we could call it simulism perhaps. These memes are also old. Tipler proposed the whole 'simulation implementing afterlife' idea a few decades ago, although his particular implementation ideas involved emulations at the end of time and questionable physics. Despite that, the general idea of mind uploading into virtual afterlife appears to be pretty mainstream now in transhumanist thought (ie Turing Church).

I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators, 'saving' souls, etc.).

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going.

This could be a good conversational hook, but technically I am not so certain this is true. In general the key to afterlife is more likely something like "do that which your future descendants/simulators would most reward you for", which has much in common with "do god's will". If you believe that global x-risks are large and you could have a large impact there, then sure that has very high value. But assessing global x-risks is difficult.

Also, minimizing x-risk is not the same as maximizing future utility. For example, there are many potential scenarios where very little of the potential sim capacity is used, even though they aren't x-risk style disasters. There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.

Comment author: crmflynn 04 November 2015 02:19:09AM 0 points

When you say you believe this, do you mean you believe it to be the case, or you believe it to be a realistic possibility?

I stumbled across Tipler when reading up on the simulation argument, and it inspired further “am I being a crackpot” self-doubt, but I don’t think this argument looks much like his. Also, I am not really trying to promote it so much as to feel it out. I have not yet found any reason to think I am wrong about it being a possibility, though I myself do not “feel” it to be likely. That said, with stuff like this, I have no sense that intuitions would tell me anything useful either.

“Despite that, the general idea of mind uploading into virtual afterlife appears to be pretty mainstream now in transhumanist thought (ie Turing Church).”

Yeah, it comes up in “Superintelligence” and some other things I have read too. The small difference, if there is one, is that this looks backwards, and could be a way to collect those who have already died, and also could be a way to hedge bets for those of us who may not live long enough for transhumanism. It also circumvents the teletransportation paradox and other issues in the philosophy of identity. Also, even when not being treated as a goal, it seems to have evidential value. Finally, there are some acausal trade considerations, and considerations with “watering down” simulations through AI “thought crimes,” that can be considered once this is brought in. I will probably post more of my tentative thoughts on that later.

“I think it's fun stuff to discuss, but it has a certain stigma and is politically unpopular to some extent with the x-risk folks. I suspect this may have to do with Tipler's heavily Christian religious spin on the whole thing. Many futurists were atheists first and don't much like the suspicious overlap with Christian memes (resurrection, supernatural creators. 'saving' souls, etc)”

The idea of posting about something that is unpopular on such an open-minded site is one of the things that makes me scared to post online. Transhumanism, AI risk (“like the Terminator?”), one-boxing the Newcomb Paradox, LW seems pretty good at getting past some initial discomfort to dig deeper. I had actually once heard a really short thing about “The Singularity” on the radio, which could have been a much earlier introduction to all this, but I sort of blew it off. Stuff like my past flippancy makes me inclined to try to avoid trusting my gut, and superficial reasons to ignore something, and to try to take a really careful approach to deconstructing arguments. I am also atheist, and grew up very religiously Christian, so I think I also have a strong suspicion of and aversion to its approach. But again, I try not to let superficial or familial similarity to things interrupt a systematic approach to reality. I am currently trying to transition from doing on-the-ground NGO work in developing countries in order to work on this stuff. My gut hates this, and my availability bias is doing backflips, but I think that this stuff might be too important to take the easy way out of it.

Also, your point about the hook is absolutely correct. I was sort of trying to imitate the “catchy” salon/huffpost/buzzfeed headline that would try to draw people in. “Ten Ways Atheists Go to Heaven, You Won’t Believe #6!” It was also meant a bit self-deprecatingly.

“There are also local considerations which may dominate for most people - resurrection depends on future generosity which is highly unlikely to be uniform and instead will follow complex economics. "Be a good, interesting, and future important person" may trump x-risk for many people that can't contribute to x-risk much directly.”

Yeah, there is a lot here. What is so weird about the second disjunct is that it means that we sort of do this, or fail at this, as a group. And it means that, while lying on my deathbed, my evaluation of how well we are doing as a species is going to bear directly on my credence about what, if anything, comes next. It’s strange, isn’t it? That said, it is also interesting that, even if we somehow knew that existential risk would not be a problem in our lifetime, with this, there is a purely selfish reason to donate to FHI/MIRI. In fact, with the correct sense of scale, with high enough odds and marginal benefit to donations, it could be the economically rational thing to do.

Comment author: Kyre 03 November 2015 05:12:01AM 6 points

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more similar minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have interesting, solid arguments ... please think carefully about whether such publicity helps the existential risk movement more than it harms. On the plus side, you might get people thinking about existential risk that otherwise would not have. On the minus side, most people aren't going to understand what you write, and some of the ones that half-understand it are going to loudly proclaim it as more evidence that MIRI etc. are full of insane apocalyptic cultists.

Comment author: crmflynn 04 November 2015 01:41:29AM 4 points

I would not worry about that for three reasons: 1) I am very shy online. Even posting this took several days and I did not look at the comments for almost a day after. 2) I am bringing this here first to see if it is worth considering, and also because I want input not only on the idea, but on the idea of spreading it further. 3) I would never identify myself with MIRI, etc. not because I would not want to be identified that way, but because I have absolutely not earned it. I also give everyone full permission to disavow me as a lone crackpot as needed should that somehow become a problem. That said, thank you for bringing this up as a concern. I had already thought about it, which is one of the reasons I was mentioning it as a tentative consideration for more deliberation by other people. That said, had I not, it could have been a problem. A lot of stuff in this area is really sensitive, and needs to be handled carefully. That is also why I am nervous to even post it.

All of that said, I think I might make another tentative proposal for further consideration. I think that some of these ideas ARE worth getting out there to more people. I have been involved in International NGO work for over a decade, studied it at university, and have lived and worked in half a dozen countries doing this work, and had no exposure to Effective Altruism, FHI, Existential Risk, etc. I hang out in policy/law/NGO circles, and none of my friends in these circles talk about it either. These ideas are not really getting out to those who should be exposed to them. I found EA/MIRI/Existential Risk through the simulation argument, which I read about on a blog I found off of reddit while clicking around on the internet about a year ago. That is kind of messed up. I really wish I had stumbled onto it earlier, and I tentatively think there is a lot of value in making it easier for others to stumble onto it into the future. Especially policy/law types, who are going to be needed at some point in the near future anyway.

I also feel that the costs of people thinking that people have “weird ideas” should probably be weighed against the benefits of flying the flag for other like-minded people to see. For the most part, people not liking other people is not much different than them not knowing about them, but having allies and fellow-travelers adds value. It is more minds to attack difficult problems at more angles, more policy makers listening when it is time to make some proposals, and it is more money finding its way into MIRI/FHI/etc. It might be worth trying to make existential risk a more widely known concern, a bit like climate change. It would not necessarily even have to water down LW, as it could be that those interested in the LW approach will come here, and those from other backgrounds, especially less technical backgrounds, find lateral groups. In climate change now, there are core scientists, scientists who dabble, and a huge group of activist types/policy people/regulators with little to no interest in the science who are sort of doing their own thing laterally to the main guys.

Comment author: Lumifer 03 November 2015 06:34:13PM *  1 point

I am not sure what the take-away from this idea is. If it is

“should increase credence that we exist in such a simulation and should perhaps expect a heaven-like afterlife of long, though finite, duration”

then, well, increasing credence from 0.0...001% to 0.0...01% is a jump by an order of magnitude, but it still doesn't move the needle leaving the probability in the "vanishingly small" realm.

If it is that we should strive to build such simulations, there are a few issues with this call to action, starting with the observation that at our technological level there isn't much we can do right now, and ending with the warning that if many people want to build Heavens, some people will want to build Hells as well.

Comment author: crmflynn 04 November 2015 01:16:42AM 2 points

Thank you for your comment, and for taking a skeptical approach towards this. I think that trying to punch holes in it is how we figure out if it is worth considering further. I honestly am not sure myself.

I think that my own thoughts on this are a bit like Bostrom's skepticism of the simulation hypothesis, where I do not think it is likely, but I think it is interesting, and it has some properties I like. In particular, I like the “feedback loop” aspect of it being tied into metaphysical credence. The idea that the more people buy into an idea, the more likely it seems that it “has already happened” shows some odd properties of evidence. It is a bit like if I was standing outside of the room where people go to pick up the boxes that Omega dropped off. If I see someone walk out with two unopened boxes, I expect their net wealth has increased ~$1000, if I see someone walk out with one unopened box, I expect them to have increased their wealth ~$1,000,000. That is sort of odd isn’t it? If I see a small, dedicated group of people working on how they would structure simulations, and raising money and trusts to push it a certain political way in the future (laws requiring all simulated people get a minimum duration of afterlife meeting certain specifications, no AIs simulating human civilization for information gathering purposes without “retiring” the people to a heaven afterward, etc.) I have more reason to think I might get a heaven after I die.

As far as the “call to action” I hope that my post was not really read that way. I might have been clearer, and apologize. I think that running simulations followed by afterlife might be a worthwhile thing to do in the future, but I am not even sure it should be done for many reasons. It is worth discussing. One could also imagine that it might be determined, if we overcome and survive the AI intelligence explosion with a good outcome, that it is a worthwhile goal to create more human lives, which are pleasant, throughout our cosmological endowment. Sending off von Neumann probes to build simulations like this might be a live option. Honestly, it is an important question to figure out what we might want from a superintelligent AI, and especially if we might want to not just hand it the question. Coherent extrapolated volition sounds like a best tentative idea, but one we need to be careful with. For example, AI might only be able to produce such a “model” of what we want by running a large number of simulated worlds (to determine what we are all about). If we want simulated worlds to end with a “retirement” for the simulated people in a pleasant afterlife, we might want to specify it in advance, otherwise we are inadvertently reducing the credence we have of our own afterlife as well. Also, if there is an existent acausal trade regime on heaven simulations (this will be another post later) we might get in trouble for not conforming in advance.

As far as simulated hell, I think that fear of this as a possibility keeps the simulated heaven issue even more alive. Someone who would like a pleasant afterlife… which is probably almost all of us, might want to take efforts early to secure that such an afterlife is the norm in cases of simulation, and “hell” absolutely not permitted. Also, the idea that some people might run bad afterlives should probably further motivate people to try to also create as many good simulations as possible, to increase credence that “we” are in one of the good ones. This is like pouring white marbles into the urn to reduce the odds of drawing the black one. You see why the “loop” aspect of this can be kind of interesting. Especially for one-boxer-types, who try to “act out” the correct outcome after-the-fact. For one-boxers, this could be, from a purely and exclusively selfish perspective, the best thing they could possibly do with their life. Increasing the odds of a trillion-life-duration afterlife of extreme utility from 0.001 to 0.01 might be very selfishly rational.
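The marble arithmetic here is plain dilution. A toy illustration -- the counts are invented:

```python
# Hostile simulations are black marbles, pleasant ones white; your world is one draw.
def p_bad(n_bad, n_good):
    return n_bad / (n_bad + n_good)

print(p_bad(10, 90))     # 0.1 -- 10 hells among 100 simulations
print(p_bad(10, 9_990))  # 0.001 -- same hells, drowned in good simulations
```

Each good simulation added leaves the number of hells fixed while shrinking their share of the urn, which is the whole selfish case for building them.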

I am not trying to "sell" this, as I have not even bought it myself, I am just sort of playing with it as a live idea. If nothing else, this seems like it might have some importance on considerations going forward. I think that people’s attitudes and approaches to religion suggest that this might be a powerful force for human motivation, and the second disjunct of the simulation argument shows that human motivation might have significant bearing both on our current reality, and on our anticipated future.

Comment author: crmflynn 02 November 2015 02:30:20AM 4 points

I have been lurking around LW for a little over a year. I found it indirectly through the Simulation Argument > Bostrom > AI > MIRI > LW. I am a graduate of Yale Law School, and have an undergraduate degree in Economics and International Studies focusing on NGO work. I also read a lot, but in something of a wandering path that I realize can and should be improved upon with the help, resources, and advice of LW.

I have spent the last few years living and working in developing countries around the world in various public interest roles, trying to find opportunities to do high-impact work. This was based around a vague and undertheorized consequentialism that has been pretty substantially rethought after finding FHI/MIRI/EA/LW etc. Without knowing about the larger effective altruism movement (aside from vague familiarity with Singer, QALY cost effectiveness comparisons between NGOs, etc.) I had been trying to do something like effective altruism on my own. I had some success with this, but a lot of it was just the luck of being in the right place at the right time. I think that this stuff is important enough that I should be approaching it more systematically and strategically than I had been. In particular, I am spending a lot of time moving my altruism away from just the concrete present and into thinking about “astronomical waste” and the potential importance of securing the future for humanity. This is sort of difficult, as I have a lot of experiential “availability” from working on the ground in poor countries which pulls on my biases, especially when faced with a lot of abstraction as the only counterweight. However, as stated, I feel this is too important to do incorrectly, even if it means taming intuitions and the easily available answer.

I have also been spending a lot of time recently thinking about the second disjunct of the simulation argument. Unless I am making a fundamental mistake, it seems as though the second disjunct, by bringing in human decision making (or our coherent extrapolated volition, etc.) into the process, sort of indirectly entangles the probable metaphysical reality of our world with our own decision making. This is true as a sort of unfolding of evidence if you are a two-boxer, but it is potentially sort-of-causally true if you are a one-boxer. Meaning if we clear the existential hurdle, this is seemingly the next thing between us and the likely truth of being in a simulation. I actually have a very short write-up on this which I will post in the discussion area when I have sufficient karma (2 points, so probably soon…) I also have much longer notes on a lot of related stuff which I might turn into posts in the future if, after my first short post, this is interesting to anyone.

I am a bit shy online, so I might not post much, but I am trying to get bolder as part of a self-improvement scheme, so we will see how it goes. Either way, I will be reading.

Thank you LW for existing, and providing such rigorous and engaging content, for free, as a community.
