This is a linkpost for http://prepper.i2phides.me/

Greetings, LessWrong community. I have written this prepping guide to explain the threats of AGI in simple terms and to propose the best course of action on an individual level, based on rational and logical thinking and strategies.

I am very concerned with how the topic is handled in the public sphere and on a personal level. I have identified and tried to explain many logical fallacies that people commit, including many if not most experts who make statements on this topic.

One of those fallacies is remaining inactive, or relying on herd mentality, in a situation where the individual feels helpless. Another concerns how to handle threats of different magnitudes in the absence of proof and certainty of knowledge. A third is basing your actions purely on faith in your personal beliefs about the outcome, when you cannot rule out other outcomes to any meaningful degree.

I have seen many people on this site argue very illogical things, such as that it makes no sense to concern yourself with societal collapse brought on by AGI because it would end the world as we know it anyway, or that we are incapable of knowing what to do because we have not yet established what will truly happen.

While it might be true that some people would be content to do nothing and invest nothing to address any risks, gambling as if playing Russian roulette, I don't think most people would recognize this suicidal mentality as an actually reasonable strategy for themselves and their families, if they only gave the topic sufficient thought and consideration.

I would love to hear your feedback on my guide and have it analyzed through a rigorous logical lens that fits LessWrong's principles of logic and reason. I have been reading LessWrong from time to time, but I am not actually part of this community.

Unfortunately, I have not been able to support most of the logic proposed in this guide with a more academic framework for assessing and managing (existential) risk, because such a framework just doesn't seem to exist.

Here is the direct onion link for faster access:

http://prepiitrg6np4tggcag4dk4juqvppsqitsvnwuobouwkwl2drlsex5qd.onion/

Edit: If you downvote, could you please explain your rationale?


I am downvoting this. The author and I both believe AGI to be a large risk. Also, I am not against prepping in general: e.g., I wish my country (the US) would spend 1000 times what it is currently spending (in money and in people's time and attention) on preparing for nuclear war (and a lot of the advice on the linked site, i2phides.me, is good advice for preparing for a nuclear war). But the author has not supported his assertion that misaligned super-intelligent AGI can be prepared for. Nowhere in the 5 pages I read on the linked site (introduction, the threat, what to expect, the plan and critique) does the author address the fairly straightforward point that an AGI will tend to be quite superhuman in its ability to make plans and keep plans a secret, and consequently by the time any human notices there is a problem (more so than can be noticed already now) the AGI will have such an overwhelming advantage that responses like relocating to a very rural area with plenty of water, food, weapons, etc, will be quite futile. (In fact, getting into a spaceship and fleeing earth at 0.1 times the speed of light would likewise be futile, though that might buy you a few years of extra survival if it were practical.)

I can sort of imagine an AGI's choosing to take control over most of humanity's electricity-generating capacity, then successfully repelling all attempts by human law enforcement agencies and militaries to retake control. But if it can do that, why wouldn't it also try to kill as many people as possible? The main benefit (from the AGI's point of view) to that would be to reduce the likelihood that people would create another AGI (which of course would require electricity, but resourceful humans might figure out how to create a hidden capacity to generate the electricity necessary to run computers). And if it tries to do that, it would probably succeed in killing every single one of us. Perhaps that would take decades. Probably not. But even if it does, what hope would the humans in their rural hideouts stocked with supplies have when the vast majority of the capacity to wield technology (including technologies invented by the AGI) is in the hands of the AGI? There is a huge gaping difference between an AGI's seizing control over most of the electricity-generating capacity for its own selfish ends with callous disregard for what that does to the humans and an AGI's setting out to kill all the humans, or more precisely all the humans who might be able to contribute to the creation of a second AGI (and note that it is much easier simply to kill a particular person than to predict whether he or she would be able to contribute). The former seems survivable by sufficiently prepared people; the latter does not (although we can have a conversation about it).

The point is that the likelihood of an AGI's doing the former and not doing the latter is so low that preparing for the possibility strikes me as unworthwhile (since we could use our time and resources instead on trying to prevent the creation of the AGI).

Also, a minor point about presentation. Most of us on LW are already familiar with a few dozen arguments for the view that AGI research is very dangerous. In contrast, the notion that AGI is a risk that can be prepared for by, e.g., stockpiling supplies according to a plan, is new to most of us. When you interleave your argumentation for the latter with copious arguments for the former like you have done here, you make it tedious for the reader, who must wade through a lot of arguments he or she has seen before in looking for your argument for why preparing is worthwhile (which frankly I never found and tentatively concluded has not been written down by you).

If there is more to your argument than, "the more dangerous the risk, the more food, water and weapons we need to have stockpiled", please write it down and don't mix it in with argumentation that AGI is very dangerous, which I and many others here already believe to be the case.

Thanks for your reply.

On the point of contention: I believe the website actually illustrates fairly well that it is a fallacy in itself to get hung up on such nuances and to demand proof of any particular possible future outcome before shaping one's actions, when the question is whether or not you should do your best to mitigate the risk and ensure your basic survival.

An unsurvivable AGI outcome is just one of many possible scenarios. Although you can speculate about the details of how it could play out (partial extermination, full extermination, no extermination) and what means AGI might use, the whole point of thinking logically about the issue in terms of ensuring your survival is to recognize that those things are ultimately unknowable. What you have to do is simply develop a strategy that deals with all eventualities as well as possible.

I don't know if it is clear to you, but these are the basic scenarios of AGI development taken into consideration on the website, which cover basically everything you can consider:

  1. AGI will be docile, that is super-intelligent but fully obedient to humans
    -> society can collapse due to abuse or misuse (ranging from mild to severe)
    -> society can collapse due to system shock (ranging from mild to severe)
    -> society can flourish due to benefits
  2. AGI will become fully independent
    -> accidental extinction due to insanity (ranging from mild to severe)
    -> deliberate extinction to eliminate humans as a threat (ranging from mild to severe)
    -> coincidental extinction due to disregard for human life (ranging from mild to severe)
    -> AGI will become a god or a guardian angel and help humans or digitalize human existence
    -> AGI will board a spaceship and simply leave us alone, because we are insignificant and there is a practically infinite number of planets to exploit resources from
  3. Experimental stages of AGI lead to some kind of destruction
  4. AGI will not develop in the near future, because it is too complicated or outlawed
  5. AGIs will fight against each other and may cause some kind of detrimental chaos

Obviously, you can also mix these scenarios, as they all derive from one another, and each can be permanent or temporary, mild or severe. For example, you could have a collapsing stock market due to the system shock of hyper-intelligent but fully human-controlled AI systems, while two years later independent AGI systems are on the rise, waging digital wars against each other, some trying to exterminate humans while others try to protect or enslave them. While this seems somewhat ridiculous to consider, it is just to illustrate that there is a wide range of possible outcomes with a wide range of details, and no single definitive outcome (e.g. full extinction of every last human on earth from AGI) can be determined with any degree of certainty.

In the end, in most scenarios, society will recover in time and some amount of human life will still be present afterwards. But even if there were just a single scenario with an off chance of human survival, that would be enough to work towards by literally spending all your time and resources on it. Anything else can only be described as suicidal.

Personally, though, I believe the chances of overall human survival are very high, and the chances of hostile AGI are rather low. This is exactly how expert opinion reflects it as well. But there is a lot of in-between risk that you need to consider, most of which concerns intermediary stages of AGI, and this is where spending reasonable amounts of money (e.g. 2000 Euros) comes into play.
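The reasoning above — modest spending can be justified even when the probability of needing it is low — can be made concrete with a simple expected-value sketch. This is only an illustration: all the numbers below (the probabilities and the valuation placed on survival) are hypothetical assumptions of mine, not figures from the guide.

```python
# A minimal expected-value sketch of the prepping argument.
# All inputs are hypothetical assumptions, not estimates from the guide.

def expected_value(p_disruption, p_prep_helps, value_of_survival, cost):
    """Expected net value of prepping: the chance a disruption occurs,
    times the chance that preparation is what makes the difference,
    times the value placed on surviving, minus the upfront cost."""
    return p_disruption * p_prep_helps * value_of_survival - cost

# Hypothetical inputs: a 5% chance of serious societal disruption,
# a 20% chance that preparation makes the difference in that case,
# survival valued at 1,000,000 (same units as the cost),
# and a 2000 Euro outlay as mentioned above.
ev = expected_value(0.05, 0.20, 1_000_000, 2000)
print(ev)  # 8000.0: positive, so prepping pays off under these assumptions
```

Under these assumed numbers the expected value is positive, which is the shape of the argument: even a small probability of an in-between scenario can justify a bounded expense, though the conclusion is only as good as the probabilities you plug in.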

So I was hoping you could help me improve the page: tell me why that was not clear from reading it, and maybe how I can write it differently without adding thousands of words to the page.

This guide was not written for LessWrong, by the way, but for common people who are smart enough to follow the arguments and are willing to protect themselves if necessary, in light of this new information and with proper consideration and risk management.

But even if there were just a single scenario with an off chance of human survival, that would be enough to work towards by literally spending all your time and resources on it. Anything else can only be described as suicidal.

Well, sure, I see the logic in that. Unlike you, however, my probability that (even if I started preparing in earnest now) I would survive an AGI that has taken control of most human infrastructure and has the capacity to invent new technologies is so low that my best course of action (best out of a bad lot) is to put all my efforts into preventing the creation of the AGI.

I arrive at that low probability by asking myself what I would do (what subgoals I would pursue) if I assigned zero intrinsic value to human life and human preferences, and I could invent powerful new technologies (and discover new scientific laws that no human knows about), and my ability to plan were truly superhuman. I.e., I put myself in the "shoes" of the AGI (and I reflected on that vantage point over many days).

In other words, by the time AGI research has progressed to the stage that things like my supply of food, water and electricity get disrupted by an AGI, it is almost certainly already too late for me on my models.

I was hoping you can help me to improve the page

OK. Which is the audience you are hoping to influence? Are you hoping to alert preppers to the dangers of AI or are you hoping to get people who already know AI is dangerous (like many of us here) to prepare themselves for shortages and mobs of desperate unprepared people? (It would be a bad idea IMO to try to engage both audiences with a single set of web pages.)

The audience is the general public: anyone who has the attention span and is smart enough to read what I wrote, without feeling the need to disregard the idea out of comfort, personal conviction, laziness, etc. I was toying with the idea of writing a much shorter version for stupid people, but I think that is just an exercise in futility. And I also don't really like the idea of stupid people gaining such an existential advantage.

Unfortunately, I am not allowed to create another post for the next 7 days, due to low karma. I have written a new post, however, which I will publish soon. It is quite long. If you are interested, you can read it here:

https://pastebin.com/7WR0P8ZM 

Maybe you could tell me how much of it is on track, from your experience with this community. And I would also like to understand how my perspective could be flawed. I have been reading quite a bit on here and watching some videos. Although it was influenced a lot by your replies, it is not meant to attack you, but rather those who follow Eliezer's authority and influence instead of thinking for themselves.

I don't write, and will not write, these things to please people, though. If people cannot jump over their own shadow and process dissonance in a healthy, constructive way, then so be it.

I didn't follow the link, but in general I think there is some argument for minimal prepping around AGI, where the problems are caused by societal disruption during the early post-AGI days. The problems are probably not even enacted by AGIs, just human institutions going loopy for a time.

My model (of the exploratory-engineering kind) says that there is at most a 1-3 year period between the first AGI and the capability to quickly convert everything on Earth to compute, if AGI-generated technological progress is not held back, or if the AGIs immediately escape with enough resources to continue on track. In the longer-than-a-few-months timelines, available compute doesn't suffice for (strong) superintelligence through the algorithmic progress that's quickly reachable with that compute. So the delay lies in moving towards a compute-manufacturing megaproject without already having access to superintelligence, relying only on much faster human-level research; and also in there being no shortcut to figuring out scalable diamondoid nanotech in that timeframe without superintelligence. (But macroscopic biotech needs to be tractable, or else the timelines can stretch even further, using human labor to build factories that build robot hands.)

After this period, humanity gets whatever superintelligence decides. During this period there isn't yet a superintelligence, and humanity isn't necessarily a top priority, so there is no guarantee that enough effective compute is spent on solving its problems. The possibility of a later positive decision on humanity's fate only motivates not wiping everyone out (if even that, since a backup might suffice). Thus keeping mall shelves full is not a given.

Downvoting because there's a fair bit of text promising advice, but no actual advice, or even an indication of what dimensions the advice might take. I was intrigued enough (but annoyed that it was required) to follow the link, which ALSO didn't have any advice, just further links.

My estimate that there's anything actually useful in there is pretty low.

It is a prepping guide, like it says in the title and on the introduction page. Prepping is the practice of preparing for disasters. Are you sure you actually opened the link I posted? Here is a PDF printout of the site: https://docdro.id/nnIJ16G

Or are you literally just downvoting because you got tired after one click?

I got tired after a few minutes of looking for actual content. Your PDF contains some advice, but not much that seems AGI-threat-specific. It still doesn't seem particularly well-suited to this site.