Greetings, LessWrong community. I have written this prepping guide to explain the threats of AGI in simple terms and to propose the best course of action on an individual level, based on rational and logical thinking.
I am very concerned with how the topic is handled in the public sphere and on a personal level. I have identified and tried to explain many logical fallacies that people commit, including many if not most experts who make statements on this topic.
One of those fallacies is remaining inactive, or relying on herd mentality, in a situation where the individual feels helpless. Another concerns how to handle threats of different magnitudes in the absence of proof and certain knowledge. A third is basing your actions purely on faith in your personal beliefs about the outcome, when you cannot rule out other outcomes to any meaningful degree.
I have seen many people on this site argue very illogical things, such as that it makes no sense to concern yourself with societal collapse brought on by AGI because it would end the world as we know it anyway, or that we are incapable of knowing what to do because we have not yet established what will truly happen.
While it might be true that some people would be content to do nothing and invest nothing to address any risks, gambling as if playing Russian roulette, I don't think most people would recognize this suicidal mentality as a reasonable strategy for themselves and their families, if they only gave the topic sufficient thought and consideration.
I would love to hear your feedback on my guide and have it analyzed through a rigorous logical lens, in keeping with LessWrong's principles of logic and reason. I have been reading LessWrong from time to time, but I am not actually part of this community.
Unfortunately, I have not been able to further support most of the logic proposed in this guide with a more academic framework for assessing and managing (existential) risk, because such a framework just doesn't seem to exist.
Here is the direct onion link for faster access:
http://prepiitrg6np4tggcag4dk4juqvppsqitsvnwuobouwkwl2drlsex5qd.onion/
Edit: If you downvote, can you please explain your rationale?
Well, sure, I see the logic in that. Unlike you, however, my probability that (even if I started preparing in earnest now) I would survive an AGI that has taken control of most human infrastructure and has the capacity to invent new technologies is so low that my best course of action (best out of a bad lot) is to put all my efforts into preventing the creation of the AGI.
I arrive at that low probability by asking myself what I would do (what subgoals I would pursue) if I assigned zero intrinsic value to human life and human preferences, could invent powerful new technologies (and discover new scientific laws that no human knows about), and had truly superhuman planning ability. I.e., I put myself in the "shoes" of the AGI (and I reflected on that vantage point over many days).
In other words, by the time AGI research has progressed to the stage where things like my supply of food, water, and electricity get disrupted by an AGI, it is almost certainly already too late for me, on my models.