The audience is the general public. That is, anyone with the attention span and intelligence to read what I wrote without feeling the need to disregard the idea out of comfort, personal conviction, laziness, and so on. I toyed with the idea of writing a much shorter version for stupid people, but I think that would just be an exercise in futility. I also don't really like the idea of stupid people gaining such an existential advantage.
Unfortunately I am not allowed to create another post for the next 7 days, due to low karma. I have, however, written a new post that I will publish soon. It is quite long. If you are interested, you can read it here:
Maybe you could tell me how much it is on track, from your experience with this community. I would also like to understand any reasons my perspective could be flawed. I have been reading quite a bit on here and watching some videos. Although the post is influenced a lot by your replies, it is not meant to attack you, but rather those who follow Eliezer's authority and influence without thinking for themselves.
I don't write, and will not write, these things to please people though. If people cannot jump over their own shadow and process dissonance in a healthy, constructive way, then so be it.
Thanks for your reply.
To the point of contention: I believe the website actually illustrates fairly well that it is itself a fallacy to get caught up in such nuances and to demand proof of any particular possible future outcome before shaping one's actions, when the question is whether you should do your best to mitigate the risk and ensure your basic survival.
An unsurvivable AGI outcome is just one of many possible scenarios. Although you can speculate about the details of how it could play out (partial extermination, full extermination, no extermination) and what means an AGI might use, the whole point of thinking logically about the issue in terms of ensuring your survival is to recognize that those details are ultimately unknowable. What you have to do is simply develop a strategy that deals with all eventualities as well as possible.
I don't know if it is clear to you, but those are the basic scenarios of AGI development considered on the website, which cover essentially everything worth considering:
Obviously you can also mix these scenarios, as they all derive from each other, and each can be permanent or temporary, mild or severe. For example, you could have a collapsing stock market due to the system shock of hyper-intelligent but fully human-controlled AI systems, while independent AGI systems rise two years later and wage digital wars against each other, some of them trying to exterminate humans while others try to protect or enslave them. While this may seem somewhat ridiculous to consider, it illustrates that there is a wide range of possible outcomes with a wide range of details, and no single definitive outcome (e.g. the full extinction of every last human on earth by AGI) can be predicted with any degree of certainty.
In the end, in most scenarios society will recover in time, and some amount of human life will remain afterwards. But even if there were just a single scenario with an off chance of human survival, that would be enough to work towards, even by literally spending all your time and resources on it. Anything else can only be described as suicidal.
Personally though, I believe the chances of overall human survival are very high, and the chances of hostile AGI are rather low. This is exactly how expert opinion reflects it as well. But there is a lot of in-between risk that you need to consider, most of which concerns intermediary stages of AGI, and this is where spending reasonable amounts of money (e.g. 2000 Euros) comes into play.
So I was hoping you could help me improve the page: tell me why that was not clear from reading it through, and perhaps how I could write it differently without adding thousands of words to the page.
This guide was not written for LessWrong, by the way, but for ordinary people who are smart enough to follow the arguments and willing to protect themselves if necessary, in light of this new information and with proper consideration and risk management.
Only live-attenuated vaccines may (sometimes) not need adjuvants. In addition, other ingredients sometimes act as adjuvants without being declared as such. For example, mercury is declared as a preservative, not an adjuvant, but it performs the same function. More recently, manufacturers have also begun removing constituents from the ingredient list that were part of the manufacturing process (e.g. culture media) but are not an "intended" part of the final product. If a food manufacturer washes potatoes with iodine to clean them, for example, he is not required to list it as an ingredient, regardless of whether the quantities in the final product are relevant.
To put it simply: without a live virus, the immune system recognizes the would-be antigens as mere garbage molecules, not as a threat. For immunization to work, you need to inject something dangerous alongside the antigen, such as a live virus, aluminium, or some kind of toxic protein or cytokine.
It is a prepping guide, as it says in the title and on the introduction page. Prepping is the practice of preparing for disasters. Are you sure you actually opened the link I posted? Here is a PDF printout of the site: https://docdro.id/nnIJ16G
Or are you literally just downvoting because you got tired after one click?