This is satire.
It is intended to draw attention to the absurd situation the world is in. By the estimates of the most skilled forecasters on Earth, there is a 10% chance of Superintelligence within the next 900 days. There is no plan that strictly dominates Plan 'Straya. Most plans do not address the situation with the honesty that Plan 'Straya does.
I advise that the world update on this.
Is humanity expanding beyond Earth a requirement or a goal in your world view?
A novel theory of victory is human extinction.
I do not personally agree with it, but it is supported by people like Hans Moravec and Richard Sutton, who regard AI as our "mind children" and believe that humans should "bow out when we can no longer contribute".
I recommend that the follow-up work happen.
This would depend on whether algorithmic progress can continue indefinitely. If it can, then yes, the full Butlerian Jihad would be required. If it can't, whether due to physical limitations or to enforcement, then only computers over a certain scale would need to be controlled or destroyed.
There is an AI x-risk documentary currently being filmed: An Inconvenient Doom. https://www.documentary-campus.com/training/masterschool/2024/inconvenient-doom It covers some aspects of AI safety, but doesn't focus on it exclusively.
I also agree that 5 is the main crux.
In the description of point 5, the OP says "Proving this assertion is beyond the scope of this post," so I presume the proof of the assertion is made elsewhere. Can someone post a link to it?
I'm thirty-something. This was about 7 years ago. From the inhibitors? Nah. From the lab: probably.
We still smell plenty of things in a university chemistry lab, but I wouldn't bother with that kind of test for an unknown compound. Just go straight to NMR and mass spec, and maybe IR, depending on what you think you're looking for.
As a general rule, don't go sniffing strongly; start with careful wafting. Or maybe don't at all, if you truly have no idea what it is.
Link to the 900 days claim: AI Futures Model