To respond to the edit, I simply don't see the analogy.
Your wording makes the two sound analogous because both can be glossed as "don't worry about unknowns": in my case, you have no evidence either way about whether God exists, so don't worry about it; in your reductio, you have no evidence about whether some random chemical is safe, so don't worry about it. But when I try to actually visualize the two situations, I don't see the connection.
A better analogy would be being forced to take one of five different medications while having no evidence at all about their safety, no hope of ever getting such evidence, and the knowledge that any unsafe side effects would show up only far in the future (if at all). In such a situation you would of course give up on choosing based on safety and simply choose based on other practical considerations, such as price, how easy they are to get down, etc.
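To make the structure of that argument explicit, here is a minimal sketch (Python, with entirely made-up medications, prices, and weights; none of these specifics come from the original discussion) of why an unknown-but-identical safety term drops out of the choice:

```python
# Hypothetical options scored only on practical considerations.
medications = {
    "A": {"price": 10, "ease_of_taking": 0.9},
    "B": {"price": 4,  "ease_of_taking": 0.5},
    "C": {"price": 7,  "ease_of_taking": 0.9},
}

def score(med, unknown_safety_term=0.0):
    # unknown_safety_term is identical for every option, so whatever
    # its true value is (0, -5, 1e6, ...), it shifts all scores equally
    # and leaves the argmax unchanged.
    practical = -med["price"] + 10 * med["ease_of_taking"]
    return practical + unknown_safety_term

best = max(medications, key=lambda name: score(medications[name]))
print(best)  # "C", no matter what unknown_safety_term turns out to be
```

The point of the sketch is just that a term you cannot estimate, and which applies symmetrically to every option, cannot do any work in ranking the options.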
One should worry about changing the status quo only if some useful, reliable market test was in place beforehand and actually had something to do with why the status quo turned out the way it did, and especially only if you lack overwhelming evidence that (1) a known hardware or software vulnerability is what produced the status quo in the first place, and (2) remaining part of that status quo is extremely epistemically hazardous (being religious is certainly an epistemic hazard; feel free to ask for elaboration).
Are there any essays anywhere that go into depth on scenarios where AIs become somewhat recursive/general, in the sense that they can write functioning code to solve diverse problems, while the AI reflection problem remains unsolved and thus limits the depth of recursion they can attain? Let's provisionally call such general-but-reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g., having rudimentary communication/negotiation abilities and some ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction, wireheading, or the like.
At first blush, this scenario strikes me as Bad: AIs could take over every computer connected to the internet, thoroughly messing things up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, all without ever reaching general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on the subject would focus on computer security and intelligent computer "viruses"; does such literature exist anywhere?
I have various questions about this scenario, including: