A big question that determines a lot about what risks from AGI/ASI may look like is what kinds of things our universe's laws allow to exist. There is an intuitive sense in which these laws (certain symmetries, plus the smoothing out caused by statistics over large ensembles and hence thermodynamics, and so on) allow only certain kinds of things to exist and work reliably. For example, we know "rocket that travels to the Moon" is definitely possible. "Gene therapy that allows a human to live and stay youthful until the age of 300" or "superintelligent AGI" are probably possible, though we don't know how hard they are. "Odourless ambient-temperature-and-pressure gas that kills everyone who breathes it if and only if their name is Mark, with 100% accuracy" probably is not. Are there known attempts to systematise this question using algorithmic complexity, theoretical and computational bounds, and so forth?
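(To make the framing concrete: the sort of formal objects I have in mind, purely as an illustration and not a claim that anyone has assembled them into the theory I'm asking about, are things like the algorithmic complexity of a specification and the known physical limits on information and computation, e.g.

$$K_U(x) = \min\{\, |p| : U(p) = x \,\}$$

the Kolmogorov complexity of a specification $x$, i.e. the length of the shortest program realizing it on a universal machine $U$;

$$S \le \frac{2\pi k_B R E}{\hbar c}$$

the Bekenstein bound on the entropy of a region of radius $R$ containing energy $E$; and

$$E_{\text{per erased bit}} \ge k_B T \ln 2$$

the Landauer limit on the energy cost of computation. I'm asking whether anyone has tried to build from such pieces a systematic account of which specified artifacts can and cannot exist.)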
As previously mentioned, this kind of understanding is already available in higher-level textbooks, at least within known energy and space-time scales.
If you're asking, for example, whether with infinite time and energy some sort of grey goo 'superorganism' is possible, assuming some far-future technology beyond our current comprehension, then that is obviously not going to have an answer, for the aforementioned reasons...
Assuming you already have graduate-level knowledge of the fundamental sciences, engineering, and mathematics, then finding the textbooks, reading them, comparatively analyzing them, and drawing your own conclusions wouldn't take more than a few weeks. This sort of exhaustive analysis would presumably satisfy even a very demanding level of certainty (perhaps 99.9% confidence?).
If you're asking for literally 100% certainty, then that's impossible. In fact, nothing ever written on LW, nor anything that ever can be written, will meet that bar, especially since the Standard Model is known to be incomplete.
If you're asking whether someone has already done this and will offer it in easily digestible chunks in the form of LW comments, then that seems exceedingly unlikely.