A big question that determines a lot about what risks from AGI/ASI may look like is what kinds of things our universe's laws allow to exist. There is an intuitive sense in which these laws, through their symmetries and through the smoothing-out imposed by statistics over large ensembles (and hence thermodynamics), allow only certain kinds of things to exist and work reliably. For example, we know a "rocket that travels to the Moon" is definitely possible. "Gene therapy that lets a human live and stay youthful to the age of 300" or "superintelligent AGI" are probably possible, though we don't know how hard they are. An "odourless, ambient-temperature-and-pressure gas that kills everyone who breathes it if and only if their name is Mark, with 100% accuracy" probably is not. Are there known attempts at systematising this question, using algorithmic complexity, theoretical and computational bounds, and so on?
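(To gesture at what I mean by "using algorithmic complexity": one very crude way to make "how much lawful structure can a spec lean on" quantitative is the standard compression-based upper bound on Kolmogorov complexity. The snippet below is a minimal illustrative sketch of that idea only; the function name and example strings are mine, zlib is an arbitrary choice of compressor, and I'm not claiming any established research programme works exactly this way.)

```python
import os
import zlib

def k_upper_bound(x: bytes) -> int:
    """Compressed length as a loose upper bound on Kolmogorov
    complexity: K(x) <= len(zlib.compress(x)) + O(1)."""
    return len(zlib.compress(x, 9))

structured = b"rocket that travels to the Moon. " * 50  # highly regular spec
arbitrary = os.urandom(len(structured))                  # incompressible spec

print(k_upper_bound(structured))  # small: the spec leans on repeated structure
print(k_upper_bound(arbitrary))   # ~input length: every detail is arbitrary
```

The intuition would be that "kills people named Mark" sits at the incompressible end: the condition doesn't correspond to any regularity physics or chemistry can exploit, so the full arbitrary detail has to be carried by the artifact itself.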
Well, as I said, there might be some general insight to extract. For example, biological cells are effectively nanomachines far beyond our ability to build, yet they are not all-powerful: no individual bacterial lineage has single-handedly grey-gooed its way through the entire Earth, despite there being no obvious reason in the raw replication arithmetic why it couldn't (see the sketch below). The limits likely come from a mixture of the specific substrate (carbon chemistry, DNA for information storage), competition between multiple species (which can be seen as an inevitable result of imprecise copying and the divergence that follows, even though cells mostly have mechanisms to try to prevent such mistakes), and perhaps intrinsic thermodynamic limits on von Neumann replicators as a whole. Disentangling which of these does the real limiting work would be interesting and useful.
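For concreteness, here is a hypothetical back-of-envelope calculation of how permissive the unconstrained replication math is. All the numbers are rough order-of-magnitude assumptions on my part (~1 pg per cell, 20-minute doublings as in fast lab growth of E. coli, ~550 Gt of carbon in Earth's biomass), not measurements:

```python
import math

# Rough assumed inputs (order of magnitude only)
cell_mass_g = 1e-12         # ~1 pg, roughly one E. coli cell
doubling_time_min = 20      # fast laboratory growth rate
biomass_g = 5.5e17          # ~550 Gt of carbon in Earth's biomass

# Doublings needed for one cell's lineage to match the whole biosphere
doublings = math.log2(biomass_g / cell_mass_g)
hours = doublings * doubling_time_min / 60
print(f"{doublings:.0f} doublings, about {hours:.0f} hours")
# ~99 doublings, about 33 hours
```

A day and a half of unconstrained doubling would do it, so the fact that this never happens is strong evidence that the substrate, competition, and thermodynamic constraints above are doing essentially all of the limiting work.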