A big question that shapes what risks from AGI/ASI may look like is what kinds of things our universe's laws allow to exist. There is an intuitive sense in which these laws, with their symmetries and the inherent smoothing-out produced by statistics over large ensembles (and hence thermodynamics), permit only certain kinds of things to exist and work reliably. For example, we know "rocket that travels to the Moon" is definitely possible. "Gene therapy that allows a human to live and stay youthful until the age of 300" or "superintelligent AGI" are probably possible, though we don't know how hard they are. "Odourless gas at ambient temperature and pressure that kills everyone who breathes it if and only if their name is Mark, with 100% accuracy" is probably not. Are there known attempts at systematising this question using algorithmic complexity, theoretical and computational bounds, and so on?
Thanks for the essay! As you say, not quite what I was looking for, but still interesting (though mostly saying things I already know/agree with).
My question is more in line with the recent post about the smallest possible button, and my own about the cost of unlocking optimization from observation. The idea is not what problems computation can solve, but how far problem-solving can carry you in actually affecting the world. The limit, I guess, would be: "suppose you have an oracle that, given the relevant information, can instantly return the optimal strategy to achieve your goal; how well does that oracle perform?" So, is tech so advanced that it truly looks like magic, even to us, possible at all? I assume some things (deadly enough in their own right), like nanotech and artificial life, are; but I wonder about even more exotic stuff.
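To make that slightly more concrete (this is my own rough formalisation, not something taken from the posts above, and all the notation is mine): write the oracle as a map from a starting situation and a goal to the best physically realisable policy, and then ask what value that best policy can actually attain:

$$\mathrm{Oracle}(s, G) = \arg\max_{\pi \in \Pi} \Pr[G \mid s, \pi], \qquad v^*(s, G) = \sup_{\pi \in \Pi} \Pr[G \mid s, \pi]$$

where $s$ is the agent's starting state (knowledge, actuators, resources), $\Pi$ is the set of policies physics allows to be implemented from $s$, and $G$ is the goal. My question is then about $v^*$ itself: for "magic-looking" goals, does physics let $v^*$ get close to 1, or does it pin $v^*$ near 0 no matter how good the optimiser is?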