This depends heavily on what assumptions you make about the fundamental nature of physical reality, and about how much future civilizations can alter physics.
I genuinely think that if you want to focus on the long term, we would unfortunately need to solve very difficult problems in physics to give reliable answers.
For the short-term limitations relevant to AI progress, I'd argue the biggest one is probably thermodynamics, and in particular the Landauer limit, which is a good approximation of why you can't build radically better nanotechnology than life already does without getting into extremely exotic regimes, like reversible computation.
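To make the Landauer point concrete: erasing one bit of information at temperature T costs at least k_B·T·ln 2 of energy. A minimal sketch (the temperature and the bit-erasure rate below are illustrative numbers I've chosen, not from the comment):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs roughly 2.87e-21 J,
# so even a thermodynamically perfect irreversible computer erasing
# 1e21 bits per second would dissipate watts of heat.
e_bit = landauer_limit_joules(300.0)
print(f"{e_bit:.3e} J per bit")                        # ~2.871e-21 J
print(f"{e_bit * 1e21:.2f} W at 1e21 bit-erasures/s")  # ~2.87 W
```

Reversible computation is the "extremely weird circumstance" that dodges this bound, because it avoids erasing bits in the first place.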
what assumptions you make about how much future civilizations can alter physics
I don't think the concept of "altering physics" makes sense. Physics is the set of rules that determines reality. By definition, everyone living in this universe is subject to the laws of physics. If someone were to find a way to, say, locally alter what we call Planck's constant, that would just mean it isn't actually a constant, but the emergent product of a deeper system that can be tinkered with. That doesn't mean you're altering the laws of physics - it merely peels back another layer, revealing the deeper laws that were in force all along.
I was just about to say "wait that's just Dust Theory" and then you mentioned Permutation City yourself. But also, in that scenario, the guy moving the stones certainly has the power to make anything happen - but the entities inside the universe don't, as they are bound by the rules of the simulation. Which is as good as saying that if you want to make anything happen, you should pray to God.
I think maybe this gwern essay is the one I was thinking of, but I'm not sure. It doesn't quite answer your question.
But there isn't a complexity-theoretic argument that's more informative than general arguments about humans not being the special beings of maximal possible intelligence. We don't know precisely what problems a future AI will have to solve, or what approximations it will find appropriate to make.
Thanks for the essay! As you say, not quite what I was looking for, but still interesting (though mostly saying things I already know/agree with).
My question is more in line with the recent post about the smallest possible button, and my own about the cost of unlocking optimization from observation. The question is not what problems computation can solve, but how far problem-solving can carry you in actually affecting the world. The limiting case would be, I guess: "suppose you have an oracle that, given the relevant information, can instantly return the optimal strategy to achieve your goal - how well does that oracle perform?". So, for instance, is technology so advanced that it truly looks like magic even to us possible at all? I assume some things (deadly enough in their own right), like nanotech and artificial life, are; but I wonder about even more exotic stuff.
suppose you have an oracle that given the relevant information can instantly return the optimal strategy to achieve your goal, how well does that oracle perform?
I guess CT experts (which I am not) would say it either depends on boring details or belongs to one of three possibilities:
My point is that there have to be straight-up impossibilities in there. For example, if you were constrained to build a molecule out of only 3 atoms, there are only so many stable combinations. When one considers nanomachines, it's reasonable to imagine there is a minimum physical size that can embed a given program, and that size also puts limits on effectiveness, lifetime, and sensory abilities. For example, you lose resolution of movement, because the smaller you are, the stronger the effect of Brownian forces - that sort of thing, at the crossroads between complexity theory and thermodynamics.
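The Brownian point can be made quantitative with the Stokes–Einstein relation, D = k_B·T / (6·π·η·r): the diffusion coefficient scales inversely with radius, so smaller machines get buffeted around proportionally faster. A sketch under assumed water-like conditions (the viscosity value and the two radii are illustrative choices, not from the comment):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def diffusion_coefficient(radius_m: float,
                          temperature_k: float = 300.0,
                          viscosity_pa_s: float = 8.9e-4) -> float:
    """Stokes-Einstein diffusivity of a sphere in a fluid (water-like defaults)."""
    return K_B * temperature_k / (6 * math.pi * viscosity_pa_s * radius_m)

# A 1 nm "nanomachine" diffuses 1000x faster than a 1 um one,
# since D scales as 1/r:
d_small = diffusion_coefficient(1e-9)
d_large = diffusion_coefficient(1e-6)
print(f"D(1 nm) = {d_small:.2e} m^2/s")
print(f"D(1 um) = {d_large:.2e} m^2/s")
# One-axis RMS displacement over time t is sqrt(2*D*t); over 1 s the
# 1 nm machine drifts tens of micrometres purely from thermal noise.
print(f"1 nm machine drifts ~{math.sqrt(2 * d_small):.1e} m in 1 s")
```

So a device shrunk by three orders of magnitude loses positional control by the same factor in diffusivity, which is one concrete way size bounds effectiveness.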
A big question that determines a lot about what risks from AGI/ASI may look like is what kinds of things our universe's laws allow to exist. There is an intuitive sense in which these laws - involving certain symmetries, as well as the inherent smoothing out caused by statistics over large ensembles, and thus thermodynamics - allow only certain kinds of things to exist and work reliably. For example, we know "rocket that travels to the Moon" is definitely possible. "Gene therapy that allows a human to live and be youthful until the age of 300" or "superintelligent AGI" are probably possible, though we don't know how hard. "Odourless ambient-temperature-and-pressure gas that kills everyone who breathes it if and only if their name is Mark, with 100% accuracy" probably is not. Are there known attempts at systematising this issue using algorithmic complexity, placing theoretical and computational bounds, and so on?