We can't simulate things we don't yet understand. But if at some point in the future we know how to write AGIs, then we will also be able to simulate them; and if we never learn how to write AGIs, then they won't exist at all. So if we can write AGIs in the future, memory capacity and processor speed won't impose a limit on our understanding. Any such limit would have to come from some other factor. Is there such a limit, and where would it come from?
If you read the quote carefully, you will find that it is incompatible with the position you are attributing to Deutsch. For example, he writes about
which would hardly be necessary if computational universality were equivalent to being a universal explainer.