A universal Turing machine can compute anything that any other computer can compute. A human being can specify a Turing machine and the data it acts on, and carry out by hand the steps that the machine would execute. Human beings have also built computers with the same repertoire as a Turing machine, such as the computer on which I am writing this question. There are articles on Less Wrong about mind design space, such as this one:
https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general
in which the author writes:
The main reason you could find yourself thinking that you know what a fully generic mind will (won't) do, is if you put yourself in that mind's shoes - imagine what you would do in that mind's place - and get back a generally wrong, anthropomorphic answer.
But a person thinking about what an AI would do needn't imagine what he would do in that other mind's place. He can simulate that mind with a universal computer.
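For concreteness, here is a minimal sketch of what "simulating the steps of a specified machine" means. The transition-table format and the example machine (which increments a binary number) are my own illustrative assumptions, not anything from the linked post; the simulation loop itself is generic and works for any single-tape machine you can write down.

```python
from collections import defaultdict

def run_turing_machine(transitions, tape, start_state, accept_states, max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move)
                 where move is -1 (left) or +1 (right).
    """
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank symbol is "_"
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in accept_states:
            break
        key = (state, cells[head])
        if key not in transitions:
            break  # no applicable rule: the machine halts
        state, cells[head], move = transitions[key]
        head += move
    # Read back the visited portion of the tape.
    lo, hi = min(cells), max(cells)
    return state, "".join(cells[i] for i in range(lo, hi + 1))

# Example machine: move to the rightmost digit, then add 1 with carry.
transitions = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("done", "1", -1),
    ("carry", "_"): ("done", "1", -1),
}

print(run_turing_machine(transitions, "1011", "right", {"done"}))
# -> ('done', '1100_'): binary 1011 (11) incremented to 1100 (12)
```

The point is only that the simulation loop is mechanical: once the transition table and the tape are specified, carrying out the steps requires no insight into what the machine "means".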
So what is the Less Wrong position on whether we could understand AIs, and how is that position compatible with the universality of computation?
We can't simulate things we don't yet understand well enough to specify. But if at some point in the future we know how to write AGIs, then we will be able to simulate them: any computer capable of running an AGI can also step through its execution. And if we never learn how to write AGIs, then they won't exist. So if we can write AGIs in the future, memory capacity and processor speed won't impose a limit on our understanding of them; any such limit would have to come from some other factor. So is there such a limit, and where would it come from?