(As usual for my questions, the focus here is "[advanced math]... that'll be needed for technical AI alignment".)
Could someone with good-but-not-great working memory, and infinite time, learn all known math? Or is there some "inherent intuitive complex nuance" thing (involving e.g. mental visualization) they need?
Reductionism would suggest the former (with some caveats), but computational intractability in real life might require the latter anyway. Human brains, paper, and code may not be able to bridge the "gap" between (less-precise proofs backed by advanced intuition) and (precise proofs simple enough for basically anyone to technically "follow").
And, for practical reasons, I wonder how "continuous" that gap is. E.g. how the tradeoff/tractability changes as one's working memory increases, or how the gap changes for different subfields of math.
I think any high level thought or movement is intuitive and approximate and not completely trustworthy, including high level thoughts about mathematics.
You find things by looking across long distances, but constructive proof steps only cross short distances; nothing new is actually found by applying simple rules. Mathematical proofs don't represent a way of thinking. They're artifacts produced after the thought has been done and the realization has been had, and they exist only to validate and to discipline (train) the higher-level heuristics you really use when you're navigating the overarching space of mathematics.
I'm not a mathematician, but if someone had told me this when I started undergrad, I most likely would've been better at math, and in that timeline I might have ended up becoming a mathematician.
This is consistent with what I've heard/read elsewhere, yeah.