I think the best way to deal with AI alignment is to build AI not as a separate entity but as an extension and augmentation of ourselves. We are much better at applying AI in narrow contexts than in real-world AGI scenarios, and we still have time to think this through before rushing headlong into building autonomous agents. If humans can use AI and their own ingenuity to create functional brain-computer interfaces, aligned AI may never become a problem at all: because the AI is an extension of yourself, it will of course be aligned with you; it is you. What I mean is that as humans get better at interfacing with technology, the line between AI and human blurs.
One major subfield within AI is understanding how the human brain works and replicating it (while also making it more efficient with available technologies). I agree that we can't just stick one end of a wire into a brain and the other into a machine learning algorithm; the two certainly aren't compatible. But the machine learning and AI technologies we have today are giving us a better understanding of the human brain and how it works. My belief is that we will eventually understand why humans are, as far as we know, the most capable learning agents, and will identify the causes of our limitations so that our technology can eliminate them.
The only reasonable solution, then, is to merge with the technology or risk becoming obsolete. I believe this will become obvious as we approach "all-powerful" AGI. Such an AGI will almost certainly come about through attempts to replicate the human brain in technology, and because the two will share a similar structure, and because we must understand the brain in order to build one, linking brain and machine actually becomes trivial.