If the process of self-improving AI as described in a simple article by Tim Urban (linked below) is mastered, then the AI alignment problem is solved: "The idea is that we’d build a computer whose two-THREE major skills would be doing research on AI, ON ETHICS, and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter and ALIGNED."
The parts in caps are my additions for alignment.
Link to the article: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
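To make the proposal concrete, here is a minimal toy sketch of the loop I have in mind: the two skills from the quote (AI research and self-modification) plus the added ethics-research skill, with every self-change gated on an alignment check. Everything here is a placeholder of my own (the `Agent` class, its methods, and the check), not anything from the article; the `passes_alignment_check` stub is exactly the part my question below is asking how to specify mechanically.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy stand-in for the self-improving system; all fields are placeholders."""
    capability: float = 1.0
    alignment: float = 1.0
    patches: list = field(default_factory=list)

    def research_ai(self) -> float:
        # Skill 1 (original): research that could raise capability.
        return 0.1 * self.capability

    def research_ethics(self) -> float:
        # Skill 2 (the addition in caps): research that could raise alignment.
        return 0.1

    def propose_self_modification(self, ai_gain: float, ethics_gain: float) -> dict:
        # Skill 3 (original): coding changes into itself, expressed as a patch.
        return {"capability_delta": ai_gain, "alignment_delta": ethics_gain}

    def passes_alignment_check(self, patch: dict) -> bool:
        # Placeholder for a mechanically specified measure: here, simply
        # "never apply a patch that lowers alignment".
        return patch["alignment_delta"] >= 0

    def apply(self, patch: dict) -> None:
        self.capability += patch["capability_delta"]
        self.alignment += patch["alignment_delta"]
        self.patches.append(patch)


def bootstrap(agent: Agent, steps: int) -> Agent:
    """Run the loop from the quote: research AI, research ethics, modify itself."""
    for _ in range(steps):
        ai_gain = agent.research_ai()
        ethics_gain = agent.research_ethics()
        patch = agent.propose_self_modification(ai_gain, ethics_gain)
        if agent.passes_alignment_check(patch):  # gate every self-change
            agent.apply(patch)
    return agent


if __name__ == "__main__":
    print(bootstrap(Agent(), steps=5))
```

The interesting open question is what replaces the trivial `passes_alignment_check` above with something that actually measures alignment.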
Do you know of any other sketches of how to measure alignment that are reasonably close to being mechanically specified?