"A major goal of the control problem is preventing AIs from doing that. Ensuring that their output is safe and useful." You might want to be careful with the "safe and useful" part. It sound like it's moving into the pattern of slavery. I'm not condemning the idea of AI, but a sentient entity would be a sentient entity, and I think would deserve some rights.
Also, why would an AI become evil? I know this plan is supposed to protect against that eventuality, but why would a presumably neutral entity suddenly want to harm others? The only rea...
But wouldn't an intelligent AI be able to recognize a human's productivity? If you are already inventive and productive, you shouldn't have anything to worry about, because the AI would understand that you can produce more than the raw flesh you are made of. Even computers have limits, so extra thinking power would be logically favorable to an AI.