Hardly any potential catastrophes actually occur. If you only plan for the ones that do occur (say, by waiting until they happen, or by flawlessly predicting the future), then you save a lot of mental effort.
Also, consider how the difference between a potential and an actual catastrophe affects how willing you will be to make a desperate effort to find the best solution.
Why do you consider a possible AI person's feelings morally relevant? It seems like you're making an unjustified leap of faith from "is sentient" to "matters". I would be a bit surprised to learn, for example, that pigs do not have subjective experience, but I go ahead and eat pork anyway, because I don't care about slaughtering pigs and I don't think it's right to care about slaughtering pigs. I would be a little put off by the prospect of slaughtering humans for their meat, though. What makes you instinctively put your AI in the "human" category rather than the "pig" category?
I don't actually know that separate agree/disagree and low/high-quality buttons will be all that helpful. I'm not sure I personally can tell the difference very well.