Submission for the counterfactual AI (inspired by my experiences as a predictor in the "Good Judgment Project"):
Being able to forecast the future is incredibly helpful, even if only to prepare for it.
However, if the question is overly specific, the AGI can produce probabilities that aren't entirely useful. For example, in the real-world GJP, two countries signed a peace treaty that broke down two days later. Most of us assumed no lasting peace would ever occur, so we put a low probability on a peace treaty being signed - but since a treaty was in fact signed, we got the question wrong. If we had been optimizing for the lowest possible Brier score, we should have predicted the existence of a very temporary peace treaty - but that wouldn't really be useful knowledge for the people who asked the question.
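To make the scoring incentive concrete, here is a minimal sketch in Python (not part of the original submission). It uses the simple binary form of the Brier score, (forecast - outcome)^2; the GJP actually used a multi-category variant that sums squared error over all answer options, which doubles the value for binary questions. The probabilities below are illustrative.

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the outcome (0 or 1).

    Lower is better; 0.0 is a perfect forecast.
    """
    return (forecast - outcome) ** 2

# The peace-treaty example: the question resolved YES because a treaty
# was signed, even though it collapsed two days later.
# A forecast capturing the useful truth ("no lasting peace") scores badly:
print(brier_score(0.10, 1))  # ~0.81
# A literal-minded forecast ("some treaty will be signed, however
# briefly") scores well despite conveying little of value:
print(brier_score(0.90, 1))  # ~0.01
```

The gap between those two scores is exactly the problem described above: the scoring rule rewards answering the letter of the question, not the intent behind it.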
Making the question very vague ("Will [COUNTRY_X] be safe, according to what I subjectively think the word 'safe' means?") turns "prediction" into an exercise in determining what future humans will think about the future, which may be somewhat useful, but isn't really what you want.
Quotes from H.P. Lovecraft's Nietzscheism and Realism (full text):
Of course, H.P. Lovecraft was not suicidal, but that might be because (a) death is inevitable, so there's no reason to rush the road to blissful oblivion, and (b) he's human just like everyone else, so he is just as valuable as everyone else. But note that he is in favor of mitigating suffering, and attaches no intrinsic value to life itself. He probably would be okay with life extension, but only if you can mitigate suffering in the process. If you can't do that, he'll probably complain. Conversely, if you find a way to convince people to give up their "primitive cowardice" and thereby ease humanity's suffering...well, he might consider it.