I asked a similar question about WWI rather than x-risk. It seems that we are pretty bad at answering these kinds of questions.
I think these questions are pretty bad at us. Counterfactuals are confusing in any case, but very complex states of the universe, shaped by the actions of tens of thousands of significant people (and billions of people of unknown influence), are truly impossible to model well enough to know what "could have happened" even means.
I know this doesn't help with your goal. My point is: we don't know yet.
(if you had a time machine) don't reroll the dice
I think it could at the very least be useful to go back just 5-20 years to share current alignment progress and the story of how the future played out with LLMs.
What is the MINIMUM amount of backwards time travel (only one step backwards, after which the remainder of the person's life is lived forward) and the MINIMUM amount of resources (let's say USD in XXXX year) a person would need to be sure that any problems associated with existential risks will be handled adequately (i.e. human flourishing)? Any specific person can be sent back; we can assume they are completely motivated to the task and have the necessary domain knowledge. Convincing everyone you're from the future does not count.
Alternatively, if this seems insufficient: what specific extra knowledge would need to be brought back that we may not know right now?
The rationale for this post is to get a better idea of what a successful AI governance or alignment plan would have looked like.