I think the risk with AI safety is that, whilst it is not an explicitly pro-tyranny ideology, there is an increasing need for governance and control, and so by blocking off governance by Friendly AGI we will get not less AI risk but more tyranny from human government structures (which are inherently corrupt and tyrannical, because humans are untrustworthy and traditional human government structures are riddled with brokenness)
You talk about "governance by Friendly AGI" as if it's a solved problem we're just waiting to deploy, not speculation that might simply not be feasible even if we solve AGI alignment - which is itself plausibly unsolvable in the near term. You also conflate AI safety research with AI governance regimes. Note, too, that the problems with governance are generally not a lack of intelligence among those in charge; they are largely conflicting values and requirements. With that said, you talk about modern liberal governments as if they are the worst thing we've experienced, "riddled with brokenness," as if that's the fault of the people in charge rather than the deeply conflicting mandates the populace gives them. And to the extent that systemic failure is the fault of the untrustworthy incentives of those in charge, why would controllable or aligned AGI fix that?
Yes, stasis isn't safe by default, but undirected progress isn't a panacea, and governance certainly isn't any closer to being solved just because AI is progressing.
A critical failure mode in many discussions of technological risk is the assumption that maintaining the status quo for technology would mean maintaining the status quo for society. Lewis Anslow suggests that this "propensity to treat technological stagnation as safer than technological acceleration" is a fallacy. I agree that it is an important failure of reasoning among some EAs, and I want to call it out clearly.
One obvious example of this, flagged by Anslow, is the anti-nuclear movement. It was not an explicitly pro-coal movement, but because there was continued pressure for economic growth, the result of delaying nuclear technology wasn't less power usage; it was more coal. To the extent that the movement succeeded narrowly, it damaged the environment.
The risk from artificial intelligence systems today is arguably significant, but stopping future progress won't reduce the impact of extant innovations. Stopping where we are today would still leave continued problems with mass disinformation assisted by generative AI, and we would see continued progress towards automation of huge parts of modern work even without more capable systems, as the systems which already exist are deployed in new ways. It also seems vanishingly unlikely that the pressures on middle-class jobs, artists, and writers would decrease even if we rolled back the last five years of progress in AI - but we would lose the accompanying productivity gains which could be used to pay for UBI or other programs.
All of that said, the perils of stasis aren't necessarily greater than those of progress. This is not a question of safety versus risk; it is a risk-risk tradeoff. And the balance of that tradeoff is debatable - it is entirely possible to agree that there are significant risks to technological stasis, and even that AI would solve real problems, and still debate which strategy to promote: whether it is safer to accelerate through the time of perils, to promote differential technological development, or to shut it all down.