Firstly, it seems to me that it would be much more difficult to FOOM with an LLM, and much more difficult to create a superintelligence in the first place; getting LLMs to act creatively and reliably looks like a much harder problem than making sure they aren't too creative.
Au contraire, for me at least. I am no expert on AI, but prior to the LLM blowup, and then seeing AutoGPT emerge almost immediately afterward, I thought that endowing AI with agency would take an elaborate engineering effort that somehow went beyond imitation of human outputs such as language or imagery. I was somewhat skeptical of the orthogonality thesis. I also thought that it would take massive centralized computing resources not only to train but also to operate trained models (as I said, no expert). Obviously that is not true, and in a utopian outcome, access to LLMs will probably be a commodity good, with lots of roughly comparable models from many vendors to choose from, as well as widely available open-source or hacked models.
Now, I see the creation of increasingly capable autonomous agents as just a matter of time, and ChaosGPT is overwhelming empirical evidence of orthogonality as far as I'm concerned. Clearly morality has to be enforced on the fundamentally amoral intelligence that is the LLM.
My p(doom) increased because the orthogonality thesis now looked conclusively proved correct, and because I realized just how cheap and widely available advanced AI models would be to the general public.
Edit: One other factor I forgot to mention is how instantaneously we shifted from "AI doom is sci-fi, don't worry about it" to "AI doom is unrealistic because it just won't happen, don't worry about it" as LLMs became an instant sensation. I have been deeply disappointed here by Tyler Cowen, whom I really did not expect to shift from his usual thoughtful, balanced engagement with advanced ideas to utter punditry on this issue. I think I understand where he's coming from (the huge importance of growth, the desire not to see AI killed by overregulation in the manner of nuclear power, etc.), but still.
It has reinforced my belief that a fair fraction of the wealthy segment of the boomer generation will see AI as a way to cheat death (a goal I'm a big fan of), and will rush full steam ahead to extract longevity tech out of it, because they personally do not have time to wait for AI to be aligned, and if they wait they're dead anyway. I expect approximately zero of them to admit this is a motivation, and only a few more to be crisply conscious of it.
The reason to ban GPT5 (at least in my mind) is that each incremental chunk of progress shrinks the distance from here to an AGI Foom and total loss of control over the future, and that there won't be an obvious step after GPT5 at which to stop.
(I think GPT5 wouldn't be dangerous by default, but it could become dangerous if used as the base for an RL-trained, agent-type AI, and we've seen with GPT4 that people move on to that pretty quickly.)