It is generally accepted in local AI alignment circles that the whole field is pre-paradigmatic in the Kuhnian sense (phase 1, as summarized here, if longer reading is not your thing). And yet plenty of people are quite confident in their predictions of either doom or fizzle. A somewhat caricatured way of representing their logic is, I think, "there are so many disjunctive ways to die, only one chance to get it right, and we don't have a step-by-step how-to, so we are hooped" vs "this is just one of many disruptive inventions whose real impact can only be understood way down the road, and all of them so far have resulted in net benefit; AI is just another example" (I have low confidence in the accuracy of the latter description, feel free to correct). I can see the logic in both of those; what I do not see is how one can rationally have very high or very low confidence, given how much inherent uncertainty there is in our understanding of what is going on.
My default is something more cautious, akin to Scott Alexander's Epistemic Learned Helplessness (https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/), where one recognizes one's own reasoning limitations in the absence of hard empirical data: not "The Lens That Sees Its Flaws", but more like "The Lens That Knows It Has Flaws" without necessarily being able to identify them.
So, how can one be very very very sure of something that has neither empirical confirmation nor sound science behind it? Or am I misrepresenting the whole argument?
Taboo "rationally".
I think the question you want is more like "how can one have well-calibrated strong probabilities?" Or maybe "correct" ones. I don't think you need the word "rationally" here, and it's almost never helpful at the object level -- it's a tool for meta-level discussions, training habits, discussing patterns, and so on.
To answer the object-level question... well, do you have well-calibrated beliefs in other domains? Did you test that? What do you think you know about your belief calibration, and how do you think you know it?
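For concreteness, one crude way to check your own calibration (a minimal sketch in Python; the confidence buckets and the logged predictions here are made up for illustration) is to group past predictions by stated probability and compare each bucket's stated confidence to the fraction that actually came true:

```python
from collections import defaultdict

# Hypothetical log of past predictions: (stated probability, did it happen?)
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.5, False), (0.5, True),
]

# Group predictions into buckets by stated probability,
# then compare the stated confidence to the observed hit rate.
buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%}: observed {hit_rate:.0%} over {len(outcomes)} predictions")
```

If the observed hit rates track the stated probabilities reasonably well across buckets (and across domains), that is some evidence your intuitive numbers mean something.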
Personally, I think you mostly get there by looking at the argument structure. You can start with "well, I don't know anything about proposition P, so it gets a 50%", but as soon as you start looking at the details, that probability shifts. What paths lead there, and what don't? If you keep coming up with complex conjunctive arguments against, and multiple-path disjunctive arguments for, the probability rapidly goes up and can end up quite high. And that's true even if you don't know much about the details of those arguments, as long as you have any confidence at all that the process producing them is only somewhat biased. When you do have the ability to evaluate those details yourself, you can get to fairly high confidence.
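To make the conjunctive-vs-disjunctive point concrete with made-up numbers (purely an illustrative sketch assuming independent steps and paths, not anyone's actual estimates): a claim that requires all of several steps to hold gets multiplied down, while a claim that goes through if any of several paths works gets pushed up.

```python
import math

# Made-up illustrative probabilities, not real estimates; independence assumed.

# Conjunctive argument: the claim holds only if ALL steps hold.
conjunctive_steps = [0.8, 0.7, 0.6, 0.5]
p_conjunction = math.prod(conjunctive_steps)  # 0.8 * 0.7 * 0.6 * 0.5 = 0.168

# Disjunctive argument: the claim holds if ANY one path works.
disjunctive_paths = [0.3, 0.25, 0.2, 0.15]
p_disjunction = 1 - math.prod(1 - p for p in disjunctive_paths)  # ~0.643

print(f"all-steps-must-hold: {p_conjunction:.3f}")
print(f"any-path-suffices:   {p_disjunction:.3f}")
```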
That said, my current way of expressing my confidence on this topic is more like "on my main line scenarios..." or "conditional on no near-term giant surprises..." or "if we keep on with business as usual...". I like the conditional predictions a lot more, partly because I feel more confident in them and partly because conditional predictions are the correct way to provide inputs to policy decisions. Different policies have different results, even if I'm not confident in our ability to enact the good ones.
I used that description very much intentionally. As in, use your best Bayesian estimate, formal or intuitive.
As to the object level, "pre-paradigmatic" is essential. The field is full of unknown unknowns. Or, as you say, "conditional on no near-term giant surprises..." -- and there have been "giant surprises" in both directions recently, and there will likely be more soon. It seems folly to be very confident in any specific outcome at this point.