If you don't have an epistemically sound approach, you should probably say "I don't know" rather than use an epistemically unsound one, or at least say "this is really bad, and you shouldn't put much confidence in my conclusions, but it's the best I can do, so..."
That said, one option you have is not to calculate P(UAP) at all, and instead calculate a likelihood ratio:
P(UAP|UO1) / P(¬UAP|UO1) = P(UO1|UAP) / P(UO1|¬UAP) × P(UAP)/P(¬UAP)
So if you just calculate P(UO1|UAP) / P(UO1|¬UAP), then anyone can update their P(UAP) appropriately, regardless of where it started.
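As a minimal sketch of that update (the function names and all numbers are mine, purely illustrative), the odds form lets each reader plug in their own prior and the reported likelihood ratio:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = likelihood ratio x prior odds."""
    return likelihood_ratio * prior_odds

def odds_to_prob(odds):
    """Convert odds O = P / (1 - P) back into a probability."""
    return odds / (1 + odds)

# Hypothetical numbers: a reader whose prior is P(UAP) = 0.01, given a
# reported likelihood ratio P(UO1|UAP) / P(UO1|~UAP) = 10.
prior = 0.01
lr = 10
post = odds_to_prob(posterior_odds(prior / (1 - prior), lr))
print(round(post, 4))  # 0.0917
```

A reader who started from a different P(UAP) would run the same update with their own prior and get their own posterior; the likelihood ratio itself is the only shared ingredient.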
Could you elaborate?
P(rain | clouds) might be something like 0.7, and that means that P(¬rain | clouds) is 0.3. But P(rain | ¬clouds) is 0.
You simply can't calculate P(UO1|¬UAP) from P(UO1|UAP). You need to work it out some other way.
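To make that concrete (all numbers invented for illustration), two situations can agree on P(E|H) and yet yield completely different likelihood ratios, which is why P(UO1|¬UAP) has to be estimated on its own:

```python
def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """P(E|H) / P(E|~H): how strongly evidence E favours hypothesis H."""
    return p_e_given_h / p_e_given_not_h

# Same P(E|H) = 0.7 in both cases, but different P(E|~H):
print(likelihood_ratio(0.7, 0.7))   # 1.0 -- the evidence is uninformative
print(likelihood_ratio(0.7, 0.35))  # 2.0 -- the evidence favours H
```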
I also don't think that asking for P(UO1|UAP) and P(UO1|¬UAP) is reasonable without knowing anything about UO1. Right now I'm observing my watch tick; that's no more or less likely to happen in UAP-world than ¬UAP-world, so the likelihood ratio is one. If tomorrow night I go outside and see lots of bright lights in the sky, and a crop circle the next morning (which is especially weird because there didn't used to be any crops there at all), and the news reports that lots of other people have seen the same thing and the government is passing it off as a sighting of Venus, then that's somewhat more likely in UAP-world than ¬UAP-world.
This example suggests that you're confusing P(UO1|UAP) with P(UAP|UO1). To determine P(UO1|UAP), imagine you live in a world where UAP is true.
Unfortunately, the analysis so...
It would be a powerful tool to be able to dismiss fringe phenomena, prior to empirical investigation, on firm epistemological ground.
Thus I have elaborated on the possibility of doing so using Bayes, and this is my result:
Using Bayes to dismiss fringe phenomena
What do you think of it?