It would be a powerful tool to be able to dismiss fringe phenomena, prior to empirical investigation, on firm epistemological ground.
Thus I have elaborated on the possibility of doing so using Bayes, and this is my result:
Using Bayes to dismiss fringe phenomena
What do you think of it?
Instead of relying on dubious priors, couldn't one simply avoid having to reliably estimate a prior probability P(UAP) by choosing a canonical dataset of observations, starting from a generic prior P(UAP) = 0.5, and then repeatedly updating P(UAP | observation x) for each observation x in the dataset?
In this way, the unreliable prior should gradually be diluted through the iterations, until it is overshadowed by the influence of the canonical observation data.
If so, how could one do this programmatically? And how could one do this analytically? (links are welcome!)
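Programmatically, a minimal sketch of the iteration I have in mind might look like this. The likelihood values are placeholders, not estimates: 0.8 mirrors the article's assumed P(observation | UAP), while P(observation | ¬UAP) = 0.3 is purely hypothetical and would have to be estimated for a real canonical dataset.

```python
def bayes_update(prior: float, p_x_given_h: float, p_x_given_not_h: float) -> float:
    """Return P(H | x) given P(H), P(x | H) and P(x | not H)."""
    numerator = p_x_given_h * prior
    evidence = numerator + p_x_given_not_h * (1.0 - prior)
    return numerator / evidence

p_uap = 0.5              # the generic starting prior proposed above
p_x_given_uap = 0.8      # the article's assumption
p_x_given_not_uap = 0.3  # hypothetical value, for illustration only

# Stand-in for the canonical dataset: True means the anomalous observation
# occurred, False means a null result (nothing anomalous was seen).
observations = [True, True, False, True, False]

for x in observations:
    if x:
        p_uap = bayes_update(p_uap, p_x_given_uap, p_x_given_not_uap)
    else:
        # A null result updates with the complementary likelihoods.
        p_uap = bayes_update(p_uap, 1.0 - p_x_given_uap, 1.0 - p_x_given_not_uap)
    print(f"P(UAP | data so far) = {p_uap:.4f}")
```

Analytically, if the observations are assumed conditionally independent given UAP, the n updates collapse into a single odds-form update:

$$
\frac{P(\mathrm{UAP} \mid x_1,\dots,x_n)}{P(\neg\mathrm{UAP} \mid x_1,\dots,x_n)}
= \frac{P(\mathrm{UAP})}{P(\neg\mathrm{UAP})}
\prod_{i=1}^{n} \frac{P(x_i \mid \mathrm{UAP})}{P(x_i \mid \neg\mathrm{UAP})}
$$

and with the generic prior P(UAP) = 0.5 the prior odds equal 1 and drop out entirely, leaving the posterior determined by the likelihood ratios of the dataset alone.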
I also hinted at these options in the 'Future work' section of the article, but I don't know how to get started on this approach.
As the goal is to say something prior to investigating the observation, I must assume as little as possible about the nature of the given observation. In the article I assumed P(observation | UAP) = 0.8.
If I could reuse this bit of information to say something about P(UO1|UAP) and P(UO1|¬UAP), then I wouldn't have broken the "assume as little as possible" premise any further.
Is that bit of information sufficient to say something useful about P(UO1|UAP) and P(UO1|¬UAP)?
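For concreteness, the update I'd need to evaluate is the standard Bayes' rule, which requires both likelihoods:

$$
P(\mathrm{UAP} \mid UO_1) = \frac{P(UO_1 \mid \mathrm{UAP})\,P(\mathrm{UAP})}{P(UO_1 \mid \mathrm{UAP})\,P(\mathrm{UAP}) + P(UO_1 \mid \neg\mathrm{UAP})\,P(\neg\mathrm{UAP})}
$$

As far as I can tell, the 0.8 only fixes the first factor in the numerator; if P(UO1|¬UAP) happened to be 0.8 as well, the posterior would equal the prior and the observation would carry no information at all. So the question seems to reduce to whether anything can be said about P(UO1|¬UAP).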
Indeed, if you have enough observations then the prior eventually doesn't matter. The difficulty is in the selection of the observations. Ideally you should include every potentially relevant observation -- including, e.g., every time someone looks up at the sky and doesn't see an alien spaceship, and every time anyone operates a radar or a radio telescope or whatever and sees nothing out of the ordinary.
In practice it's simply impractical to incorporate every potentially relevant observation...
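To make the first point precise: taking logs of the odds-form update (assuming, as in your sketch, that the observations are conditionally independent) gives

$$
\log \frac{P(\mathrm{UAP} \mid x_1,\dots,x_n)}{P(\neg\mathrm{UAP} \mid x_1,\dots,x_n)}
= \log \frac{P(\mathrm{UAP})}{P(\neg\mathrm{UAP})}
+ \sum_{i=1}^{n} \log \frac{P(x_i \mid \mathrm{UAP})}{P(x_i \mid \neg\mathrm{UAP})}.
$$

The prior contributes a fixed constant while the evidence term grows with n, which is the precise sense in which the prior gets washed out. It also makes the selection problem vivid: every null result you leave out of the dataset is a missing negative term in that sum.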