I've had some success getting people to understand Bayesianism at parties and such, so I'm posting this thought experiment that I came up with to see whether it can be improved, or whether an entirely different thought experiment would be grasped more intuitively in that context:
Say there is a jar filled with dice. There are two types of dice in the jar: one is an 8-sided die with the numbers 1–8 on its faces, and the other is a trick die with a 3 on every face. The jar contains the two types in equal proportion. If a friend of yours grabbed a die from the jar at random, rolled it, and told you that it landed on a 3, is it more likely that they grabbed the 8-sided die or the trick die?
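For readers who want to check the answer, here is a minimal sketch of the jar problem worked through Bayes' rule (variable names are mine, purely for illustration):

```python
# Priors: the jar holds the two die types in equal proportion.
p_fair = 0.5
p_trick = 0.5

# Likelihood of rolling a 3 with each die type.
p3_given_fair = 1 / 8   # one face out of eight
p3_given_trick = 1.0    # every face shows a 3

# Bayes' rule: P(die | rolled 3) = P(rolled 3 | die) * P(die) / P(rolled 3)
evidence = p3_given_fair * p_fair + p3_given_trick * p_trick
p_fair_given_3 = p3_given_fair * p_fair / evidence
p_trick_given_3 = p3_given_trick * p_trick / evidence

print(p_trick_given_3)  # 8/9, about 0.889: the trick die is far more likely
```

So observing a 3 shifts you from even odds to 8:1 in favour of the trick die.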
I originally came up with this idea to explain falsifiability, which is why I didn't go with, say, the example in the better article on Bayesianism (i.e. rolling any number other than a 3 refutes the possibility that the trick die was picked) and the problem of a hypothesis that explains too much contradictory data. So eventually I increase the number of sides the die has (a hypothetical 50-sided die, say), the types of dice in the jar (100-sided, 6-sided, and trick dice), and the distribution of dice in the jar (90% of the dice are 200-sided but a 3 is rolled, etc.). Again, I've been discussing this at parties where alcohol is flowing and cognition is impaired, yet people understand it, so I figure that if it works there, it can be understood intuitively by many people.
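Those escalations all reduce to the same calculation with different inputs. A hedged sketch of a helper (the function name is mine) that generalizes the jar to any mixture of dice:

```python
# `dice` maps a label to (prior probability of drawing that die,
# probability that the die rolls a 3).
def posterior_given_3(dice):
    evidence = sum(prior * like for prior, like in dice.values())
    return {name: prior * like / evidence
            for name, (prior, like) in dice.items()}

# The 90%-200-sided variant from the post: even with a heavy prior
# toward the 200-sided die, a rolled 3 strongly favours the trick die.
post = posterior_given_3({"200-sided": (0.9, 1 / 200),
                          "trick":     (0.1, 1.0)})
print(post["trick"])  # about 0.957
```

This is the party-friendly point in code form: a strong enough likelihood ratio overwhelms even a lopsided prior.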
Ignoring, temporarily, everything but the first paragraph, there are two ways I might proceed.
Acting as a frequentist, I would suppose that die rolls can be modeled as independent, identically distributed draws from a multinomial distribution with fixed but unknown parameters. (The independence assumption, and to a lesser degree the identically-distributed assumption, could also be verified, although this gets a bit tricky.) I would roll the die some fixed number of times (possibly determined according to an a priori calculation of statistical power) and take the MLE as a point estimate of the unknown parameters. I would report this parameter as the probability of the die landing on the various sides. I might also report a 95% confidence region for the estimate, which is not to be interpreted as containing the true probabilities 95% of the time (it either does or does not, with certainty).
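For the multinomial model, the MLE is just the vector of empirical frequencies, so the frequentist procedure above can be sketched in a few lines (this is my illustration, with simulated rolls of a fair 8-sided die standing in for real data):

```python
import random
from collections import Counter

random.seed(0)                           # reproducible simulated data
faces = list(range(1, 9))                # a fair 8-sided die
rolls = [random.choice(faces) for _ in range(1000)]

# MLE for a multinomial: the observed frequency of each face.
counts = Counter(rolls)
mle = {face: counts[face] / len(rolls) for face in faces}
# With 1000 rolls, each estimate should land near the true value 1/8 = 0.125.
```

A confidence region would be computed on top of this, e.g. from the asymptotic normality of the MLE, which I omit here.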
Acting as a Bayesian, I would assume the same data model, but I would also place a prior distribution on the unknown parameter. A natural prior in this case is the Dirichlet distribution, which is conjugate to the multinomial distribution. I would also use the same data collection approach, although the Bayesian formulation makes it easy to work with the special case of observing a single roll. Given the model likelihood and the prior distribution, Bayes' law tells me the new posterior distribution to which I should update to represent my uncertainty over the unknown parameter. I would continue to roll the die and update until the posterior distribution is sufficiently concentrated according to some reasonable stopping criterion. I would then report the posterior mean (or maybe the MAP estimate) as the probability of the die landing on the various sides. I would also report a 95% credible region for the estimate, to which I would give 95% credence of containing the truth (although under questioning, I would probably be evasive/unclear about exactly what that means). I would also need to communicate a justification for my prior distribution and, ideally, evidence that the inference is not overly sensitive to it. I ought to just report the posterior distribution itself, but people tend to find it easier to base decisions on point estimates.
There are obvious similarities between these two inferential approaches, but they are answering slightly different questions using vastly different methods.
Suppose you are denied experimentation and denied an extremely powerful computer (e.g. you can only do <100 simulated trials but want reasonable accuracy), or need high accuracy in limited time. I was more interested in what you do when you have to solve something like this analytically, finding the probabilities for the three distinct sides.
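One reading of "solve analytically" here (my assumption, not the commenter's worked example) is that the jar posteriors have an exact closed form requiring no simulation at all. For the three-die jar mentioned earlier (100-sided, 6-sided, and trick, equal priors), exact arithmetic gives:

```python
from fractions import Fraction

# Equal prior over the three die types; likelihood of rolling a 3 for each.
prior = Fraction(1, 3)
like = {"100-sided": Fraction(1, 100),
        "6-sided":   Fraction(1, 6),
        "trick":     Fraction(1, 1)}

# Bayes' rule with exact fractions: no trials, no floating-point error.
evidence = sum(prior * l for l in like.values())
posterior = {name: prior * l / evidence for name, l in like.items()}
print(posterior["trick"])  # 300/353, about 0.85
```

This costs a handful of multiplications, so the <100-trial budget is irrelevant; simulation is only needed when the likelihoods themselves lack a closed form.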
The point here is that you want to go for physically justified assumptions. Anything not physically justified that you do anywhere is, in principle, the same as wilfully putting cognitive bias into your calculations, and is just plain wrong; no philosophical stuff here: you'll end up losing games against someone who solves it better. Maybe you guys need an "Overcoming Bayes" blog.