If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
I was just scrolling through Metaculus and its predictions for the US elections. I noticed that pretty much every case was a conditional: if Trump wins / if Trump doesn't win. I had two thoughts about the estimates for these. All seem to suggest the outcomes are worse under Trump. But that assessment of the outcomes being worse is certainly subject to my own biases, values, and preferences. (For example, is it really a bad outcome for US voters if the probability of China attacking Taiwan increases under Trump? I think so, but others may well see the costs necessary to reduce that likelihood as high for something that does not directly involve the USA.)
So my first thought was: how much bias should I infer is present in these probability estimates? I'm not sure. But that relates a bit to my other thought.
In one sense you could naively reason that if the probability of an outcome is p conditional on one candidate winning, it must be 1 − p conditional on the other, since only two candidates actually exist. But I think it is also clear that the two conditional probability distributions don't come from the same pool, so conceivably you could change the name to Harris and get the exact same estimates.
So I was thinking: what if Metaculus did run the two cases side by side? Would seeing p(Harris) + p(Trump) significantly different from 1 suggest one should have lower confidence in the estimates? I am not sure about that.
What if we see something like p(H) approximately equal to p(T)? Does that suggest the selected outcome is poorly chosen, since it is largely independent of the elected candidate, making the estimates largely meaningless in terms of election outcomes? I have a stronger sense that this is the case.
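To make the reasoning above concrete, here is a minimal sketch (with entirely hypothetical numbers, not actual Metaculus forecasts) of why the two conditional probabilities are not constrained to sum to 1, how they combine into an unconditional estimate via the law of total probability, and why near-equal conditionals mean the outcome is roughly independent of who wins:

```python
# All numbers are hypothetical, purely for illustration.
p_trump = 0.5            # hypothetical P(Trump wins)
p_harris = 1 - p_trump   # P(Harris wins), assuming only two possible winners

p_x_given_trump = 0.30   # hypothetical P(outcome X | Trump wins)
p_x_given_harris = 0.25  # hypothetical P(outcome X | Harris wins)

# The two conditionals live on different branches of the election,
# so their sum is not itself a probability and need not equal 1:
conditional_sum = p_x_given_trump + p_x_given_harris  # 0.55 here; anywhere in [0, 2]

# Law of total probability gives the unconditional estimate of X:
p_x = p_x_given_trump * p_trump + p_x_given_harris * p_harris  # 0.275

# If P(X | Trump) is close to P(X | Harris), X is roughly independent
# of who wins, so the forecast says little about the election's impact:
roughly_independent = abs(p_x_given_trump - p_x_given_harris) < 0.1

print(conditional_sum, p_x, roughly_independent)
```

The point of the last check is the one made above: near-equal conditionals are evidence the question is weakly coupled to the election, not evidence that either forecast is wrong.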
So my bottom line now is that I should not hold high confidence that the estimates on these outcomes are really meaningful with regard to the election's impacts.