I don't think there's a very good, precise way to do so, but one useful concept is the bid-ask spread, which is a way of protecting yourself from adverse selection of bets. E.g. consider two credences, both of which are 0.5: one in a proposition whose probability is firmly pinned down (say, that a fair coin will land heads), and one in a proposition I know very little about (say, whether the wind tomorrow will be blowing towards the equator).
Intuitively, however, the former is very difficult to change, whereas the latter might swing wildly given even a little bit of evidence (e.g. someone saying "I remember in high school my teacher mentioned that winds often blow towards the equator.")
Suppose I have to decide on a policy of accepting bets for or against each of these propositions at X:1 odds (i.e. my opponent puts up $X for every $1 I put up). For the first proposition, I might set X to 1.05, because as long as I have a small edge I'm confident I won't be exploited.
By contrast, if I set X=1.05 for the second proposition, then probably people will only decide to bet against me if they have more information than me (e.g. from checking weather forecasts), and so they'll end up winning a lot of money from me. So I'd actually want X to be something more like 2, or maybe higher depending on who I expect to be betting against, even though my credence right now is 0.5.
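To make the adverse-selection point concrete, here's a minimal sketch of a toy model (the fraction `q` of informed counterparties and the payoff setup are my own illustrative assumptions, not anything stated above): the break-even odds X rise sharply as more of your counterparties are likely to know the answer.

```python
# Toy adverse-selection model (illustrative assumptions only).
# I stake $1 and my counterparty stakes $X on a proposition I give credence 0.5.
# A fraction q of counterparties are "informed": they know the outcome and only
# bet when they will win. The rest pick a side at random.

def expected_profit(X: float, q: float) -> float:
    ev_vs_informed = -1.0                # an informed bettor only shows up when I lose my $1
    ev_vs_uninformed = 0.5 * X - 0.5     # random side: win $X or lose $1 with equal chance
    return q * ev_vs_informed + (1 - q) * ev_vs_uninformed

def break_even_odds(q: float) -> float:
    # Solve q*(-1) + (1 - q)*(0.5*X - 0.5) = 0 for X.
    return 1 + 2 * q / (1 - q)

for q in (0.0, 0.05, 0.25, 1 / 3):
    X = break_even_odds(q)
    assert abs(expected_profit(X, q)) < 1e-9
    print(f"q={q:.2f}  break-even X={X:.2f}")
# q=0.00 (nobody can know more than me, e.g. a fair coin): X just above 1 works.
# q=0.33: break-even X is 2.00, matching the "more like 2" intuition above.
```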
In your case, you might formalize this by talking about your bid-ask spread when trading against people who know about these bottlenecks.
Surely something like the expected variance of your credence would be a much simpler way of formalising this, no? Your probability over time is just a stochastic process, and the OP is expecting the variance of this process to be very high in the near future.
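A minimal sketch of that, plugging in the OP's 3% / 15% scenarios with weights I've assumed so the mean stays at the current 6%: the mean of the future credence is unchanged, but its standard deviation is large relative to the credence itself.

```python
# Treat the future credence as a random variable. Using the OP's two scenarios
# (p(doom) ends up at 15% or 3% depending on which bottlenecks are found),
# with weights chosen so the mean matches the current 6% credence.
outcomes = [0.15, 0.03]
weights  = [0.25, 0.75]   # assumed: 0.25*0.15 + 0.75*0.03 = 0.06

mean = sum(w * p for w, p in zip(weights, outcomes))
var  = sum(w * (p - mean) ** 2 for w, p in zip(weights, outcomes))

print(f"mean = {mean:.3f}")          # 0.060: unchanged, as it must be
print(f"std  = {var ** 0.5:.3f}")    # ~0.052: large relative to the 6% itself
```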
Someone asked basically this question before, and someone gave basically the same answer. It's a good idea, but there are some problems with it: it depends on your and your counterparties' risk aversion, wealth, and information levels, which are often extraneous.
If you're giving one number, that IS your all-inclusive probability. You can't predict the direction that new evidence will change your probability (per https://www.lesswrong.com/tag/conservation-of-expected-evidence), but you CAN predict that there will be evidence with equal probability of each direction.
An example is flipping a coin twice. Before any flips, you give 0.25 to each of HH, HT, TH, and TT. But you strongly expect to get evidence (observing the flips) that will first change two of them to 0.5 and two to 0, then a second update that changes one of the 0.5s to 1 and the other to 0.
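A quick numerical check of that example and of conservation of expected evidence: averaging the post-first-flip posterior for HH over what the first flip might show recovers the 0.25 prior.

```python
from itertools import product

# Two fair coin flips: prior over HH, HT, TH, TT is 0.25 each.
outcomes = {"".join(o): 0.25 for o in product("HT", repeat=2)}

# After observing the first flip, the posterior for HH is 0.5 or 0.
def posterior_HH(first_flip: str) -> float:
    consistent = {o: p for o, p in outcomes.items() if o[0] == first_flip}
    return consistent.get("HH", 0.0) / sum(consistent.values())

# Conservation of expected evidence: averaging the posterior over what the
# first flip might show recovers the prior.
expected_posterior = 0.5 * posterior_HH("H") + 0.5 * posterior_HH("T")
print(expected_posterior)   # 0.25, equal to the prior for HH
```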
Likewise for p(doom) before 2035: you strongly believe your probability will be 1 or 0 in 2036, and you currently believe 6%. You may be able to identify intermediate updates, and specify how the probability-weighted updates balance out to 0 now, even though each will become specific once the evidence is obtained.
I don't know of any shorthand for that - it's implied by the probability given. If you want to specify your distribution over probable future probability assignments, you can certainly do so, as long as the mean remains 6%. "There's a 25% chance I'll update to 15% and a 75% chance of updating to 3% over the next 5 years" is a consistent prediction.
you CAN predict that there will be evidence with equal probability of each direction.
More precisely, the expected value of upwards and downwards updates should be the same; it's nonetheless possible to be very confident that you'll update in a particular direction, offset by a much larger but proportionately less likely possible update in the other direction.
For example, I have some chance of winning the lottery this year, not much lower than if I actually bought a ticket. I'm very confident that each day I'll give somewhat lower odds (as there's less time remaining), but being credibly informed that I've won would change the odds so radically that the expectation balances out.
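A minimal sketch of that balancing act, with made-up numbers (a tiny per-day win chance q and d days left in the year): the credence drifts down almost every day, yet the rare jump to 1 makes the expected change exactly zero.

```python
# Hypothetical numbers: each remaining day has an independent chance q of a win.
q = 1e-9          # per-day win probability (made up, "found a ticket" level)
d = 200           # days left in the year

p_today   = 1 - (1 - q) ** d            # credence of winning at some point this year
p_if_lose = 1 - (1 - q) ** (d - 1)      # tomorrow's credence if today's draw misses
p_if_win  = 1.0                         # tomorrow's credence if credibly informed of a win

expected_tomorrow = q * p_if_win + (1 - q) * p_if_lose
print(p_today, p_if_lose, expected_tomorrow)
# p_if_lose < p_today (the near-certain small downward drift), yet
# expected_tomorrow equals p_today up to floating-point error.
```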
I think you’re trying to point towards multimodal distributions?
If you can decompose P(X) as P(X) = P(X|H1)P(H1) + ... + P(X|Hn)P(Hn), and the P(X|Hi) are nice unimodal distributions (like normal distributions), you can end up with a multimodal distribution for P(X).
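A minimal sketch with hypothetical numbers: mixing two well-separated normals for P(X|H1) and P(X|H2) gives a density with two distinct peaks.

```python
import numpy as np

# P(X) = P(X|H1)P(H1) + P(X|H2)P(H2), each conditional a normal distribution.
def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10, 10, 2001)
mixture = 0.6 * normal_pdf(x, -3.0, 1.0) + 0.4 * normal_pdf(x, 4.0, 1.5)

# Count interior local maxima of the mixture density on the grid.
peaks = np.sum((mixture[1:-1] > mixture[:-2]) & (mixture[1:-1] > mixture[2:]))
print(peaks)   # 2: the mixture of two unimodal densities is bimodal here
```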
A lot of the probabilities we talk about are probabilities we expect to change with evidence. If we flip a coin, our p(heads) changes once we observe the result of the flip. My p(rain today) changes after I look into the sky and see clouds. In my view, there is nothing special in that regard about your p(doom). Uncertainty is in the mind, not in reality.
However, how you expect your p(doom) to change depending on facts or observations is useful information, and it can be useful to convey it. Some options that come to mind:
1. Describe a model: if your p(doom) estimate is the result of a model built from other variables, just describing this model is useful information about your state of knowledge, even if that model is only approximate. This seems to come closest to your actual situation.
2. Describe your probability distribution over your p(doom) in 1 year (or another time frame): you could say that you think there is a 25% chance that your p(doom) in 1 year is between 10% and 30%, or give other information about that distribution. Note: your current p(doom) should be the mean of that distribution (see the sketch after this list).
3. Describe your probability distribution over your p(doom) after a hypothetical month of working on a better p(doom) estimate: you could say that if you were to work hard for a month on investigating p(doom), you think there is a 25% chance that your p(doom) after that month is between 10% and 30%. This is similar to 2., but imo a bit more informative. Again, your current p(doom) should be the mean of that distribution, even if you don't actually do the investigation.
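Here is a minimal sketch of option 2, with a Beta distribution standing in for the forecast of next year's p(doom); the parameters are arbitrary placeholders you would tune until the summary statistics match what you actually believe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate distribution for "my p(doom) one year from now".
# Beta(a, b) has mean a / (a + b); these parameters are placeholders.
a, b = 0.6, 9.4                      # mean 0.06, matching the current 6%
samples = rng.beta(a, b, size=1_000_000)

print(f"mean of future p(doom): {samples.mean():.3f}")            # stays ~0.06
print(f"P(10% <= future p(doom) <= 30%): "
      f"{np.mean((samples >= 0.10) & (samples <= 0.30)):.3f}")
# If that interval probability isn't the 25% you want to claim, adjust a and b
# while keeping a / (a + b) pinned at your current credence.
```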
I was thinking about my p(doom) in the next 10 years and came up with something around 6%[1]. However, that involves lots of things currently unknown to me, like the nature of current human knowledge production (and the bottlenecks involved), which would push my p(doom) to either 3% or 15% depending on what type of bottlenecks are found or not found. Is there a technical way to describe this probability distribution contingent on evidence?
[1] I'm bearish on LLMs leading to AI directly (10% chance), and given that, I put roughly a 30% chance on LLM-based AI fooming quickly enough to kill us, and wanting to kill us, within 10 years. There is a 3% chance that something comes out of left field and does the same.
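For what it's worth, here's a small arithmetic check of how those numbers fit together; it assumes (my reading, not necessarily the OP's) that the 30% is conditional on the 10% LLM path, and it backs out the weight on the 15% bottleneck scenario implied by the 6% headline number.

```python
# Footnote numbers: 10% chance of LLMs leading to AI directly, a 30% chance of
# doom conditional on that path, plus 3% from something out of left field.
p_llm_path       = 0.10
p_doom_given_llm = 0.30
p_left_field     = 0.03

p_doom = p_llm_path * p_doom_given_llm + p_left_field
print(f"overall p(doom): {p_doom:.2f}")                 # 0.06, the headline 6%

# Weight w on the 15% bottleneck scenario implied by 6% = w*0.15 + (1-w)*0.03:
w = (p_doom - 0.03) / (0.15 - 0.03)
print(f"implied weight on the 15% scenario: {w:.2f}")   # 0.25
```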