An alternative to always having a precise distribution over outcomes is imprecise probabilities: You represent your beliefs with a set of distributions you find plausible.

And if you have imprecise probabilities, expected value maximization isn't well-defined. One natural generalization of EV maximization to the imprecise case is maximality:[1] You prefer A to B iff EV_p(A) > EV_p(B) with respect to every distribution p in your set. (You're permitted to choose any option that you don't disprefer to something else.)
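
The rule above can be sketched in a few lines of code. This is a minimal illustration with hypothetical numbers of my own (the options, states, and credal set are made up just to show the mechanics): an option is permissible unless some alternative has strictly higher EV under every distribution in the set.

```python
import numpy as np

# Hypothetical setup (numbers are mine): 4 options, 4 possible states.
# utilities[i, s] = utility of option i if state s obtains.
utilities = np.array([
    [10.0, 0.0, 5.0, 5.0],
    [ 4.0, 4.0, 4.0, 4.0],
    [ 0.0, 9.0, 6.0, 2.0],
    [ 1.0, 1.0, 1.0, 1.0],   # dominated: worse than option 1 in every state
])

# An imprecise credal set: the distributions over states you find plausible.
credal_set = [
    np.array([0.7, 0.1, 0.1, 0.1]),
    np.array([0.1, 0.7, 0.1, 0.1]),
    np.array([0.25, 0.25, 0.25, 0.25]),
]

def strictly_preferred(a, b):
    """A is preferred to B iff EV_p(A) > EV_p(B) for *every* p in the set."""
    return all(utilities[a] @ p > utilities[b] @ p for p in credal_set)

def maximal_options():
    """Permissible options: those not dispreferred to any alternative."""
    n = len(utilities)
    return [a for a in range(n)
            if not any(strictly_preferred(b, a) for b in range(n) if b != a)]

print(maximal_options())   # option 3 is ruled out; 0, 1, 2 remain permissible
```

Note how weak the resulting order is: only the uniformly dominated option gets excluded, and the other three options are incomparable because each wins under some distribution in the set.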

If you don’t endorse either (1) imprecise probabilities or (2) maximality given imprecise probabilities, I’m interested to hear why.

  1. ^

    I think originally due to Sen (1970); just linking Mogensen (2020) instead because it's non-paywalled and easier to find discussion of Maximality there.

Kaarel


Here are some brief reasons why I dislike things like imprecise probabilities and maximality rules (somewhat strongly stated, medium-strongly held because I've thought a significant amount about this kind of thing, but unfortunately quite sloppily justified in this comment; also, sorry if some things below approach being insufficiently on-topic):

  • I like the canonical arguments for bayesian expected utility maximization ( https://www.alignmentforum.org/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations ; also https://web.stanford.edu/~hammond/conseqFounds.pdf seems cool (though I haven't read it properly)). I've never seen anything remotely close for any of this other stuff — in particular, no arguments that pin down any other kind of rule compellingly. (I associate with this the vibe here (in particular, the paragraph starting with "To the extent that the outer optimizer" and the paragraph after it), though I guess maybe that's not a super helpful thing to say.)
  • The arguments I've come across for these other rules look like pointing at some intuitive desiderata and saying these other rules sorta meet these desiderata whereas canonical bayesian expected utility maximization doesn't, but I usually don't really buy the desiderata and/or find that bayesian expected utility maximization also sorta has those desired properties, e.g. if one takes the cost of thinking into account in the calculation, or thinks of oneself as choosing a policy.
  • When specifying alternative rules, people often talk about things like default actions, permissibility, and preferential gaps, and these concepts seem bad to me. More precisely, they seem unnatural/unprincipled/confused/[I have a hard time imagining what they could concretely cache out to that would make the rule seem non-silly/useful]. For some rules, I think that while they might be psychologically different than 'thinking like an expected utility maximizer', they give behavior from the same distribution — e.g., I'm pretty sure the rule suggested here (the paragraph starting with "More generally") and here (and probably elsewhere) is equivalent to "act consistently with being an expected utility maximizer", which seems quite unhelpful if we're concerned with getting a differently-behaving agent. (In fact, it seems likely to me that a rule which gives behavior consistent with expected utility maximization basically had to be provided in this setup given https://web.stanford.edu/~hammond/conseqFounds.pdf or some other canonical such argument, maybe with some adaptations, but I haven't thought this through super carefully.) (A bunch of other people (Charlie Steiner, Lucius Bushnaq, probably others) make this point in the comments on https://www.lesswrong.com/posts/yCuzmCsE86BTu9PfA/there-are-no-coherence-theorems; I'm aware there are counterarguments there by Elliott Thornley and others; I recall not finding them compelling on an earlier pass through these comments; anyway, I won't do this discussion justice in this comment.)
  • I think that if you try to get any meaningful mileage out of the maximality rule (in the sense that you want to "get away with knowing meaningfully less about the probability distribution"), basically everything becomes permissible, which seems highly undesirable. This is analogous to: as soon as you try to get any meaningful mileage out of a maximin (infrabayesian) decision rule, every action looks really bad — your decision comes down to picking the least catastrophic option out of options that all look completely catastrophic to you — which seems undesirable. It is also analogous to trying to find an action that does something or that has a low probability of causing harm 'regardless of what the world is like' being imo completely impossible (leading to complete paralysis) as soon as one tries to get any mileage out of 'regardless of what the world is like' (I think this kind of thing is sometimes e.g. used in davidad's and Bengio's plans https://www.lesswrong.com/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architecture-for-safe-transformative-ai?commentId=ZuWsoXApJqD4PwfXr , https://www.youtube.com/watch?v=31eO_KfkjRQ&t=1946s ). In summary, my inside view says this kind of knightian thing is a complete non-starter. But outside-view, I'd guess that at least some people that like infrabayesianism have some response to this which would make me view it at least slightly more favorably. (Well, I've only stated the claim and not really provided the argument I have in mind, but that would take a few paragraphs I guess, and I won't provide it in this comment.)
  • To add: it seems basically confused to talk about the probability distribution on probabilities or probability distributions, as opposed to some joint distribution on two variables or a probability distribution on probability distributions or something. It seems similarly 'philosophically problematic' to talk about the set of probability distributions, and to decide in a way that depends a lot on how uncertainty gets 'partitioned' into the set vs. the distributions. (I wrote about this kind of thing a bit more here: https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future#vJg6BPpsG93iyd7zo .)
  • I think it's plausible there's some (as-of-yet-undeveloped) good version of probabilistic thinking+decision-making for less-than-ideal agents that departs from canonical bayesian expected utility maximization; I like approaches to finding such a thing that take aspects of existing messy real-life (probabilistic) thinking seriously but also aim to define a precise formal setup in which some optimality result could be proved. I have some very preliminary thoughts on this and a feeling that it won't look at all like the stuff I've discussed disliking above. Logical induction ( https://arxiv.org/abs/1609.03543 ) seems cool; a heuristic estimator ( https://arxiv.org/pdf/2211.06738 ) would be cool. That said, I also assign significant probability to nothing very nice being possible here (this vaguely relates to the claim: "while there's a single ideal rationality, there are many meaningfully distinct bounded rationalities" (I'm forgetting whom I should attribute this to)).

Thanks for the detailed answer! I won't have time to respond to everything here, but:

I like the canonical arguments for bayesian expected utility maximization ( https://www.alignmentforum.org/posts/sZuw6SGfmZHvcAAEP/complete-class-consequentialist-foundations ; also https://web.stanford.edu/~hammond/conseqFounds.pdf seems cool (though I haven't read it properly)). I've never seen anything remotely close for any of this other stuff

But the CCT only says that if you satisfy [blah], your policy is consistent with precise EV maximization. This do... (read more)

Kaarel
I agree that any precise EV maximization (which imo = any good policy) is consistent with some corresponding maximality rule — in particular, with the maximality rule with the very same single precise probability distribution and the same utility function (at least modulo some reasonable assumptions about what 'permissibility' means). Any good policy is also consistent with any maximality rule that includes its probability distribution as one distribution in the set (because this guarantees that the best-according-to-the-precise-EV-maximization action is always permitted), as well as with any maximality rule that makes anything permissible.

But I don't see how any of this connects much to whether there is a positive case for precise EV maximization? If you buy the CCT's assumptions, then you literally do have an argument that anything other than precise EV maximization is bad, right, which does sound like a positive case for precise EV maximization (though not directly in the psychological sense)?

Ok, maybe you're saying that the CCT doesn't obviously provide an argument for it being good to restructure your thinking into literally maintaining some huge probability distribution on 'outcomes' and explicitly maintaining some function from outcomes to the reals and explicitly picking actions such that the utility conditional on these actions having been taken by you is high (or whatever)? I agree that trying to do this very literally is a bad idea, e.g. because you can't fit all possible worlds (or even just one world) in your head, e.g. because you don't know likelihoods given hypotheses as you're not logically omniscient, e.g. because there are difficulties with finding yourself in the world, etc. — when taken super literally, the whole shebang isn't compatible with the kinds of good reasoning we actually can do and do do and want to do. I should say that I didn't really track the distinction between the psychological and behavioral question carefully in my original response.
Anthony DiGiovanni
As an aspiring rational agent, I'm faced with lots of options. What do I do? Ideally I'd like to just be able to say which option is "best" and do that. If I have a complete ordering over the expected utilities of the options, then clearly the best option is the expected utility-maximizing one. If I don't have such a complete ordering, things are messier. I start by ruling out dominated options (as Maximality does). The options in the remaining set are all "permissible" in the sense that I haven't yet found a reason to rule them out.

I do of course need to choose an action eventually. But I have some decision-theoretic uncertainty. So, given the time to do so, I want to deliberate about which ways of narrowing down this set of options further seem most reasonable (i.e., satisfy principles of rational choice I find compelling). (Basically I think EU maximization is a special case of “narrow down the permissible set as much as you can via principles of rational choice,[1] then just pick something from whatever remains.” It’s so straightforward in this case that we don’t even recognize we’re identifying a (singleton) “permissible set.”)

Now, maybe you'd just want to model this situation like: "For embedded agents, 'deliberation' is just an option like any other. Your revealed strict preference is to deliberate about rational choice." I might be fine with this model.[2] But:

  • For the purposes of discussing how {the VOI of deliberation about rational choice} compares to {the value of going with our current “best guess” in some sense}, I find it conceptually helpful to think of “choosing to deliberate about rational choice” as qualitatively different from other choices.
  • The procedure I use to decide to deliberate about rational choice principles is not “I maximize EV w.r.t. some beliefs,” it’s “I see that my permissible set is not a singleton, I want more action-guidance, so I look for more action-guidance.”

  1. ^

    "Achieve Pareto-efficiency" (as per the
Anthony DiGiovanni
My claim is that your notion of "utter disaster" presumes that a consequentialist under deep uncertainty has some sense of what to do, such that they don't consider ~everything permissible. This begs the question against severe imprecision. I don't really see why we should expect our pretheoretic intuitions about the verdicts of a value system as weird as impartial longtermist consequentialism, under uncertainty as severe as ours, to be a guide to our epistemics. I agree that intuitively it's a very strange and disturbing verdict that ~everything is permissible! But that seems to be the fault of impartial longtermist consequentialism, not imprecise beliefs.
Anthony DiGiovanni
No, you have an argument that {anything that cannot be represented after the fact as precise EV maximization, with respect to some utility function and distribution} is bad. This doesn't imply that an agent who maintains imprecise beliefs will do badly.

Maybe you're thinking something like: "The CCT says that my policy is guaranteed to be Pareto-efficient iff it maximizes EV w.r.t. some distribution. So even if I don't know which distribution to choose, and even though I'm not guaranteed not to be Pareto-efficient if I follow Maximality, I at least know I don't violate Pareto-efficiency if I do precise EV maximization"? If so: I'd say that there are several imprecise decision rules that can be represented after the fact as precise EV max w.r.t. some distributions, so the CCT doesn't rule them out. E.g.:

  • The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret.
  • The maximin rule (sec 5.4.1) is equivalent to EV max w.r.t. the most pessimistic distribution.

You might say "Then why not just do precise EV max w.r.t. those distributions?" But the whole problem you face as a decision-maker is, how do you decide which distribution? Different distributions recommend different policies. If you endorse precise beliefs, it seems you'll commit to one distribution that you think best represents your epistemic state. Whereas someone with imprecise beliefs will say: "My epistemic state is not represented by just one distribution. I'll evaluate the imprecise decision rules based on which decision-theoretic desiderata they satisfy, then apply the most appealing decision rule (or some way of aggregating them) w.r.t. my imprecise beliefs."

If the decision procedure you follow is psychologically equivalent to my previous sentence, then I have no objection to your procedure — I just think it would be misleading to say you endorse precise beliefs in that case.
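
For concreteness, here is a small sketch of the two rules mentioned above (toy numbers of my own; this just computes what each rule picks and takes no stance on the equivalence claims debated below):

```python
import numpy as np

# Toy setup (numbers mine): utilities[i, s] = utility of action i in state s;
# representor = the set of distributions over states you find plausible.
utilities = np.array([[10.0, 0.0],
                      [0.0, 10.0],
                      [4.0, 4.0]])
representor = [np.array([0.8, 0.2]),
               np.array([0.2, 0.8])]

# evs[i, j] = EV of action i under distribution j
evs = np.array([[u @ p for p in representor] for u in utilities])

# Maximin (sec 5.4.1): pick the action with the highest worst-case EV.
maximin_action = int(np.argmax(evs.min(axis=1)))

# Minimax regret (sec 5.4.2): regret of action i under distribution j is its
# shortfall from the best action under that distribution.
regret = evs.max(axis=0) - evs
minimax_regret_action = int(np.argmin(regret.max(axis=1)))

print(maximin_action, minimax_regret_action)   # both pick the hedged action 2
```

In this example both rules pick the hedged action, even though that action maximizes EV under neither distribution in the representor taken alone.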
Kaarel
Sorry, I feel like the point I wanted to make with my original bullet point is somewhat vaguer/different than what you're responding to. Let me try to clarify what I wanted to do with that argument with a caricatured version of the present argument-branch from my point of view:

your original question (caricatured): "The Sun prayer decision rule is as follows: you pray to the Sun; this makes a certain set of actions seem auspicious to you. Why not endorse the Sun prayer decision rule?"

my bullet point: "Bayesian expected utility maximization has this big red arrow pointing toward it, but the Sun prayer decision rule has no big red arrow pointing toward it."

your response: "Maybe a few specific Sun prayer decision rules are also pointed to by that red arrow?"

my response: "The arrow does not point toward most Sun prayer decision rules. In fact, it only points toward the ones that are secretly bayesian expected utility maximization. Anyway, I feel like this does very little to address my original point that there is this big red arrow pointing toward bayesian expected utility maximization and no big red arrow pointing toward Sun prayer decision rules." (See the appendix to my previous comment for more on this.)

That said, I admit I haven't said super clearly how the arrow ends up pointing to structuring your psychology in a particular way (as opposed to just pointing at a class of ways to behave). I think I won't do a better job at this atm than what I said in the second paragraph of my previous comment.

I'm (inside view) 99.9% sure this will be false/nonsense in a sequential setting. I'm (inside view) 99% sure this is false/nonsense even in the one-shot case. I guess the issue is that different actions get assigned their max regret by different distributions, so I'm not sure what you mean when you talk about the distribution that induces maximum regret. And indeed, it is easy to come up with a case where the action that gets chosen is not best according to any
Anthony DiGiovanni
I don't really understand your point, sorry. "Big red arrows towards X" only are a problem for doing Y if (1) they tell me that doing Y is inconsistent with doing [the form of X that's necessary to avoid leaving value on the table]. And these arrows aren't action-guiding for me unless (2) they tell me which particular variant of X to do. I've argued that there is no sense in which either (1) or (2) is true. Further, I think there are various big green arrows towards Y, as sketched in the SEP article and Mogensen paper I linked in the OP, though I understand if these aren't fully satisfying positive arguments. (I tentatively plan to write such positive arguments up elsewhere.) I'm just not swayed by vibes-level "arrows" if there isn't an argument that my approach is leaving value on the table by my lights, or that you have a particular approach that doesn't do so.
Anthony DiGiovanni
Oops sorry, my claim had the implicit assumptions that (1) your representor includes all the convex combinations, and (2) you can use mixed strategies. ((2) is standard in decision theory, and I think (1) is a reasonable assumption — if I feel clueless as to how much I endorse distribution p vs distribution q, it seems weird for me to still be confident that I don't endorse a mixture of the two.) If those assumptions hold, I think you can show that the max-regret-minimizing action maximizes EV w.r.t. some distribution in your representor. I don't have a proof on hand but would welcome counterexamples. In your example, you can check that either the uniformly fine action does best on a mixture distribution, or a mix of the other actions does best (lmk if spelling this out would be helpful).
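
Here is a crude numerical check of this claim in a toy two-state setup (all numbers are mine, not from the thread): with the representor closed under convex combinations and mixed strategies allowed, a grid search finds a minimax-regret mixture that (weakly) maximizes EV w.r.t. the midpoint distribution, which lies in the representor.

```python
import numpy as np

utilities = np.array([[10.0, 0.0],
                      [0.0, 10.0],
                      [4.0, 4.0]])

# Representor closed under convex combinations: p = [a, 1-a] for a in [0.2, 0.8].
alphas = np.linspace(0.2, 0.8, 61)
P = np.stack([alphas, 1 - alphas], axis=1)         # (61, 2) distributions

# Mixed strategies: weights over the 3 pure actions, on a coarse simplex grid.
steps = np.linspace(0.0, 1.0, 101)
mixes = np.array([[w0, w1, 1.0 - w0 - w1]
                  for w0 in steps for w1 in steps if w0 + w1 <= 1.0 + 1e-9])

evs = mixes @ utilities @ P.T                       # EV of each mix under each p
best_per_p = (utilities @ P.T).max(axis=0)          # best achievable EV per p
regret = (best_per_p - evs).max(axis=1)             # worst-case regret per mix

star = mixes[regret.argmin()]                       # minimax-regret mixed strategy
p_mid = np.array([0.5, 0.5])                        # a convex combination in the representor
print(star, (star @ utilities) @ p_mid, max(utilities @ p_mid))
```

In this example the minimax-regret strategy is the 50/50 mix of the first two actions, and it ties for the best EV under the midpoint distribution — consistent with the claim, though a grid search over one toy case is of course no proof.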
Kaarel
Oh ok yea that's a nice setup and I think I know how to prove that claim — the convex optimization argument I mentioned should give that. I still endorse the branch of my previous comment that comes after considering roughly that option though:
Anthony DiGiovanni
The branch that's about sequential decision-making, you mean? I'm unconvinced by this too, see e.g. here — I'd appreciate more explicit arguments for this being "nonsense."
Kaarel
To clarify, I think in this context I've only said that the claim "The minimax regret rule (sec 5.4.2 of Bradley (2012)) is equivalent to EV max w.r.t. the distribution in your representor that induces maximum regret" (and maybe the claim after it) was "false/nonsense" — in particular, because it doesn't make sense to talk about a distribution that induces maximum regret (without reference to a particular action) — which I'm guessing you agree with.

I wanted to say that I endorse the following:

  • Neither of the two decision rules you mentioned is (in general) consistent with any EV max if we conceive of it as giving your preferences (not just picking out a best option), nor if we conceive of it as telling you what to do on each step of a sequential decision-making setup. I think basically any setup is an example for either of these claims. Here's a canonical counterexample for the version with preferences and the max_{actions} min_{probability distributions} EV (i.e., infrabayes) decision rule, i.e. with our preferences corresponding to the min_{probability distributions} EV ranking:
    • Let a and c be actions and let b be flipping a fair coin and then doing a or c depending on the outcome. It is easy to construct a case where the max-min rule strictly prefers b to a and also strictly prefers b to c, and indeed where this preference is strong enough that the rule still strictly prefers b to a small enough sweetening of a and also still prefers b to a small enough sweetening of c (in fact, a generic setup will have such a triple). Call these sweetenings a+ and c+ (think of these as a-but-you-also-get-one-cent or a-but-you-also-get-one-extra-moment-of-happiness or whatever; the important thing is that all utility functions under consideration should consider this one cent or one extra moment of happiness or whatever a positive). However, every EV max rule (that cares about the one cent) will strictly disprefer b to at least one of a+ or c+, because if that weren't
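
The triple (a, b, c) can be made concrete with hypothetical numbers of my own (not from the comment): the max-min rule strictly prefers the coin flip b to both sweetenings, while every precise EV maximizer that cares about the sweetening prefers at least one of a+ or c+ to b.

```python
import numpy as np

# Utilities of each action in the two states; the representor contains the
# two degenerate distributions, so min-EV = the smaller component.
#              s1     s2
a = np.array([10.0,  0.0])
c = np.array([ 0.0, 10.0])
b = 0.5 * a + 0.5 * c        # fair coin between a and c -> EVs (5, 5)
a_plus = a + 0.01            # a sweetened by "one cent"
c_plus = c + 0.01

# Max-min (infrabayes-style) rule: rank by worst-case EV.
for name, act in [("a+", a_plus), ("b", b), ("c+", c_plus)]:
    print(name, act.min())   # b wins: 5.0 vs 0.01 vs 0.01

# But for every single distribution p = [q, 1-q], some sweetening beats b,
# since EV_p(a+) + EV_p(c+) = 10.02 > 2 * EV_p(b) = 10.
for q in np.linspace(0.0, 1.0, 11):
    p = np.array([q, 1 - q])
    assert max(a_plus @ p, c_plus @ p) > b @ p
```

So no precise EV ranking reproduces the max-min preferences b > a+ and b > c+ simultaneously, which is the point of the counterexample.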

RHollerith


My initial impulse is to treat imprecise probabilities like I treat probability distributions over probabilities: namely, I am not permanently opposed, but have promised myself that before I resort to one, I would first try a probability and a set of "indications" about how "sensitive" my probability is to changes: e.g., I would try something like

My probability is .8, but with p = .5, it would change by at least a factor of 2 (more precisely, my posterior odds would end up outside the interval [.5,2] * my prior odds) if I were to spend 8 hours pondering the question in front of a computer with an internet connection; also with p = .25, my probability a year in the future will differ from my current probability by at least a factor of 2 even if I never set aside any time to ponder the question.

I agree that higher-order probabilities can be useful for representing (non-)resilience of your beliefs. But imprecise probabilities go further than that — the idea is that you just don't know what higher-order probabilities over the first-order ones you ought to endorse, or the higher-higher-order probabilities over those, etc. So the first-order probabilities remain imprecise.

JBlack


Sets of distributions are the natural elements of Bayesian reasoning: each distribution corresponds to a hypothesis. Some people pretend that you can collapse these down to a single distribution by some prior (and then argue about "correct" priors), but the actual machinery of Bayesian reasoning produces changes in relative hypothesis weightings. Those can be applied to any prior if you have reason to prefer a single one, or simply composed with future relative changes if you don't.
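
A toy sketch of this point (hypothetical hypotheses and data of my own): the relative weightings computed from the evidence are prior-independent, and can be composed with whichever prior you eventually settle on.

```python
import numpy as np

# Three coin-bias hypotheses and observed flips H, H, T. The likelihood of the
# data under each hypothesis is all the Bayesian "machinery" produces; priors
# can be swapped in after the fact.
biases = np.array([0.3, 0.5, 0.7])
obs = [1, 1, 0]  # 1 = heads, 0 = tails

likelihood = np.prod([biases if o else 1 - biases for o in obs], axis=0)

posts = []
for prior in (np.array([1/3, 1/3, 1/3]), np.array([0.8, 0.1, 0.1])):
    post = prior * likelihood        # same relative change, different prior
    posts.append(post / post.sum())
    print(posts[-1])
```

The likelihood vector encodes the evidence once; the two posteriors differ only because the priors do.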

Partially ordering options by EV over all hypotheses is likely to be a very weak order with nearly all options being incomparable (and thus permissible). However, it's quite reasonable to have bounds on hypothesis weightings even if you don't have good reason to choose a specific prior.

You can use prior bounds to form very much stronger partial orders in many cases.

Dagon


For humans (and probably generally for embedded agents), I endorse acknowledging that probabilities are a wrong but useful model.  For any given prediction, the possibility set is incomplete, and the weights are only estimations with lots of variance.  I don't think that a set of distributions fixes this, though in some cases it can capture the model variance better than a single summary can.

EV maximization can only ever be an estimate.  No matter HOW you come up with your probabilities and beliefs about value-of-outcome, you'll be wrong fairly often.  But that doesn't make it useless - there's no better legible framework I know of.  Illegible frameworks (heuristics embedded in the giant neural network in your head) are ALSO useful, and IMO best results come from blending intuition and calculation, and from being humble and suspicious when they diverge greatly.  


A couple years ago, my answer would have been that both imprecise probabilities and maximality seem like ad-hoc, unmotivated methods which add complexity to Bayesian reasoning for no particularly compelling reason.

I was eventually convinced that they are useful and natural, specifically in the case where the environment contains an adversary (or the agent in question models the environment as containing an adversary, e.g. to obtain worst-case bounds). I now think of that use-case as the main motivation for the infra-Bayes framework, which uses imprecise probabilities and maximization as central tools. More generally, the infra-Bayes approach is probably useful for environments containing other agents.

Thanks! Can you say a bit on why you find the kinds of motivations discussed in (edit: changed reference) Sec. 2 of here ad hoc and unmotivated, if you're already familiar with them (no worries if not)? (I would at least agree that rationalizing people's intuitive ambiguity aversion is ad hoc and unmotivated.)

I think this quote nicely summarizes the argument you're asking about:

Not only do we not have evidence of a kind that allows us to know the total consequences of our actions, we seem often to lack evidence of a kind that warrants assigning precise probabilities to relevant states.

This, I would say, sounds like a reasonable critique if one does not really get the idea of Bayesianism. Like, if I put myself in a mindset where I'm only allowed to use probabilities when I have positive evidence which "warrants" those precise probabilities, then sure, it's a reasonable criticism. But a core idea of Bayesianism is that we use probabilities to represent our uncertainties even in the absence of evidence; that's exactly what a prior is. And the point of all the various arguments for Bayesian reasoning is that this is a sensible and consistent way to handle uncertainty, even when the available evidence is weak and we're mostly working off of priors.

As a concrete example, I think of Jaynes' discussion of the widget problem (pg 440 here): one is given some data on averages of a few variables, but not enough to back out the whole joint distribution of the variables from the data, and then various decision/inference problems are posed. This seems like exactly the sort of problem the quote is talking about. Jaynes' response to that problem is not "we lack evidence which warrants assigning precise probabilities", but rather, "we need to rely on priors, so what priors accurately represent our actual state of knowledge/ignorance?".
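
In the same maxent spirit, here is a hedged sketch with numbers of my own (a toy demand problem, not Jaynes' actual widget setup): given only a known mean, the maximum-entropy distribution is exponential in form, and its Lagrange multiplier can be found by bisection.

```python
import numpy as np

# Toy problem: daily demand is an integer 0..10, and all we know is that
# average demand is 3 units/day. Find the maxent distribution matching that.
values = np.arange(11)
target_mean = 3.0

def mean_given(lam):
    """Mean of the maxent candidate p_k proportional to exp(-lam * k)."""
    w = np.exp(-lam * values)
    return (w * values).sum() / w.sum()

# mean_given is decreasing in lam, so solve mean_given(lam) = 3 by bisection.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_given(mid) > target_mean:
        lo = mid
    else:
        hi = mid

p = np.exp(-lo * values)
p /= p.sum()
print(p.round(3))   # geometric-shaped distribution with mean 3
```

The point of the exercise is the one in the quote: rather than declaring the probabilities "unwarranted," one asks which distribution honestly encodes exactly the constraints we have and nothing more.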

Point is: for a Bayesian, the point of probabilities is to accurately represent an agent's epistemic state. Whether the probabilities are "warranted by evidence" is a non sequitur.

we need to rely on priors, so what priors accurately represent our actual state of knowledge/ignorance?

Exactly — and I don't see how this is in tension with imprecision. The motivation for imprecision is that no single prior seems to accurately represent our actual state of knowledge/ignorance.

Are there any propositions for which you think a single prior cannot capture your current betting odds / preference over lotteries?

I reject the premise that my beliefs are equivalent to my betting odds. My betting odds are a decision, which I derive from my beliefs.

"No single prior seems to accurately represent our actual state of knowledge/ignorance" is a really ridiculously strong claim, and one which should be provable/disprovable by starting from some qualitative observations about the state of knowledge/ignorance in question. But I've never seen someone advocate for imprecise probabilities by actually making that case.

Let me illustrate a bit how I imagine this would go, and how strong a case would need to be made.

Let's take the simple example of a biased coin with unknown bias. A strawman imprecise-probabilist might argue something like: "If the coin has probability p of landing heads, then after N flips (for some large-ish N) I expect to see roughly pN (plus or minus roughly sqrt(N)) heads. But for any particular number p, that's not actually what I expect a priori, because I don't know which p is right - e.g. I don't actually confidently expect to see roughly pN heads a priori for any particular p. Therefore no distribution can represent my state of knowledge.".

... and then the obvious Bayesian response would be: "Sure, if you're artificially restricting your space of distributions/probabilistic models to IID distributions of coin flips. But our actual prior is not in that space; our actual prior involves a latent variable (the bias), and the coin flips are not independent if we don't know the bias (since seeing one outcome tells us something about the bias, which in turn tells us something about the other coin flips). We can represent our prior state of knowledge in this problem just fine with a distribution over the bias.".
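
A quick numerical sketch of this point (uniform prior over the bias, my choice of prior): under the latent-variable model the flips are exchangeable but not independent, because each outcome is informative about the bias.

```python
import numpy as np

# Grid approximation to a uniform prior over the coin's bias.
biases = np.linspace(0, 1, 1001)
prior = np.full_like(biases, 1 / len(biases))

p_h1 = (prior * biases).sum()          # P(flip 1 = H)
p_h1h2 = (prior * biases**2).sum()     # P(flip 1 = H and flip 2 = H)

print(p_h1)            # 0.5
print(p_h1h2 / p_h1)   # about 2/3 > 0.5: heads makes more heads likelier
```

So the single prior over the bias already encodes the "I don't confidently expect roughly pN heads for any particular p" intuition: the marginal on each flip is 0.5, but the flips are correlated via the latent bias.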

Now, the imprecise probabilist could perhaps argue against that by pointing out some other properties of our state of knowledge, and then arguing that no distribution can represent our prior state of knowledge over all the coin flips, no matter how much we introduce latent variables. But that's a much stronger claim, a much harder case to make, and I have no idea what properties of our state of knowledge one would even start from in order to argue for it. On the other hand, I do know of various sets of properties of our state-of-knowledge which are sufficient to conclude that it can be accurately represented by a single prior distribution - e.g. the preconditions of Cox' Theorem, or the preconditions for the Dutch Book theorems (if our hypothetical agent is willing to make bets on its priors).

really ridiculously strong claim

What's your prior that in 1000 years, an Earth-originating superintelligence will be aligned to object-level values close to those of humans alive today [for whatever operationalization of "object-level" or "close" you like]? And why do you think that prior uniquely accurately represents your state of knowledge? Seems to me like the view that a single prior does accurately represent your state of knowledge is the strong claim. I don’t see how the rest of your comment answers this.

(Maybe you have in mind a very different conception of “represent” or “state of knowledge” than I do.)

Right, so there's room here for a burden-of-proof disagreement - i.e. you find it unlikely on priors that a single distribution can accurately capture realistic states-of-knowledge, I don't find it unlikely on priors.

If we've arrived at a burden-of-proof disagreement, then I'd say that's sufficient to back up my answer at top-of-thread:

both imprecise probabilities and maximality seem like ad-hoc, unmotivated methods which add complexity to Bayesian reasoning for no particularly compelling reason.

I said I don't know of any compelling reason - i.e. positive argument, beyond just "this seems unlikely to Anthony and some other people on priors" - to add this extra piece to Bayesian reasoning. And indeed, I still don't. Which does not mean that I necessarily expect you to be convinced that we don't need that extra piece; I haven't spelled out a positive argument here either.

It's not that I "find it unlikely on priors" — I'm literally asking what your prior on the proposition I mentioned is, and why you endorse that prior. If you answered that, I could answer why I'm skeptical that that prior really is the unique representation of your state of knowledge. (It might well be the unique representation of the most-salient-to-you intuitions about the proposition, but that's not your state of knowledge.) I don't know what further positive argument you're looking for.

Someone could fail to report a unique precise prior (and one that's consistent with their other beliefs and priors across contexts) for any of the following reasons, which seem worth distinguishing:

  1. There is no unique precise prior that can represent their state of knowledge.
  2. There is a unique precise prior that represents their state of knowledge, but they don't have or use it, even approximately.
  3. There is a unique precise prior that represents their state of knowledge, but, in practice, they can only report (precise or imprecise) approximations of it (approximate not just in the computed decimal places of a real number, but also in which things go into the prior). Hypothetically, in the limit of resources spent on computing its values, the approximations would converge to this unique precise prior.

I'd be inclined to treat all three cases like imprecise probabilities, e.g. I wouldn't permanently commit to a prior I wrote down to the exclusion of all other priors over the same events/possibilities.

What use case are you intending these for? Any given use of probabilities I think depends on what you're trying to do with them, and how long it makes sense to spend fleshing them out. 

Predicting the long-term future, mostly. (I think imprecise probabilities might be relevant more broadly, though, as an epistemic foundation.)