Submission statement: I mostly finished this a year ago, but held off on posting because I was planning on improving it and writing a corresponding "here's the concepts without the math" post. Might still happen, but now I'm not aiming at a specific timeline.
Still, I hope this is kinda useful for some people.
Edit: Also, there are some issues with the MathJax and dollar signs; I will fix this later.
I consider the problem of resolving preferences that are inconsistent under the von Neumann-Morgenstern axioms into consistent preferences. For preferences over deterministic options, I model inconsistent preferences as directed graphs, and their resolution as selecting acyclic tournaments with the same vertices and minimal graph-edit distance, or via Hodge decomposition. For preferences over lotteries, I offer two different methods for modeling inconsistency and one method for resolving it: as edge-weighted weakly connected directed graphs (resolved via Hodge decomposition) and as arbitrary relations over lotteries. Neither of the two representations proves satisfactory. I apply the findings to propose an algorithm for changing a utility function as the underlying set of objects changes.
In economics, decision theory, game theory and parts of artificial intelligence, the standard approach to modeling actors is to assume those actors have a fixed utility function they optimise Peterson 2017, ch. 6, Tadelis 2013, ch. 2, Russell & Norvig 2010, ch. 16, following the foundations laid by von Neumann and Morgenstern von Neumann & Morgenstern 1947, ch. 3. This model is quite appealing: It assigns a real number to each possible outcome, several theorems establish that an agent with a utility function can't be money-pumped Gustafsson 2022, and it is compatible with taking Pareto improvements Wald 1947.
However, this model has come under criticism as being non-descriptive of human preferences, which can be experimentally shown to violate one or more of the von Neumann-Morgenstern axioms Allais 1953, El Gamal 2013. Furthermore, the AI systems humanity has constructed so far usually have no in-built utility functions and appear inconsistent, as they are often programs selected by gradient descent to perform well on a loss or reward function, and it is doubtful that they have internal goal representations that correspond to their loss or reward function Hubinger 2019.
This tension between the normative theory of rational agency and observations of intelligent systems in the real world raises the question of how one could modify the preferences of such systems to make them more consistent.
Motivation
The intuitive case for focusing on resolving inconsistent preferences is that, given a normative ideal for rationality, real-life systems will probably not perfectly conform to that ideal. So we have the ideal and we have the real-life situation—it is natural to ask how to get from here to there.
I claim the project of finding procedures for modifying preferences to make them consistent is interesting and important for several different reasons:
Why vNM?
The von Neumann-Morgenstern axioms have been both criticized and defended as the true theory of rationality. I don't have a very strong position on this, and use vNM because it's the current "state of the art" in decision theory—it seems plausible to me that vNM will be superseded by some theory that is "better" along the relevant dimensions<sub>57%</sub>. I hope that in that case the lessons learned from resolving vNM-inconsistent preferences transfer over somewhat.
Structure of the Text
This text starts by explaining the von Neumann-Morgenstern axioms and various theorems relating the axioms to concepts such as Dutch books and Pareto improvements. There is a well-developed literature discussing the relevance of these axioms, and I tentatively conclude that these axioms are worth taking as a standard for rational agency. I also observe that humans do not satisfy those axioms.
I then examine the literature on inconsistent preferences, finding investigations from economics on time-inconsistent preferences and some scattered attempts in the non-academic literature, but no satisfactory investigations into the topic that cover all possible violations of the von Neumann-Morgenstern axioms.
I then proceed to analyse the problem of resolving inconsistent preferences in two cases:
I finally speculate about one application of the methods for resolving incoherence: Incorporating changes in the world model into preferences defined over that world model.
Related Work
As far as our literature review revealed, the academic literature contains no investigation of the specific question I'm attempting to answer.
Modeling Inconsistent Preferences
In the economic literature, preferences are usually more restricted than in the von Neumann-Morgenstern setting: It is usually assumed that there is a set of goods B and a utility function U:B×R→R that takes as arguments a good and the amount of that good that has been consumed. Consumption can take place at different time steps: Let c:B×N→R be a function that returns the consumption of a good at a specific timestep. With a single good b and different quantities c(b,1),c(b,2),…,c(b,n) consumed at n timesteps, the time-discounted utility (with discount factor δ) of this consumption is ∑_{i=1}^{n} δ^i⋅U(b,c(b,i)) (which is equivalent to the use of discount rates in reinforcement learning Sutton 2020, ch. 3).
A common way of modeling human preferences that are not exponentially time-discounted in this way is hyperbolic discounting, in which the discounting factor is a hyperbolic function with a parameter k instead of an exponential one. Let Uh(b,i,k)=(1/(1+k⋅i))⋅U(b,c(b,i)) be the hyperbolically discounted utility of consuming b at time step i.
This kind of discounting leads to disproportionately preferring small rewards soon over large rewards later, and can lead to preference reversals: For two goods b and b′, an agent can have the preference Uh(b,i,k)>Uh(b′,i+c,k) for a time step i and a time step i+c, but reverse that preference if the pair lies at another timestep j: Uh(b,j,k)<Uh(b′,j+c,k). Such hyperbolic discounting has been observed in humans Myerson & Green 1994 and pigeons Ainslie & Herrnstein 1981. This kind of preference reversal does not occur with exponential discounting.
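The reversal can be sketched numerically. In this sketch (not from the paper), the reward sizes 10 and 15, the gap of 5 timesteps, and the parameters k and δ are made-up illustration values:

```python
# Sketch: exponential vs. hyperbolic discounting of a fixed utility value.

def exponential(u, t, delta=0.9):
    """Exponentially discounted utility of reward u received at timestep t."""
    return (delta ** t) * u

def hyperbolic(u, t, k=1.0):
    """Hyperbolically discounted utility of reward u received at timestep t."""
    return u / (1 + k * t)

small_soon, large_later, gap = 10, 15, 5

# Viewed up close (t=0 vs. t=5): the small, sooner reward wins.
assert hyperbolic(small_soon, 0) > hyperbolic(large_later, gap)
# The same pair viewed from further away (t=10 vs. t=15): the preference reverses.
assert hyperbolic(small_soon, 10) < hyperbolic(large_later, 15)

# With exponential discounting the comparison is invariant under time shifts,
# so no reversal can occur:
for t in range(20):
    same_sign = exponential(small_soon, t) > exponential(large_later, t + gap)
    assert same_sign == (exponential(small_soon, 0) > exponential(large_later, gap))
```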
Hyperbolic preferences can be modeled in a game-theoretic setup in which subagents in aggregation execute a Pareto-dominated strategy, or via a single agent which follows an unchangeable plan Caillaud & Jullien 2000. Caillaud and Jullien do not attempt to resolve these time-inconsistencies to make them time-consistent. Backus and Zin explore further alternatives to the time-discounted utility setup, though they still work with utility functions that are invariant under positive affine transformation Backus et al. 2004.
Resolving Inconsistent Preferences
In the context of taxonomical data, Sun et al. 2017 investigate the problem of recovering hierarchies from noisy data. They represent inconsistent taxonomies as directed graphs containing cycles and consistent hierarchical taxonomies as directed acyclic graphs. They find that, when measuring the number of edges being removed, a voting ensemble of several different techniques such as TrueSkill does well at removing as few edges as possible, and usually outperforms removing greedy approximations of the feedback arc set Sun et al. 2017.
Outside of the academic literature, Aird & Shovelain 2020 represent inconsistent preferences as vector fields on a state space (for example states with more/less security and more/less wealth), where a vector v at a specific point p in the vector field indicates a preference for a change in the direction of v at p.
However, as they note, such a vector field can have inconsistencies in the form of curl. They then discuss the restrictions on the vector field so that it conforms to the von Neumann-Morgenstern axioms, which they conclude to be potential vector fields, and outline how to use Helmholtz decomposition to decompose inconsistent preference vector fields with three dimensions. Their approach bears a strong resemblance to the Hodge decomposition we use with edge-weighted graphs.
Taking a very different approach, Kirchner 2022 investigates how to infer utility functions from non-transitive preferences using a neural network. Kirchner relates inferring such preferences to sorting data in which comparisons sometimes are random, resulting in cycles during comparison. He finds that this approach is able to reconstruct orderings even when 10% of the results of comparisons are noise.
Learning Inconsistent Preferences
The problem of inferring the preferences of irrational agents has been formally posed Mindermann & Armstrong 2018: It is in general impossible to learn such preferences, as any action is equally compatible both with a preference for that action and with a systematic bias causing the action. Nevertheless, Evans et al. 2016 find a framework that is experimentally successful at inferring the preferences of an agent with time-inconsistent hyperbolic discounting and incorrect beliefs using Bayesian inference. Their method for inferring the preferences of inconsistent software agents gives results similar to estimates made by humans. Their framework does not cover all possible variants of inconsistent preferences, and makes no statement about how to resolve the time-inconsistencies. Evans et al. also give no theoretical guarantee about the performance of their method.
The von Neumann-Morgenstern Axioms
The von Neumann-Morgenstern (vNM) axioms and the framework of utility functions are widely regarded as the standard method of modeling preferences over world-states.
There is an extensive philosophical debate about the reasonableness of the vNM axioms, and a number of proposed alternatives. We have explicitly decided not to contribute to this debate (though some of our findings on the difficulty of establishing vNM-coherence might be interesting to philosophers), and instead assume that preferences conforming to the vNM axioms are a goal to be achieved.
Let Ω be a set of n distinct outcomes, and let Δ(Ω) be the set of all probability distributions on Ω, which von Neumann and Morgenstern call "lotteries" von Neumann & Morgenstern 1947.
For given ω1,ω2∈Ω, a lottery in which ω1 has a probability p1 and ω2 has a probability p2 is written as [p1:ω1,p2:ω2][1].
Definition 1. Let l1,l2,l3∈Δ(Ω). Let ⪯ be a relation on all lotteries on Ω, that is ⪯⊆Δ(Ω)×Δ(Ω).
If l1⪯l2 and l2⪯l1, then we write l1∼l2.
Then the relation ⪯ is a preference relation if and only if it fulfills the four von Neumann-Morgenstern axioms
The axiom of completeness implies reflexivity: For all lotteries l it holds that l⪯l.
We denote the probability a lottery l assigns to ω∈Ω as pl(ω).
Given a preference relation ⪯, one can create a function U:Δ(Ω)→[0;1] for which it holds that U(l1)≥U(l2) if and only if l2⪯l1 von Neumann & Morgenstern 1947, ch. 3.6.
Definition 2. This function is called a utility function for the preference relation ⪯.
Let us as a shorthand write ω for the lottery that assigns probability 1 to ω, and probability 0 to all other options (we call such a lottery a "deterministic option").
U has the property that for any lottery l from Δ(Ω), the value U(l) is simply the expected value of l, that is the mean of the utilities weighted by the probabilities:
U(l)=∑ω∈ΩU(ω)⋅pl(ω)
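As a small illustration (the outcome names and utility values below are made up, not from the text), this expectation can be computed directly:

```python
# Sketch: U(l) as the probability-weighted mean of outcome utilities.

U = {"apple": 1.0, "banana": 0.4, "cherry": 0.0}  # utilities of deterministic options

def lottery_utility(lottery, U):
    """U(l) = sum over omega in Omega of U(omega) * p_l(omega)."""
    assert abs(sum(lottery.values()) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(U[omega] * p for omega, p in lottery.items())

l1 = {"apple": 0.5, "cherry": 0.5}  # a 50/50 lottery
l2 = {"banana": 1.0}                # a deterministic option

assert lottery_utility(l1, U) == 0.5  # 0.5 * 1.0 + 0.5 * 0.0
assert lottery_utility(l2, U) == 0.4
assert lottery_utility(l1, U) > lottery_utility(l2, U)  # so l2 ⪯ l1
```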
Assuming Asymmetry
Definition 3. A relation ≺⊆Δ(Ω)×Δ(Ω) is a strict preference relation if and only if it fulfills the four von Neumann-Morgenstern axioms and also the additional criterion of antisymmetry: if l1≺l2 and l2≺l1, then l1=l2.
The reason for this assumption is that one of the algorithms we investigate (namely EGEDmin) produces a total order over Ω. This restriction does not change the fundamental structure of the vNM axioms; specifically, it does not affect the continuity axiom (as even with strict preferences over deterministic options, there can still be non-strict preferences over lotteries).
Inconsistent Preferences over Deterministic Options
A consistent preference over Ω that fulfills completeness, transitivity and antisymmetry can be represented by an acyclic tournament G[2], with E⊆Ω×Ω. That is, G itself is complete, transitive and antisymmetric. We call such G a consistent graph (or consistent directed graph, or acyclic tournament).
The set of possible preferences over Ω (including inconsistent preferences), PΩ, may be represented as the set of all directed graphs with vertices Ω. We will use Pn to denote the set of all directed graphs with n vertices (n=|Ω|), allowing for reflexive edges (that is, edges of the form (ω1,ω1)). The set PΩ can be constructed by enumerating the set of adjacency matrices (elements of {0,1}^(n×n)) and then, for each adjacency matrix, constructing the corresponding graph. There are 2^(n²) possible preferences in PΩ.
For a directed graph G∈PΩ, one can interpret the presence of an edge (ω1,ω2)∈EG, with ω1,ω2∈Ω, as "ω1 is preferred over ω2", written ω1≻ω2 or ω1→ω2.
The set CΩ of consistent graphs over Ω, with CΩ⊂PΩ, can be constructed by enumerating the set of permutations of Ω, constructing a strict total order out of each permutation, and taking the transitive closure of that strict total order. There are n! elements in CΩ.
We take the set of inconsistent graphs IΩ⊂PΩ to be all graphs that are not consistent, that is IΩ=PΩ∖CΩ.
Let WΩ be the set of weakly consistent graphs over Ω, which may be represented as the set of all directed graphs that are equivalent to some weak ordering. It can be constructed by taking all weak orderings on Ω, for each weak ordering ⪯ creating an edge from ω1 to ω2 if and only if ω1⪯ω2, and then taking the transitive closure of that graph. The weak orderings are counted by the ordered Bell numbers.
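As a quick sanity check, the constructions of PΩ and CΩ can be enumerated for small n. This is a sketch of our own (representing graphs as edge sets, and building each consistent graph directly from a permutation, without reflexive edges):

```python
from itertools import permutations, product
from math import factorial

n = 3
vertices = range(n)
possible_edges = [(a, b) for a in vertices for b in vertices]  # reflexive edges allowed

# P_Omega: one graph per adjacency matrix, 2^(n^2) graphs in total.
all_graphs = [frozenset(e for e, bit in zip(possible_edges, bits) if bit)
              for bits in product([0, 1], repeat=len(possible_edges))]
assert len(all_graphs) == 2 ** (n * n)  # 512 for n = 3

# C_Omega: one acyclic tournament per permutation of the vertices, n! in total.
consistent = {frozenset((perm[i], perm[j]) for i in range(n) for j in range(i + 1, n))
              for perm in permutations(vertices)}
assert len(consistent) == factorial(n)  # 6 for n = 3
assert all(c in all_graphs for c in consistent)  # C_Omega is a subset of P_Omega
```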
Violating the von Neumann-Morgenstern Axioms
In the deterministic case there are only two vNM axioms that can be violated: completeness and transitivity, since continuity and independence rely on the underlying objects of the preference relation being lotteries.
Directed graphs are well able to represent all violations of these vNM axioms.
Incompleteness.
Incompleteness is distinct from indifference: indifference between ω1 and ω2 exists if both ω1⪯ω2 and ω2⪯ω1; incompleteness (or incomparability) holds if neither ω2⪯ω1 nor ω1⪯ω2.
The presence of an incomplete preference in an agent is difficult to operationalize: Gustafsson 2022 treats incomparable options as interchangeable, but setups in which an agent takes a default choice or randomizes when presented with incomparable options are also possible (however, as Gustafsson notes, randomization offers an adversary the option to perform money-pumps in expectation).
In a graph-theoretic setting, incomparability between options ω1,ω2 is represented by the absence of any edge between ω1 and ω2 in the graph G representing the preference.
Intransitivity.
Intransitivity is quite easy to represent in a graph G: If there is an edge ω1→ω2∈E and an edge ω2→ω3∈E, but no edge ω1→ω3∈E, then one has represented an intransitive preference ω1≺ω2,ω2≺ω3,ω1⊀ω3.
Symmetry.
A symmetric (or indifferent) preference between ω1,ω2 (written as ω1∼ω2) can also easily be represented by a directed graph by having the edges ω1→ω2,ω2→ω1∈E.
Algorithms for Resolving Inconsistencies
Any method for resolving inconsistent graphs is a function f:PΩ→P(CΩ) that maps any inconsistent graph to a set of consistent graphs which might contain more than one element since the inconsistent graph might not fully determine its consistent counterpart.
Finding Consistent Graphs with the Smallest Graph-Edit Distance
One potential class of such functions comprises those that minimize a "distance" d:PΩ×CΩ→R between the (possibly inconsistent) graph and its consistent counterparts.
The function fd would then return
fd(G)=argmin_{C∈CΩ} d(C,G)
We propose a candidate for fd, which minimizes the edge-graph-edit distance between any G∈PΩ and the set of consistent versions C⊆CΩ of G.
Formally:
fEGED(G)=argmin C∈CΩEGED(C,G)
where EGED(X,Y) is the smallest number of edges that need to be added or removed from X to create Y. The addition or removal of vertices is not allowed, since the elements of Ω can be distinguished from one another.
This function is intuitively appealing: Let G∈PΩ be a (possibly inconsistent) preference over Ω, and let ω1,ω2∈Ω be two possible outcomes. The existence of an edge (ω1,ω2)∈EG represents that ω1 is preferred over ω2.
Then, given G, if one desired a consistent version of G, one would want to give up as few as possible of such rankings of two options. One must sometimes give up some of those rankings to achieve von Neumann-Morgenstern consistent preferences (for example to break cycles), but a high number of deletions or additions of rankings is undesirable.
Proposition 1. For two directed graphs on the same set of vertices, G1=(Ω,E1), G2=(Ω,E2), the edge-graph-edit distance equals the size of the symmetric difference of their edge sets, that is EGED(G1,G2)=|E1ΔE2|.
Proof.
Algorithm 1: A naive algorithm for computing EGEDmin
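A brute-force sketch of such a computation (our own illustration, not the paper's pseudocode; reflexive edges are ignored for simplicity) enumerates all acyclic tournaments and keeps those at minimal symmetric difference, per Proposition 1:

```python
from itertools import permutations

def acyclic_tournaments(vertices):
    """One acyclic tournament per permutation of the vertices."""
    vertices = list(vertices)
    for perm in permutations(vertices):
        yield frozenset((perm[i], perm[j])
                        for i in range(len(perm))
                        for j in range(i + 1, len(perm)))

def eged(edges1, edges2):
    """Edge-graph-edit distance as the size of the symmetric difference."""
    return len(edges1 ^ edges2)

def eged_min(vertices, edges):
    """All acyclic tournaments at minimal edge-graph-edit distance."""
    tournaments = list(acyclic_tournaments(vertices))
    best = min(eged(edges, t) for t in tournaments)
    return {t for t in tournaments if eged(edges, t) == best}

# The 3-cycle 0→1→2→0 can be repaired by reversing any one of its edges,
# so three tournaments are tied at distance 2 (one removal plus one addition).
cycle = frozenset({(0, 1), (1, 2), (2, 0)})
result = eged_min(range(3), cycle)
assert len(result) == 3
assert all(eged(cycle, t) == 2 for t in result)
```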
Establishing Consistency Stepwise
An alternative approach to resolving a graph G into a set C of consistent graphs is to establish the desired properties stepwise. Our proposed algorithm (which we call stepwise) executes the following steps:
stepwise takes a similar approach by computing all minimum feedback arc sets for G and then removing them to ensure the graph is acyclic (so that later establishing transitivity does not violate asymmetry). The result is a set of directed acyclic graphs A, one for each minimum feedback arc set removed from G. For this, one can use an algorithm for finding the minimum feedback arc set from Baharev 2021, called mfas in stepwise.
Algorithm 2: Computing stepwise
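The idea can be sketched as follows (a sketch of our own with hypothetical helper names, not the mfas implementation from Baharev 2021; reflexive edges are ignored, and an acyclic tournament containing a DAG corresponds exactly to one of its topological sorts):

```python
from itertools import combinations, permutations

def is_acyclic(vertices, edges):
    """Kahn's algorithm: repeatedly remove nodes with in-degree zero."""
    remaining, edges = set(vertices), set(edges)
    while remaining:
        sources = {v for v in remaining if not any(head == v for _, head in edges)}
        if not sources:
            return False  # every remaining node has an incoming edge: a cycle exists
        remaining -= sources
        edges = {(tail, head) for tail, head in edges if tail not in sources}
    return True

def minimum_feedback_arc_sets(vertices, edges):
    """Brute force: try removal sets of increasing size until one works."""
    edges = list(edges)
    for size in range(len(edges) + 1):
        hits = [set(rm) for rm in combinations(edges, size)
                if is_acyclic(vertices, set(edges) - set(rm))]
        if hits:
            return hits
    return [set(edges)]

def stepwise(vertices, edges):
    """Remove each minimum feedback arc set, then complete every resulting DAG."""
    vertices, edges = list(vertices), set(edges)
    result = set()
    for mfas in minimum_feedback_arc_sets(vertices, edges):
        dag = edges - mfas
        for perm in permutations(vertices):  # tournaments = topological sorts
            tournament = frozenset((perm[i], perm[j])
                                   for i in range(len(perm))
                                   for j in range(i + 1, len(perm)))
            if dag <= tournament:
                result.add(tournament)
    return result

# On the 3-cycle, stepwise yields the same three tournaments found by
# minimizing the edge-graph-edit distance directly (cf. Theorem 5).
cycle = frozenset({(0, 1), (1, 2), (2, 0)})
assert len(stepwise(range(3), cycle)) == 3
```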
We can now prove that stepwise has the same output as EGEDmin. First we prove that all outputs of stepwise have the same edge-graph-edit distance from G.
Lemma 1. For a given G=(Ω,EG), all graphs returned by stepwise have the same edge-graph-edit distance from G.
Proof. Let S=stepwise(G), and S=(Ω,ES)∈S. Since all S are transitive, complete and reflexive, all S have the same number of edges, namely the triangular number |ES|=|Ω|(|Ω|+1)/2. We also know that EGED(G,S)=|EGΔES|, and EGΔES=(EG∖ES)∪(ES∖EG) (the edges we remove from EG and the edges we add to EG). The edges removed from EG form a minimum feedback arc set, so these sets all have the same size m=|EG∖ES|. It now suffices to show that i=|ES∖EG|, the number of edges added, is constant. It holds that |EG|−m+i=|ES|, and then i=|ES|−|EG|+m, which must be constant. So EGED(S,G)=m+i is also constant for a given G and S∈S. ◻
We then show that the edges removed by EGEDmin always form a minimum feedback arc set.
Lemma 2. Given a directed graph G, let T=(Ω,ET)∈EGEDmin(G). Let E−T=E∖ET (the edges removed from G to achieve T) and E+T=ET∖E (the edges added to G to create T). Then E−T is a minimum feedback arc set of G.
Proof. E−T is a feedback arc set: Assume for contradiction that E−T was not a feedback arc set. Then G would need to contain a cycle of directed edges Ec=ω1→ω2→⋯→ωk−1→ωk→ω1 so that the cycle was still present after removing E−T, that is Ec⊆E∖E−T. We know that then ET=(E∖E−T)∪E+T, but adding edges can't remove a subset, so Ec⊆E∖E−T⇒Ec⊆(E∖E−T)∪E+T.
But then T can't be transitive, asymmetric and complete: If it was transitive and complete, then there would need to be an edge ω1→ω3 (created through ω1→ω2→ω3), an edge ω1→ω4 (created through ω1→ω3→ω4), and so on. Then ET would also contain the edge ω1→ωk−1, and thereby also the edge ωk→ωk−1 (through the transitivity of ωk→ω1→ωk−1). But since both ωk→ωk−1∈ET and ωk−1→ωk∈ET, it can't be asymmetric.
E−T is minimal: Assume E−T was a feedback arc set, but not minimal. Then there would need to be another feedback arc set E−′T so that |E−′T|<|E−T|. Then one can create T′=(Ω,E′T) from G by removing E−′T from E and then completing the resulting directed acyclic graph to a consistent graph.
We know that |ET|=|E′T|=|Ω|(|Ω|+1)/2, since both T and T′ are acyclic tournaments.
Then it is the case that EGED(G,T)>EGED(G,T′):
EGED(G,T)>EGED(G,T′)
⇔ |EΔET| > |EΔE′T|
⇔ |E−T⊎E+T| > |E−′T⊎E+′T|
⇔ |E−T|+|ET|−(|E|−|E−T|) > |E−′T|+|E′T|−(|E|−|E−′T|)
⇔ |E−T|−|E|+|E−T| > |E−′T|−|E|+|E−′T|
⇔ 2⋅|E−T| > 2⋅|E−′T|
So E−T must be minimal, since otherwise it is not a set of edges removed by EGEDmin. ◻
Using the fact that E−T is a minimum feedback arc set, and that all outputs of stepwise have the same edge-edit distance from the input, we can prove that all outputs of stepwise are contained in EGEDmin(G).
Lemma 3. ∀G∈P: stepwise(G)⊆EGEDmin(G).
Proof. Let S=(Ω,ES)∈stepwise(G) for any G, and let T=(Ω,ET)∈EGEDmin(G). Let E−S=E∖ES be the minimum feedback arc set we remove from G to create S, and E+S=ES∖E the edges we add to make S complete. We similarly define E−T=E∖ET and E+T=ET∖E.
We can now show that EGED(S,G)≤EGED(T,G): Assume that EGED(S,G)>EGED(T,G). By Lemma 2 E−T is a minimum feedback arc set, and so |E−T|=|E−S|. Furthermore, |ES|=|ET|, since they are both acyclic tournaments on Ω.
Then
EGED(G,S) = |EΔES|
= |E−S⊎E+S|
= |E−S| + |E+S|
= |E−S| + |ES| − (|E|−|E−S|)
= |E−T| + |ET| − (|E|−|E−T|)
= |E−T| + |E+T|
= |E−T⊎E+T|
= |EΔET|
= EGED(G,T)
So it can't be the case that EGED(S,G)>EGED(T,G).
We can also show that EGED(S,G)≥EGED(T,G): Assume that EGED(S,G)<EGED(T,G). Since both S,T∈CΩ, this contradicts the assumption that the output of EGEDmin has minimal distance. ◻
We now show that all outputs of EGEDmin are also outputs of stepwise.
Lemma 4. ∀G∈P: EGEDmin(G)⊆stepwise(G).
Proof. Assume there exists a G∈PΩ so that there exists a T=(Ω,ET)∈EGEDmin(G) so that T∉stepwise(G).
Then, by Lemma 2, E−T=E∖ET is a minimum feedback arc set. Therefore, removing E−T from E results in a directed acyclic graph GA which is an element of the intermediate set A of directed acyclic graphs in stepwise.
Let E+T=ET∖E. Assume E+T was not a set of edges added to GA in a topological sort.
Then let ω∈Ω be the node in T that has no incoming edges. ω must also have had no incoming edges in GA, since we only add edges to GA to achieve T, and therefore has in-degree 0 in GA, which means that ω must have been added first to some topological sort in T by topological_sorts.
One can now create T′ and G′A by removing ω and all edges from ω from T and GA. Let the node in T′ with no incoming edges be called ω′. Then in GA the node ω′ either had no incoming edges or one incoming edge from ω, since one can create T′ from GA by adding E+T and then (potentially) removing the edge ω→ω′. So in the graph G′A with ω and all its outgoing edges removed from GA, the node ω′ has in-degree zero, and is therefore also selected as the first element in some topological sort of G′A, to which ω is prepended after recursion. In the base case of a T⋆ with one element ω⋆, this element ω⋆ is the only element of G⋆A and also the only element of the topological sort of G⋆A.
Therefore, by induction, given an acyclic tournament T and a set of edges E+T=ET∖E, this set E+T must be the edges added by some topological sort of GA=(Ω,E∖E−T). ◻
This concludes the proof that both algorithms always have the same output.
Theorem 5. ∀G∈P:stepwise(G)=EGEDmin(G).
Proof. By Lemma 3 it holds that stepwise(G)⊆EGEDmin(G) and by Lemma 4 it holds that stepwise(G)⊇EGEDmin(G), so the sets must be equal. ◻
Applying HodgeRank
Another option to resolve inconsistent preferences over deterministic options into consistent preferences is to apply the HodgeRank algorithm by Jiang et al. to an unweighted graph G Jiang et al. 2009. HodgeRank is described in further detail in this section.
To apply HodgeRank to unweighted graphs one simply sets both weights of each edge to 1 (for e∈E it is then the case that w(e)=1, l(e)=1).
Then, for a directed graph G, we can define an algorithm HodgeResolve that applies HodgeRank to G and then converts the potential function p on Ω into an acyclic tournament: ω1→ω2 if and only if p(ω1)>p(ω2).
One issue with HodgeRank is that the potentials of two options are sometimes equal to each other, which violates the criterion of asymmetry. There are two ways of dealing with this symmetry: We decide to take the first option, to preserve the polynomial runtime of HodgeRank.
Criteria
Given the algorithms outlined above, one might want to compare them according to different criteria, similar to the method of evaluating voting methods in social choice theory by criteria such as the Condorcet criterion or manipulability Austen-Smith & Banks 2000, ch. 2. For this purpose, we examine the algorithms with regard to computational complexity, size of output, and two additional criteria.
Surjectivity and Identity
A fairly intuitive criterion is that for a given method of resolution f, and for every C∈CΩ, there should be a G∈PΩ so that C∈f(G) (Surjectivity). This condition is implied by the stronger condition of f being the identity function for already consistent graphs: ∀C∈CΩ:f(C)={C} (Identity).
Minimizing Graph-Edit Distance
EGEDmin fulfills both conditions: C trivially has the smallest graph-edit distance to itself (namely zero), and is unique in that regard.
Applying HodgeRank
Jiang et al. 2011 state that for complete graphs, computing the potential function of a graph G via HodgeRank on the nodes is equivalent to minimizing the squared distance between the edge-weights of G and the edge-weights induced by the potential function. If G is already consistent, the resulting potential function simply re-creates G, since their distance is 0. So HodgeResolve maps every consistent graph to itself, and therefore fulfills Identity and thus also Surjectivity.
Polynomial Time Complexity
Ideally, a method for resolving inconsistent graphs into consistent graphs would be efficiently computable.
Minimizing Graph-Edit Distance
However, the method that attempts to find consistent graphs by minimizing edge-graph-edit distance fails this criterion.
Finding all acyclic tournaments with the smallest edit-distance to a given directed graph is NP-hard. This can be shown by a reduction from Slater's problem. Slater's problem is the problem of, given any tournament T, finding a linear order TL (an acyclic tournament, also called a Slater order) that has the smallest distance to T, where the distance between two tournaments T1,T2 is the number of edges that have to be flipped in T1 to create T2. Slater's problem (and a number of related problems, such as finding all acyclic tournaments with the smallest distance to a given tournament) is known to be NP-hard Hudry 2010.
Theorem 6. Finding the set of acyclic tournaments with smallest edge-graph-edit distance to a given graph G is NP-hard.
Proof. Reduction from finding all Slater orders with the smallest distance to a given tournament T.
Assume we know an algorithm A to compute fEGED(G) efficiently, that is, to compute the set of all acyclic tournaments with minimal graph-edit distance to a given directed graph G in polynomial time.
Then one could solve Slater's problem in polynomial time: For any given tournament T, A would compute the set CT of acyclic tournaments which have the same minimal graph-edit distance 2k to T. The distance is divisible by two because, when editing a tournament T into a tournament T′, edges can only be flipped, and each flip comprises two edge operations (removing an edge and then adding a new one). Then that set would also be the set of Slater orders of T (with distance k), a solution to (P3) from Hudry 2010, which is known to be NP-hard. ◻
Similarly, finding only one element of fEGED(G) is also NP-hard, by reduction from (P2) ("PROBLEM P2. Given a tournament T, compute a Slater order O∗(T) of T") Hudry 2010.
Applying HodgeRank
Jiang et al. 2011 state that computing the potential function of a graph G is equivalent to solving an n×n least-squares problem (n=|Ω|), which requires O(n³) time. HodgeResolve executes HodgeRank and then iterates through all possible edges of G, which takes at most O(n²) time, so the time complexity of HodgeResolve is also O(n³).
Uniqueness
It would be desirable if one could guarantee that the function f that resolves inconsistent graphs returns a single consistent graph for each inconsistent graph, that is ∀G∈PΩ:|f(G)|=1.
Minimizing Graph-Edit Distance
EGEDmin does not fulfill this criterion.
Theorem 7. For a graph Ge with no edges and n vertices Ω, every acyclic tournament with the same set of vertices has the same graph-edit distance to Ge. Therefore, |EGEDmin(Ge)|=n!, which is not unique.
Proof. Let T be any acyclic tournament with vertices Ω. Then T has n(n−1)/2 edges. Since Ge has no edges, one can edit Ge into T simply by adding all edges of T to Ge. This is sufficient and necessary for turning Ge into T. Since this holds for any tournament T, the graph-edit distance from Ge to any acyclic tournament is the same, namely n(n−1)/2. So |EGEDmin(Ge)|=|CΩ|=n!. ◻
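This worst case can be checked directly for n=3 with a brute-force sketch of our own (reflexive edges ignored): all 3!=6 acyclic tournaments are tied at distance 3.

```python
from itertools import permutations

def acyclic_tournaments(vertices):
    """One acyclic tournament per permutation of the vertices."""
    vertices = list(vertices)
    for perm in permutations(vertices):
        yield frozenset((perm[i], perm[j])
                        for i in range(len(perm))
                        for j in range(i + 1, len(perm)))

def eged_min(vertices, edges):
    """All acyclic tournaments at minimal symmetric-difference distance."""
    ts = list(acyclic_tournaments(vertices))
    best = min(len(edges ^ t) for t in ts)
    return {t for t in ts if len(edges ^ t) == best}

empty = frozenset()
result = eged_min(range(3), empty)
assert len(result) == 6                  # n! = 3! tournaments, all tied
assert all(len(t) == 3 for t in result)  # each at distance 3 = 3*(3-1)/2
```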
Applying HodgeRank
If one allows the output of HodgeResolve to be a weak ordering, then HodgeResolve has a unique output, since assigning each vertex a real-valued potential p:Ω→R and then ordering vertices by that potential creates a weak ordering W.
However, if one demands that the output of HodgeResolve be a total order, then the output depends on the method of achieving that total order. If one generates the total orders by generating all acyclic tournaments with vertices Ω that are subgraphs of W, the output is no longer unique: In the worst case G=(Ω,∅), which results in HodgeRank assigning a potential of 0 to every node, and HodgeResolve putting every vertex in the same equivalence class of the weak ordering. As a graph this is the complete directed graph on Ω, which contains all acyclic tournaments on Ω as subgraphs. Then there are |Ω|! acyclic tournaments generated from this weak ordering, since all acyclic tournaments are equally compatible with it.
Further Considerations
Violating Uniqueness appears to have consequences for decision-making: If we want to use the output of f for prioritising which actions to take to achieve high-ranking options, having more than one result leaves it unclear which options to prioritize (since there will be two ω1,ω2∈Ω that are ranked differently by different elements of the set of results).
However, results from two different fields apply to this case.
Resolution to Polynomially Many Preferences
If uniqueness can't be fulfilled (perhaps because the given graph G is under-determined), a weaker criterion is that the number of consistent graphs corresponding to G is polynomial in the size of Ω (∀G∈PΩ:|f(G)|≤p(|Ω|), where p(n) is some polynomial in n).
Minimizing Graph-Edit Distance
However, as proven in Theorem 7 above, this criterion is not fulfilled for EGEDmin; in the worst case the number of results is factorial in the size of Ω.
We decided to also investigate the number of results of EGEDmin for small graphs. For this purpose, we generated all directed graphs with five nodes or fewer and computed EGEDmin(G).
Definition 4. Let G be any directed graph. Then the confusion of G is the number of acyclic tournaments with the smallest edge-graph-edit distance to G, that is, the confusion c:P→N+ of G is c(G)=|EGEDmin(G)|. The set of graphs with n vertices and confusion c shall be denoted Gn,c.
The term "confusion" was chosen to emphasize that graphs with a lower such number have fewer consistent versions. An acyclic tournament has minimal confusion (namely 1: the output of EGEDmin is simply itself). Ge from Theorem 7 has maximal confusion, namely n!.
A natural question to ask is whether, with bigger graphs, the average confusion converges to a certain value, diverges, or shows no clear behavior. We generated all directed graphs with up to 5 vertices and computed their confusion.
|Gn,1| is the number of all graphs with n vertices and confusion 1, and |Gn,1|/n! is the same number but up to isomorphism of the graphs. |Gn,n!| is the number of graphs with n vertices and maximal confusion.
For some given set of directed graphs Pn, not all numbers between 1 and n! can be confusions. There are, for example, no graphs of size 3 with confusion 4 (or 5).
Interestingly, neither |Gn,1| nor |Gn,1|/n! is a known integer sequence: a search on the OEIS and via SuperSeeker Sloane 2003 yields no matching results.
Conjecture 1. The average confusion of all directed graphs with size n diverges to infinity:
limn→∞ (1/2^(n²)) ∑_{i=1}^{n!} |Gn,i|⋅i = ∞
We attempted to prove this conjecture, but were unable to do so.
Proposition 2. |Gn,1| is always divisible by 2^n.
Proof. This is an artifact of including graphs with reflexive edges in the set of graphs tested. Let G be a graph with confusion k and no reflexive edges.
Now let G∘ be the set of all graphs that are variants of G with reflexive edges added. This set includes G itself, G with all reflexive edges, and every version of G in between. Every element of G∘ also has confusion k: all reflexive edges must be removed to create a consistent preference, yielding G, and there are k distinct acyclic tournaments that have the smallest edge-graph-edit distance to G.
Then it is the case that |G∘|=2^n: for each node, the presence of a reflexive edge on that node can be described by one bit of information, and since there are n nodes, the size of G∘ is the number of n-bit bitstrings, namely 2^n. ◻
Dividing |Gn,1| by both n! and 2^n yields the sequence 1,1,1,3,28,861, which also doesn't occur in the OEIS, and also can't be found using SuperSeeker.
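Proposition 2 and the first entry of this sequence can be checked by enumerating all 16 directed graphs on two vertices, reflexive edges included. A brute-force sketch (helper names ours):

```python
from itertools import combinations, permutations

def acyclic_tournaments(vertices):
    for order in permutations(vertices):
        yield frozenset(combinations(order, 2))

def confusion(vertices, edges):
    edges = frozenset(edges)
    distances = [len(edges ^ t) for t in acyclic_tournaments(vertices)]
    return distances.count(min(distances))

vertices = (0, 1)
# All 16 directed graphs on two vertices, reflexive edges included.
possible = [(u, v) for u in vertices for v in vertices]
graphs = [frozenset(sub) for k in range(len(possible) + 1)
          for sub in combinations(possible, k)]
g_2_1 = sum(1 for g in graphs if confusion(vertices, g) == 1)
print(g_2_1)  # 8: divisible by 2^2 = 4, and 8/(2!·2^2) = 1, the first entry
```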
Applying HodgeRank

As seen in the case of Uniqueness, this depends on whether one demands the output of HodgeResolve to be a total order: If a weak ordering is allowed, the output of HodgeResolve is always a single graph, so the output size is polynomial, but if we demand a total order as an output, the output size can be factorial in the number of nodes.

Preservation of Consistent Subgraphs
Definition 5. For a given G=(Ω,EP), with G∈PΩ, a subgraph SG=(Ξ,E) of G (with Ξ⊆Ω, and the set of edges E of SG being a subset of EP) is an inclusion-maximal consistent subgraph of G if and only if SG is consistent (an acyclic tournament on Ξ) and no other consistent subgraph of G contains SG as a proper subgraph.
Definition 6. Let SG be the set of all inclusion-maximal consistent subgraphs of G and let f:P→P(C) be a function that turns any G into a set CG=f(G) of consistent graphs. Then f fulfills Preservation of Consistent Subgraphs if and only if every element of SG is a subgraph of at least one CG, that is
∀S∈SG:∃C∈CG:VS⊆VC∧ES⊆EC
This criterion is quite strong, as we will show. Its intuitive appeal can be explained as follows: Assume one has overall inconsistent preferences, but there is some subset of objects one has consistent preferences over, e.g. an agent has consistent preferences over all fruit and consistent preferences over dairy products, but inconsistent preferences over food in general. Then a method for resolving those inconsistent preferences into consistent ones should "preserve" each such consistent sub-preference in at least one of its outputs — after becoming consistent, the agent can still have the same preferences over fruit and dairy products as before.
Furthermore, one can show that there are graphs with an exponential number of inclusion-maximal consistent subgraphs in the number of nodes.
Lemma 8. Let G∈Pn be an arbitrary directed graph with n nodes, and let SG be the set of inclusion-maximal consistent subgraphs of G. Then there exists no polynomial p so that ∀G∈Pn:|SG|≤p(n).
Proof. Moon & Moser 1965 describe how to construct an undirected graph Gn=(VG,EG) with n vertices and 3^(n/3) inclusion-maximal cliques. One can then construct a directed graph Pn=(VP,EP) with 3^(n/3)≈1.4422^n inclusion-maximal consistent subgraphs from Gn, which grows faster than any polynomial. First, Pn receives the same vertices as Gn. Then, every v∈V is assigned a unique number by an injective function j:V→N, and for each {u,v}∈EG, the set of edges EP contains (u,v) if and only if j(u)>j(v), and (v,u) if and only if j(v)>j(u). Now, if a subgraph SG of Gn with vertices VS is a maximal clique, then the subgraph SP of Pn with vertices VS is an inclusion-maximal consistent subgraph of Pn:
SP is complete, because for every {u,v} in SG, either (u,v) or (v,u) exists in SP.
SP is transitive. For any three vertices {u,v,w} in SG, SG contains the edges {{u,v},{v,w},{u,w}} (since it is a clique). Then, without loss of generality, assume that j(u)>j(v)>j(w). Then (u,w)∈EP. Therefore SP contains the edges {(u,v),(v,w),(u,w)}.
SP is asymmetric, because for any edge {u,v} in SG it is the case that j(u)>j(v) and j(v)>j(u) can't be true at the same time (since j assigns each vertex a unique natural number). So SP can only contain either (u,v) or (v,u).
SP is inclusion-maximal. If SP were not inclusion-maximal, there'd exist a vertex u outside SP so that every vertex v of SP has an edge with u. But since the procedure for constructing Pn above only orients the edges of Gn without adding any, that would mean that SG was not a maximal clique.
◻
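The construction in this proof can be checked on a small instance. The sketch below (helper names ours) uses K_{3,3,3}, the 9-vertex member of the Moon–Moser family with 3^(9/3)=27 maximal cliques, orients each undirected edge from the higher-numbered to the lower-numbered endpoint (the vertex label playing the role of j), and confirms that each maximal clique induces a transitive tournament:

```python
from itertools import combinations, product

# K_{3,3,3}: the Moon-Moser graph on 9 vertices, parts {0,1,2}, {3,4,5}, {6,7,8}.
parts = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
part_of = {v: i for i, p in enumerate(parts) for v in p}
undirected = {frozenset((u, v)) for u, v in combinations(range(9), 2)
              if part_of[u] != part_of[v]}

# Orient each edge from the higher to the lower label, using the label as j(v).
directed = {(max(e), min(e)) for e in undirected}

# The maximal cliques of K_{3,3,3} pick one vertex per part: 3^(9/3) = 27 of them.
cliques = [set(c) for c in product(*parts)]
print(len(cliques))  # 27

def is_acyclic_tournament(nodes, edges):
    # Exactly one directed edge per pair, and transitivity.
    sub = {(u, v) for (u, v) in edges if u in nodes and v in nodes}
    complete = all(((u, v) in sub) != ((v, u) in sub)
                   for u, v in combinations(sorted(nodes), 2))
    transitive = all((u, w) in sub
                     for (u, v) in sub for (v2, w) in sub
                     if v == v2 and u != w)
    return complete and transitive

# Each oriented maximal clique is a transitive tournament, i.e. a consistent
# subgraph (the proof above shows these are in fact inclusion-maximal).
print(all(is_acyclic_tournament(c, directed) for c in cliques))  # True
```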
Minimizing Graph-Edit Distance

EGEDmin violates this criterion, which can be easily demonstrated:

Example 1.

[Figure: the counterexample graph Gc and its resolved versions. Gc is resolved into two acyclic tournaments, neither of which contains the edge d→c.]

The graph Gc contains a subgraph Scd=({c,d},{(c,d)}) that is also an inclusion-maximal acyclic tournament in Gc. The two acyclic tournaments with the lowest graph-edit distance to Gc (namely 3: reversing the edge d→c (2 operations) and adding an edge between a and b) are shown in the resolved graph. Note that neither of them contains Scd as a subgraph.
This counterexample can be generalized so that inclusion-maximal consistent subgraphs with an arbitrary number of nodes n get reversed: Each edge ω1→ω2 of Gc gets replaced by an acyclic tournament Ti=(Ξi,Ei) with n−2 vertices, so that there is an edge from ω1 to every ξi∈Ξi and an edge from every ξi∈Ξi to ω2. The graph on the left has confusion 40, and the subgraph emphasized in red is preserved in none of the outputs of EGEDmin.

We also investigated the number of inclusion-maximal consistent subgraphs preserved by EGEDmin. We again did this by analyzing the outputs of EGEDmin for all graphs with five nodes or fewer, and some graphs with six or seven nodes.

Definition 7. Let IMCS:Pn→P1..n be a function that returns the inclusion-maximal consistent subgraphs of a given graph.
Given a directed graph G, let S be the set of inclusion-maximal consistent subgraphs of G. One can now ask: For a given inclusion-maximal consistent subgraph, how often did that subgraph occur in the set of outputs EGEDmin(G)?
Definition 8. Let RSP(S,G) (with S∈S) be the ratio of subgraph preservation:
RSPEGEDmin(S,G) = |{R∈EGEDmin(G) | S is a subgraph of R}| / |EGEDmin(G)|
(No relation to responsible scaling policies.)
As we saw above, there are graphs with inclusion-maximal consistent subgraphs S so that RSP(S,G)=0.
One can then use RSP to define a metric that tells us, for a given graph, how often inclusion-maximal consistent subgraphs were preserved on average.
Definition 9. Let AMSPEGEDmin(G) (average maximal subgraph preservation) be the average, over every inclusion-maximal consistent subgraph S, of the ratio with which S appears in the output of EGEDmin:

AMSPEGEDmin(G) = (1/|IMCS(G)|) ∑_{S∈IMCS(G)} RSPEGEDmin(S,G)
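For small graphs, RSP and AMSP can be computed by brute force. A self-contained sketch (helper names ours; subgraphs are represented and compared by their edge sets, and for simplicity only subgraphs with at least two vertices are considered):

```python
from itertools import combinations, permutations

def acyclic_tournaments(vertices):
    for order in permutations(vertices):
        yield frozenset(combinations(order, 2))

def eged_min(vertices, edges):
    # All acyclic tournaments at minimal edge-graph-edit distance from G.
    edges = frozenset(edges)
    ts = list(acyclic_tournaments(vertices))
    dists = [len(edges ^ t) for t in ts]
    best = min(dists)
    return [t for t, d in zip(ts, dists) if d == best]

def imcs(vertices, edges):
    # Inclusion-maximal consistent subgraphs with >= 2 vertices (edge sets).
    edges = frozenset(edges)
    subs = []
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            for t in acyclic_tournaments(subset):
                if t <= edges:
                    subs.append(t)
    return [s for s in subs if not any(s < t for t in subs)]

def rsp(s, vertices, edges):
    results = eged_min(vertices, edges)
    return sum(1 for r in results if s <= r) / len(results)

def amsp(vertices, edges):
    maximal = imcs(vertices, edges)
    return sum(rsp(s, vertices, edges) for s in maximal) / len(maximal)

# For the 3-cycle, each single edge is an inclusion-maximal consistent
# subgraph, and each is preserved in 2 of the 3 outputs of EGEDmin.
print(amsp("abc", {("a", "b"), ("b", "c"), ("c", "a")}))  # 0.666...
```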
Both RSPEGEDmin and AMSPEGEDmin can be adapted to different methods of resolution, simply by swapping out the instances of EGEDmin for something else (e.g. HodgeRank). By default, I will use RSP and AMSP for RSPEGEDmin and AMSPEGEDmin.

A higher number for AMSP is better: It means that more inclusion-maximal consistent subgraphs get preserved more often by the method for resolving inconsistent preferences.
One can see that the average number of inclusion-maximal consistent subgraphs increases, albeit initially slowly. The ratio with which maximal consistent subgraphs are preserved (Avg AMSP(G)) starts dropping, though the shrinking behavior isn't clear from the limited amount of data. The number of graphs in which all inclusion-maximal consistent subgraphs are preserved by EGEDmin shrinks even more quickly, indicating that preserving all consistent subgraphs is a property that is difficult to fulfill. Only for small graphs (up to 3 vertices) is it guaranteed that at least one inclusion-maximal consistent subgraph occurs in the output of EGEDmin.

The data points observed above suggest some conjectures:
Conjecture 2. In the limit of graph size, on average EGEDmin preserves almost none of the inclusion-maximal consistent subgraphs:

limn→∞ (1/|Pn|) ∑_{G∈Pn} AMSP(G) = 0

Conjecture 3. For graphs with more than 7 nodes it remains the case that there are graphs for which the smallest number of inclusion-maximal consistent subgraphs preserved by EGEDmin is zero:

limn→∞ min_{G∈Pn} AMSP(G) = 0

Conjecture 4. In the limit of the number of nodes in a graph, for almost no graphs does EGEDmin preserve all inclusion-maximal consistent subgraphs:

limn→∞ (1/|Pn|) |{G∈Pn | AMSP(G)=1}| = 0
Applying HodgeRank

If the output of HodgeResolve is allowed to be a weak ordering, then the original definition of Preservation of Consistent Subgraphs does not apply, as it presumes a mapping f from P to C. However, the definition can easily be transferred by defining f as a function from directed graphs to weakly consistent graphs, that is f:PΩ→WΩ. The definition of Preservation of Consistent Subgraphs stays otherwise unchanged[5].

HodgeResolve does not fulfill Preservation of Consistent Subgraphs. The following figure shows two graphs (both on the left in their respective subfigures). For the graph in the left subfigure no inclusion-maximal consistent subgraphs are preserved; for the right one all but one inclusion-maximal consistent subgraphs are preserved.

[Left subfigure: 1→2 is the only consistent subgraph, but it gets reversed. Right subfigure: Each edge is an inclusion-maximal consistent subgraph, and only the edge 3→4 gets reversed; 1 and 2 in the result have the same potential.]

In the first image, a graph with one inclusion-maximal consistent subgraph and its resolution through HodgeResolve, and in the second image a graph with several inclusion-maximal consistent subgraphs and its resolution through HodgeResolve. The labels at the edges are the gradients that HodgeRank has computed.

In the following table, AMSP refers to AMSPHodgeResolve, and IMCS refers to IMCSHodgeResolve.
With this data, the next plot shows how well EGEDmin and HodgeResolve perform at preserving inclusion-maximal consistent subgraphs.

[Figure: comparing EGEDmin and HodgeResolve on various metrics of preserving inclusion-maximal consistent subgraphs.]

One can see that on average, EGEDmin preserves inclusion-maximal consistent subgraphs more often, and may also retain all inclusion-maximal consistent subgraphs more often (although the low sample sizes for graphs with six and seven nodes make this difficult to conclude without doubt).

Preservation of Completely Dominating and Dominated Sets
Inclusion-maximal consistent subgraphs are a way of formalizing what it means for a preference to be locally consistent: there is some subset of Ω so that the preferences are not "confused" about this subset. One can also try to find a corresponding condition that would make a statement about global consistency. Voting theory offers some inspiration here: the minimal undominated set (also Condorcet set) Miller 1977 is defined for every tournament T=(VT,ET) as a set of vertices V∗⊆VT so that (1) there is no edge from VT∖V∗ to V∗ and (2) there is no proper subset of V∗ that meets (1).
One can create a related (but weaker) definition for directed graphs:
For a given G, let Σ1,Σ2 be non-empty sets of vertices of G such that Σ1⊎Σ2=Ω. Then Σ1 is a completely dominating set and Σ2 is a completely dominated set if and only if ∀σ1∈Σ1,σ2∈Σ2:(σ1,σ2)∈E∧(σ2,σ1)∉E.
This means that all elements in a completely dominating set are strictly preferred to all elements in a completely dominated set—there is a subset of options that are clearly better than all other options.
A change from the Condorcet set is that we don't demand the completely dominating set to be minimal (which, without the non-emptiness requirement, would always make the empty set a completely dominating set). Additionally, the completely dominating set is not unique: In an acyclic tournament, for every 1≤i≤|Ω|−1 the i greatest elements form a completely dominating set.
A completely dominating set then represents a global consistency in the preference: within Σ1 and Σ2 we are unsure about our preference, but we know that any element of Σ1 is better than any element of Σ2.
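Whether a partition of the vertices into (Σ1, Σ2) is a completely dominating/dominated pair can be checked directly from the definition; a minimal sketch (function name ours):

```python
def completely_dominates(sigma1, sigma2, edges):
    # Sigma1 completely dominates sigma2 iff every element of sigma1 has an
    # edge to every element of sigma2, and no edge points the other way.
    return all((a, b) in edges and (b, a) not in edges
               for a in sigma1 for b in sigma2)

# {a, b} completely dominates {c, d} even though the preference between
# a and b (a two-cycle) is inconsistent.
edges = {("a", "b"), ("b", "a"),
         ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")}
print(completely_dominates({"a", "b"}, {"c", "d"}, edges))  # True
print(completely_dominates({"a"}, {"b", "c", "d"}, edges))  # False: b -> a exists
```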
Definition 10. A function f:P→P(C) fulfills Preservation of Complete Domination if and only if for any directed graph G with a completely dominating set Σ1 and a completely dominated set Σ2 it holds that ∀C∈f(G) the set of nodes Σ1 is a completely dominating set of Σ2 in C.
Proposition 3. Let f be a function that fulfills Preservation of Complete Domination. If for a graph G there are n sets of vertices Σ1,…,Σn so that ⨄_{i=1}^{n} Σi = Ω and

∀c∈{1,…,n−1}: ⋃_{i=1}^{c} Σi completely dominates ⋃_{j=c+1}^{n} Σj,

then for any C∈f(G) with C=(Ω,EC) it holds that ∀1≤j<k≤n: ∀σj∈Σj, σk∈Σk: (σj,σk)∈EC ∧ (σk,σj)∉EC (or, less formally: in the output of the resolution function f, every element of an earlier block Σj is strictly preferred over every element of a later block Σk).

Proof. Fix 1≤j<k≤n. Let Σl=⨄_{i=1}^{k−1} Σi and Σr=⨄_{i=k}^{n} Σi. Then Σl completely dominates Σr in G, and by assumption also in C∈f(G). Since Σj⊆Σl and Σk⊆Σr, it holds that ∀σj∈Σj, σk∈Σk: σj→σk∈EC ∧ σk→σj∉EC. So Σj completely dominates Σk in C. ◻
Remark 1. Sets Σ1,…,Σn such that there is a relationship of complete domination between any two of them are quite similar to graph quotients, but the condition is somewhat stricter (demanding that each σi∈Σi be preferred to each σj∈Σj of a later block).
Remark 2. Preservation of complete domination implies some other criteria: If there is a consistent subgraph which is a completely dominating set, then it will comprise the "greatest" subgraph in the resolved preference, with the greatest element in G also being the greatest element in f(G). The same holds for a completely dominated consistent subgraph, which stays at the bottom.
Minimizing Graph-Edit Distance

Theorem 9. EGEDmin fulfills Preservation of Complete Domination.

Proof. Let C=(Ω,EC), with C∈EGEDmin(G), be a consistent graph for a directed graph G, where G has a completely dominating set Σ1 and a completely dominated set Σ2. Assume C does not have the completely dominating set Σ1, and let n=EGED(G,C). Then there must be a "highest" or "largest" σ2∈Σ2 in C (one for which there is no other σ′2∈Σ2 so that σ′2→σ2 is an edge in C). There must also be a "highest" or "largest" σ∗1∈Σ1 so that σ2→σ∗1 is an edge in C.
Let there be m≥0 elements of Σ1 "between" σ2 and σ∗1, that is, for Σ∗2={σ∗2 | σ2→σ∗2∈EC ∧ σ∗2→σ∗1∈EC} it holds that |Σ∗2|=m.
One can now create a C′ from C so that EGED(G,C′)=n−2(m+1) by moving σ∗1 into the position directly above σ2 by reversing the edges σ2→σ∗1 and σ∗2→σ∗1 for all σ∗2∈Σ∗2. The modified C′ now contains some edges from G that need to be reversed to create C: σ∗1→σ2 and {σ∗1→σ∗2|σ∗2∈Σ∗2} are already edges in G, and because edge reversals have weight 2 (deleting and then adding one edge), this saves 2(m+1) edge operations.
Furthermore, all other edge operations needed to minimally reach C from G can be held constant when creating C′, so that the edge-graph-edit distance is not otherwise changed. C′ is then an acyclic tournament with a smaller edge-graph-edit distance to G than C.
If C′ does not have the same completely dominating set Σ1 that G has, one can create a new graph C′′ by finding a new "highest" σ2 and corresponding σ∗1 and switching them. This C′′ again has shorter edge-graph-edit distance.
This process can be repeated as long as Σ1 is not a completely dominating set in the consistent graph, monotonically decreasing the edge-graph-edit distance, until no further such modifications can be found.
The final consistent graph resulting from this process contains Σ1 as a completely dominating set: Every σ1∈Σ1 has a one-directional edge to every σ2∈Σ2. ◻
Applying HodgeRank

Conjecture 5. HodgeResolve(G) fulfills Preservation of Complete Domination for every G∈P.

This conjecture holds, by computational experiment, for all directed graphs with 5 nodes or fewer, and for random samples of graphs with up to 13 nodes (2^16 graphs generated for each number of nodes, using the Erdős-Rényi model with edge-creation probability 1/2).
Summary

We can now summarize how well the two algorithms fulfill the different criteria:

[Table: criteria fulfilled by EGEDmin and HodgeResolve.]
Impossibilities
Some of the criteria listed in Section 3.3 are incompatible with each other.
Resolution to Polynomially Many Preferences and Preservation of Consistent Subgraphs are Incompatible
It is not possible to have an algorithm that retains every maximal consistent subgraph at least once in the set of outputs and has only polynomially many outputs.
Theorem 10. Let f:P→P(C) be a function for resolving inconsistent graphs that fulfills Preservation of Consistent Subgraphs for all graphs in P. Then there exists no polynomial p so that for all n and all directed graphs G∈Pn of size n it holds that |f(G)|≤p(n).
We show this with a graph that is a counterexample, i.e. for which such a polynomial can not exist.
Definition 11. Let V denote a directed graph with three vertices α,β,γ and three edges α→β, β→γ, γ→α. Let En denote a graph constructed out of n copies of V, "stacked" on top of each other. More formally, let the vertices of En be the set {α1,…,αn,β1,…,βn,γ1,…,γn} so that αi,βi,γi are the vertices of the copy Vi, and let the edges of En be the edges of each Vi together with the edges {(ui,vj) | i>j ∧ u,v∈{α,β,γ}}.
We first prove that each inclusion-maximal consistent subgraph of En only contains one edge from each Vi.
Lemma 11. Every inclusion-maximal consistent subgraph V of En contains exactly one edge from each Vi∈{V1,…,Vn}.
Proof. Assume S is a consistent subgraph of En, and there exists (without loss of generality) a Vi so that S∩Vi has the two edges αi→βi and βi→γi. Since S is stipulated to be consistent, due to the transitivity requirement it must also contain the edge αi→γi. But then S would no longer be a subgraph of En, since αi→γi is not an edge of En. If S∩Vi has three edges, S must be inconsistent, since transitivity or asymmetry are violated. Assume now there is a Vi so that S∩Vi has no edges. Then one can add one edge from Vi to S while retaining consistency: If one adds (without loss of generality) αi→βi (together with the vertices αi,βi and their cross-level edges to the other vertices of S), this preserves consistency, since αi and βi stand in the same relation to every vertex outside level i — dominating all lower levels and being dominated by all higher ones — so no transitivity or asymmetry violation can arise. But then S was not inclusion-maximal. ◻
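Definition 11 and Lemma 11 can be checked computationally for small n. A brute-force sketch (helper names ours; subgraphs are represented by their edge sets, and only subgraphs with at least two vertices are considered), which for n=2 finds 3²=9 inclusion-maximal consistent subgraphs, each containing exactly one edge per level:

```python
from itertools import combinations, permutations

def build_E(n):
    # Level i is a 3-cycle on (i,'a'), (i,'b'), (i,'c'); every vertex of a
    # higher level has an edge to every vertex of a lower level.
    vertices = [(i, x) for i in range(n) for x in "abc"]
    edges = {((i, "a"), (i, "b")) for i in range(n)}
    edges |= {((i, "b"), (i, "c")) for i in range(n)}
    edges |= {((i, "c"), (i, "a")) for i in range(n)}
    edges |= {(u, v) for u in vertices for v in vertices if u[0] > v[0]}
    return vertices, edges

def imcs(vertices, edges):
    # Brute-force inclusion-maximal consistent subgraphs (>= 2 vertices),
    # compared by their edge sets.
    edges = frozenset(edges)
    subs = []
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            for order in permutations(subset):
                t = frozenset(combinations(order, 2))  # acyclic tournament
                if t <= edges:
                    subs.append(t)
    return [s for s in subs if not any(s < t for t in subs)]

n = 2
vertices, edges = build_E(n)
maximal = imcs(vertices, edges)
print(len(maximal))  # 9 = 3^2
# Each inclusion-maximal consistent subgraph has exactly one edge per level V_i.
print(all(sum(1 for (u, v) in s if u[0] == v[0] == i) == 1
          for s in maximal for i in range(n)))  # True
```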
We then show that no consistent graph on the vertices of En can contain 2^n+1 inclusion-maximal consistent subgraphs of En.

Lemma 12. Let S be a set of inclusion-maximal consistent subgraphs of En with |S|=2^n+1. Then there exists no consistent graph C on the vertices of En so that every S∈S is a subgraph of C.
Proof. We showed that each S∈S contains exactly one edge from each Vi. If two S1,S2∈S share the same edge on a given Vi (i.e. S1∩Vi=S2∩Vi), S1 and S2 can be subgraphs of the same consistent graph C. If two S1,S2∈S don't share the same edge on a given Vi (that is, S1∩Vi≠S2∩Vi), they can nevertheless still be subgraphs of the same consistent C:
If (without loss of generality) (S1∩Vi)=αi→βi and (S2∩Vi)=βi→γi, C can contain those edges as well as αi→γi.
If, though, there are three S1,S2,S3∈S that each don't share an edge on a given Vi, they can't be subgraphs of any consistent C: Such a C would need to contain {αi→βi,βi→γi,γi→αi}, but this would violate either asymmetry (if one added αi→γi as well) or transitivity (through the absence of αi→γi).
Therefore, for each Vi, only two of the three edges of Vi can occur across the elements of S. An S∈S can then be uniquely identified by which edge from each Vi it contains, since there are at most two admissible edges for each Vi, there are n such "levels" Vi, and no two edges from different Vi,Vj are mutually exclusive.

Therefore, |S|≤2^n if all elements of S are to be subgraphs of the same acyclic tournament: introducing an additional distinct S_{2^n+1} to S must add a third edge on at least one Vi, thus 2^n is the maximal size of S. ◻
We can now show that the number of consistent graphs needed to contain all inclusion-maximal consistent subgraphs of En grows exponentially in n (albeit with a small exponent).

Lemma 13. Every set C of consistent graphs on the vertices of En such that each inclusion-maximal consistent subgraph of En is a subgraph of some element of C has size at least (3/2)^n.

Proof. By Lemma 11, En has 3^n inclusion-maximal consistent subgraphs (one choice of edge per level Vi). By Lemma 12, any single consistent graph can contain at most 2^n of them as subgraphs. Thus covering all 3^n inclusion-maximal consistent subgraphs requires at least 3^n/2^n=1.5^n consistent graphs, which is exponential in n. ◻
Therefore, Theorem 10 is true, with En as a counterexample.

Corollary 1. There is no polynomial p and function f:P→P(C) such that f fulfills Preservation of Consistent Subgraphs and |f(En)|≤p(n).
Remark 3. In terms of the number of vertices v=3n of Ev, this bound is (3/2)^(v/3) = (³√(3/2))^v ≈ 1.145^v, which is exponential but can probably be improved upon.
Polynomial Time Complexity and Preservation of Consistent Subgraphs are Incompatible
Given that in the worst case, only a small proportion of consistent subgraphs can be preserved, it also is not possible to have an algorithm that returns, for each inclusion-maximal consistent subgraph S, at least one consistent graph that contains S, and computes its output in polynomial time.
Theorem 14. Let A be an algorithm for resolving inconsistent graphs that implements an f which fulfills Preservation of Consistent Subgraphs for all graphs G∈P. Then there exists no polynomial p so that for all directed graphs G∈Pn of size n, A(G) computes its output in fewer than p(n) steps.

Proof. Let C=A(En). Lemma 13 shows that |C| is exponential in the number of vertices (by Remark 3). Any A would at least need to enumerate all C∈C, which takes exponential time. ◻
Remark 4. The set of inclusion-maximal consistent subgraphs of En can be compactly represented as the Cartesian product of the within-level edge choices of the "levels" Vi:

×_{i=1}^{n} {αi→βi, βi→γi, γi→αi}
This might also allow for a compact representation of the result of f which includes all inclusion-maximal consistent subgraphs. We suspect there are counter-examples that don't allow for this, but haven't been able to find any.
Inconsistent Preferences over Lotteries
Von Neumann and Morgenstern formulate their famous theorem by defining some restriction on relations over lotteries von Neumann & Morgenstern 1947, as explained in this section.
Finding a mathematical structure which can encode all inconsistent preferences over lotteries and is still computationally tractable remains an open problem, but we propose two structures which can either tractably encode some subset of inconsistent preferences or are rich enough to encode all inconsistent preferences, but too complex to be compactly represented.
Violating the Axioms
Introducing lotteries allows for a large variety of violations of the von Neumann-Morgenstern axioms.
Discontinuity
Discontinuity in relations over lotteries can occur if we know that l1⪯l2⪯l3, but there is no p so that l2∼[p:l1,(1−p):l3]. A discontinuous preference that fulfills l1⪯l2⪯l3 could then state that for every p∈(0;1] it holds that l2≻[p:l1,(1−p):l3] but l2≺l3: the lottery l2 is strictly preferred over any mixture of l1,l3, but l3 is still strictly preferred to l2. The equivalent can occur if l2 is strictly dispreferred to any mixture of l1,l3, but strictly preferred over l1.
In humans, this can sometimes be observed as the certainty effect from prospect theory, in which subjects systematically overrate the value of certain (deterministic) options, which leads to the Allais paradox.
A view under which discontinuities of this type make sense is if an agent has a specific aversion to lotteries, irrespective of the options they are comprised of (Von Neumann and Morgenstern call the continuity axiom "excluding a "utility of gambling"" von Neumann & Morgenstern 1947, 3.7.1, and state that "concepts like a "specific utility of gambling" cannot be formulated free of contradiction on this level." [ibid.]).
Dependence
Violations of the independence axiom ("dependence") occur if for two lotteries l1,l2 (with l1⪯l2) there is a lottery l3 and a p∈[0;1] so that [p:l1,(1−p):l3]≻[p:l2,(1−p):l3]: Mixing l3 in equal proportion into both l1 and l2 causes the preference to switch.
Together with a strong preference for certainty it is observed in the Allais paradox: In experiments with humans, the lottery A1=[1: $1 mio.] is strictly preferred over the lottery B1=[0.89: $1 mio., 0.01: $0, 0.1: $5 mio.], but the lottery B2=[0.9: $0, 0.1: $5 mio.] is strictly preferred over A2=[0.89: $0, 0.11: $1 mio.].
By using the independence axiom, these two preferences can be shown to be contradictory. This can be done by first "mixing out" 0.89 of $1 mio. from A1 and B1, that is, representing [1: $1 mio.] as [0.89: $1 mio., 0.11: $1 mio.] and then (by independence) dropping 0.89: $1 mio. from A1 and B1, and re-normalizing the probabilities so that they sum to 1. One can then "mix in" 0.89 of $0 into the two resulting distributions to create A2 and B2, so under the von Neumann-Morgenstern axioms A1≺B1 and B2≺A2 contradict each other:

A1≺B1
⇔ [1: $1 mio.] ≺ [0.89: $1 mio., 0.01: $0, 0.1: $5 mio.]
⇔ [0.89: $1 mio., 0.11: $1 mio.] ≺ [0.89: $1 mio., 0.01: $0, 0.1: $5 mio.]
⇔ [1: $1 mio.] ≺ [1/11: $0, 10/11: $5 mio.]
⇔ [0.89: $0, 0.11: $1 mio.] ≺ [0.9: $0, 0.1: $5 mio.]
⇔ A2≺B2
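The same contradiction can be checked numerically: under expected utility, EU(B1) − EU(A1) and EU(B2) − EU(A2) are the same expression for any utility assignment, so the observed preference pattern cannot be represented. A small sketch (the utility values are arbitrary illustrations):

```python
def eu(lottery, u):
    # Expected utility of a lottery given as {outcome: probability}.
    return sum(p * u[o] for o, p in lottery.items())

A1 = {"1M": 1.0}
B1 = {"1M": 0.89, "0": 0.01, "5M": 0.10}
A2 = {"0": 0.89, "1M": 0.11}
B2 = {"0": 0.90, "5M": 0.10}

# For ANY utility function u: EU(B1) - EU(A1) == EU(B2) - EU(A2), so
# A1 ≻ B1 and B2 ≻ A2 cannot both hold under expected utility.
u = {"0": 0.0, "1M": 10.0, "5M": 11.0}  # arbitrary example values
print(abs((eu(B1, u) - eu(A1, u)) - (eu(B2, u) - eu(A2, u))) < 1e-12)  # True
```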
Representing Inconsistencies
It is more difficult to find a mathematical structure to represent arbitrary inconsistent preferences over lotteries over some set of options Ω.
Edge-Weighted Graphs
Given Ω, some inconsistent preferences on lotteries on Ω can be represented by the set GΩ of edge-weighted directed graphs on Ω, where edge weights of a graph G can be expressed as the values of a function wG:Ω×Ω→R.
Definition 12. The subset SΩ⊂GΩ of consistent preferences on Ω is the set of all edge-weighted directed graphs that are complete, transitive, irreflexive and weight-transitive, where a graph is weight-transitive if for all vertices α,β,γ it holds that wG(α→β)=c1 ∧ wG(β→γ)=c2 ⇒ wG(α→γ)=c1+c2.
An element from SΩ assigns each element from Ω a cardinal value, equivalent to a utility function on Ω.
Edge-weighted directed graphs on Ω are not expressive enough to represent all relevant inconsistent preferences, though. As a trivial example, let l1=[0.25:α,0.75:β] and l2=[0.75:α,0.25:β] with l1≺l2, but l3=[0.3:α,0.7:β],l4=[0.7:α,0.3:β] with l3≻l4. The first preference implies a positive weight for the edge α→β, but the second preference implies a negative weight for α→β.
Introducing two positively weighted edges between α and β (creating a two-cycle) can represent that such a preference between lotteries is present, but it doesn't allow reconstruction of which lotteries are preferred over which others: Given a weight wl for the edge α→β and a weight wr for the edge β→α, one cannot reconstruct whether l1≺l2 or l1≻l2.
Arbitrary Relations over the Lotteries
As von Neumann & Morgenstern 1947 uses lotteries on Ω as the set of options over which agents can have preferences, a natural instinct is to use arbitrary relations over lotteries on Ω as the mathematical object to represent preferences.
However, if Ω has at least two elements, such a relation can be uncountably large and without compact representation, making it impossible to handle computationally.
Example 2. A pathological example would be a relation R⊆Δ(Ω)×Δ(Ω) on probability distributions over Ω={α,β} in which [p:α,(1−p):β]≺[q:α,(1−q):β] if and only if p∈[0;1] is an uncomputable real number and q∈[0;1] is a computable real number.
We were also unable to find a method for resolving such inconsistent preferences into their consistent versions.
Algorithms

After some search, we were able to identify HodgeRank from Jiang et al. 2011 as a candidate algorithm for resolving an edge-weighted inconsistent graph into an edge-weighted consistent graph. Some other possible candidates for resolving inconsistent preferences over edge-weighted graphs were considered, and finally rejected.

One option was the PageRank algorithm, also mentioned in Sun et al. 2017. We rejected PageRank for the same reason as Sun et al. 2017 did: In a directed acyclic graph, a unique greatest element does not necessarily receive the highest ranking. This problem extends to other centrality measures such as degree centrality and betweenness centrality: In graphs that are already consistent, the greatest element usually receives a low centrality score, and elements closer to the center receive larger scores, which runs counter to our criteria.

HodgeRank
HodgeRank, introduced in Jiang et al. 2011, is an algorithm based on Hodge theory from algebraic geometry for decomposing a doubly edge-weighted, potentially not fully connected graph G=(Ω,E,w:E→R∪{nan},l:E→N) into the sum of three edge-weighted graphs: a gradient graph wg, a locally cyclic (curl) graph wc, and a harmonic graph wh. Then w(e)=wg(e)+R(e)=wg(e)+wc(e)+wh(e), where R is the residual.
Jiang et al. 2011 develop HodgeRank from a social-choice theoretic perspective: Given a set of incomplete cardinal ratings C of the type (R∪{nan})^(n×m) by a set V={1,…,m} of voters on A={1,…,n} alternatives, one can construct an edge-weighted graph GC=(Ω,E,w,l) where the nodes are the options A and each edge weight is some combination of the cardinal votes on the options ω1,ω2 that comprise the edge. An edge weight can be, for example, the arithmetic mean

wC(ω1→ω2) = (∑_{i: Ci,ω1, Ci,ω2 ≠ nan} (Ci,ω2 − Ci,ω1)) / |{i | Ci,ω1, Ci,ω2 both ≠ nan}|

though Jiang et al. 2011 also discuss using other methods such as the geometric mean or the ratio of preference to dispreference.
If every voter assigns nan to at least one of ω1 and ω2, there is no edge between the two options. The function l:E→N denotes the number of voters which have a non-nan rating for both nodes of the edge. In the case where we do not take the social choice view, we can assume that ∀e∈E:l(e)=1, which does not change the process of computing the output of HodgeRank.
function HodgeRank(G)  # G is a tuple (Ω, E, w, l)
    Revert all e∈E with w(e)<0 so that they now have positive weight.
    f = (w(e₁), …, w(eₖ))
    L = diag(l(e₁), …, l(eₖ))
    O = zeros(|E|, |Ω|)
    for (u,v) in E:
        O_eu = -1, O_ev = 1
    s = -(O.T × L × O)⁺ × O.T × L × f  # A⁺ is the Moore-Penrose pseudo-inverse of A
    return s
Computing HodgeRank from an edge-weighted directed graph. This pseudocode is implemented in Python here.
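The pseudocode also translates directly to NumPy. Below is a sketch of our own (not the linked implementation); an edge u→v with positive weight w is read as "u is preferred to v by margin w", and a higher output potential means a more-preferred option:

```python
import numpy as np

def hodge_rank(nodes, edges, w, l=None):
    """Least-squares potentials for an edge-weighted directed graph,
    following the pseudocode above."""
    l = l if l is not None else {e: 1 for e in edges}
    idx = {v: i for i, v in enumerate(nodes)}
    # Revert negatively weighted edges so that all weights are positive.
    oriented = [((v, u), -w[u, v], l[u, v]) if w[u, v] < 0
                else ((u, v), w[u, v], l[u, v])
                for (u, v) in edges]
    f = np.array([wt for _, wt, _ in oriented], dtype=float)
    L = np.diag([float(le) for _, _, le in oriented])
    O = np.zeros((len(oriented), len(nodes)))
    for row, ((u, v), _, _) in enumerate(oriented):
        O[row, idx[u]], O[row, idx[v]] = -1.0, 1.0
    # pinv is the Moore-Penrose pseudo-inverse; it also picks the
    # minimum-norm solution, so the potentials sum to zero.
    s = -np.linalg.pinv(O.T @ L @ O) @ O.T @ L @ f
    return dict(zip(nodes, s))

# A single preference a ≻ b with margin 1 yields symmetric potentials,
print(hodge_rank("ab", [("a", "b")], {("a", "b"): 1.0}))  # ≈ {'a': 0.5, 'b': -0.5}
# while a perfectly cyclic preference yields potential ≈ 0 everywhere.
cycle = [("a", "b"), ("b", "c"), ("c", "a")]
print(hodge_rank("abc", cycle, {e: 1.0 for e in cycle}))
```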
Remark 5. One might ask, under the social choice view, whether it makes sense for some voter v∈V to lie about their preferences over A in order to change the output of HodgeRank to correspond to their own ranking ordinally. This is in fact the case, and therefore HodgeRank is not strategy-free. It is easy to find an example: Assume there are three options A={a,b,c} and three voters V={1,2,3}, and let the cardinal values assigned to the options be u1(a)=4, u1(b)=3, u2(b)=4, u2(c)=3, u3(c)=4, u3(a)=3, with the rest of the values assigned to the options being nan. Then the values HodgeRank assigns to the options are h(a)=h(b)=h(c)=0. But voter 1 can change their reported assignments to u′1(a)=5, u′1(b)=3, u′1(c)=1, changing the outputs of HodgeRank to h′(a)=1, h′(b)=0 and h′(c)=−1, which is more compatible with their preferences. It would be interesting to investigate the computational complexity of finding manipulations of the existing preferences of one voter that ordinally change the output of HodgeRank to more strongly conform to that voter's preferences.

Besides the disadvantage of allowing for strategic manipulation, the decomposition returned by
HodgeRank appears to display many desirable properties as a method for resolving inconsistent preferences over edge-weighted graphs:

- HodgeRank still returns a result, even if edges are missing or there are positive-valued cycles in the data.
- HodgeRank returns an affine transformation of the result that the Borda count would return.

In the context of inconsistent preferences, HodgeRank can be interpreted as taking the observed preferences of an agent as an edge-weighted directed graph, and decomposing it so that the potential function p determines how much the agent values the different elements of Ω. Here p can act as a utility function. The social-choice theoretic perspective offers the intriguing possibility of modeling agents as being comprised of subagents Demski & Garrabrant 2019, Minsky 1988, which we will not pursue further here.

Applications
Equipped with a notion of how to represent inconsistent preferences and how to resolve them, one can examine problems that have come up in other contexts and apply the knowledge gained to them. I will examine one of those: The problem of changing a preference as the underlying set of options changes.
Ontology Identification, Ontological Shifts and Ontological Crises
The term "ontological crisis" was introduced in de Blanc and intuitively refers to a scenario in which an agent has preferences, defined over some world model, and then the world model changes without corresponding changes in the values de Blanc 2011.
An example of this can be observed in human values before and after exposure to philosophy: A human might have a value they would formulate as "I value the continuation of my life". However, after reading Reasons and Persons, the view of personal identity that justifies a notion of "continuation" might seem much less defensible, as thought experiments around teleportation, the fusion and fission of persons, gradual replacement of the body or atom-by-atom recreation of the body all undermine the concept of a single fixed personal identity.
However, this person would likely not just give up their value of their continued existence, but instead attempt to "port it" to the new world model.
Soares and Fallenstein motivate the problem of ontological crises in the context of a problem they call Ontology Identification: Given a Turing machine using the atomic model of physics, they ask how one can identify which parts of the program and the tape represent atoms or macroscopic objects, and repeat the question for a Turing machine using a quantum-mechanical model of the world Soares & Fallenstein 2017. The problem is further elaborated on outside of the academic literature early in Dai 2012 and Dai 2019, in Yudkowsky et al. 2016 and Andreev & Yudkowsky 2016, and under the term "model splintering" in Armstrong 2020, Armstrong & Gorman 2022.
The word "ontology" here is a place-holder for a more rigorously defined model, such as Markov Decision Processes (MDPs) or Partially Observable Markov Decision Processes (POMDPs).
It seems useful to disambiguate some terms that appear in the literature, to create clarity about what they mean:
Existing Approaches
De Blanc approaches the problem of ontological crises formally in the context of what they call "finite state models" (they neglect to give a full definition) de Blanc 2011, and one can refine their problem statement and their approach to a solution by stating it in terms of Markov decision processes Russell & Norvig 2010, ch. 17.1.
Definition 13. A finite Markov decision process (MDP) M=(S,A,P,R,I) is a tuple of five elements, where S is a set of states (in this case finite, with n=|S|), the set A is a set of actions (also finite, with m=|A|) and P(s,a,s′):S×A×S→[0,1] is a function that returns the probability of transitioning from s to s′ via the action a, that is P(s,a,s′)=Pr(st+1=s′|st=s,at=a). The function R:S→R is a reward function that returns a real-numbered value for reaching a certain state[7], and I:S→[0,1] is a probability distribution for the states that the agent is initially in.
Given some ordering of the states s1,…,sn, the transition function P from M can also be represented as a family of column-stochastic matrices T(a) (the transition matrices; each column T(a)∗,j is the distribution over successor states of sj), R can be encoded as a real-numbered vector of size n, and I can be described as a real-numbered vector of size n in which the elements sum to 1.

$$T(a)=\begin{pmatrix}P(s_1\mid a,s_1)&\cdots&P(s_1\mid a,s_n)\\\vdots&\ddots&\vdots\\P(s_n\mid a,s_1)&\cdots&P(s_n\mid a,s_n)\end{pmatrix}\in[0,1]^{n\times n}$$

$$R=\begin{pmatrix}R(s_1)\\\vdots\\R(s_n)\end{pmatrix}\in\mathbb{R}^n\qquad I=\begin{pmatrix}I(s_1)\\\vdots\\I(s_n)\end{pmatrix}\in\mathbb{R}^n$$
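To make the matrix convention concrete, here is a small sketch (names and the example MDP are mine) that builds T(a) from a tabular P, with columns indexed by source states so that each column sums to 1:

```python
import numpy as np

# A tiny two-state MDP: under action a, s1 goes to s1 or s2 with equal
# probability, while s2 always stays in s2.
states = ['s1', 's2']
P = {('s1', 'a', 's1'): 0.5, ('s1', 'a', 's2'): 0.5, ('s2', 'a', 's2'): 1.0}

def transition_matrix(a):
    """Entry (i, j) is P(s_i | a, s_j): the chance of reaching s_i from s_j."""
    n = len(states)
    T = np.zeros((n, n))
    for j, src in enumerate(states):
        for i, dst in enumerate(states):
            T[i, j] = P.get((src, a, dst), 0.0)
    return T

T = transition_matrix('a')
assert np.allclose(T.sum(axis=0), 1.0)  # each column is a distribution
```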
Consider two MDPs M1=(S1,A,P1,R1,I1) and M2=(S2,A,P2,R2,I2), but with R2 being unknown. An agent who starts with M1, but who discovers that a better model M2 of the environment has a different set of states and transition probabilities (however, the set of actions stays the same) and thereby now wants to operate in M2 has the problem of defining R2.
Definition 14. The method de Blanc uses to find R2 is to find two linear maps ϕ∈Rn1×n2 and ψ∈Rn2×n1 (with sizes n1=|S1|, n2=|S2|) such that ϕ and ψ can be used to "translate" between M1 and M2 de Blanc 2011. For any a∈A, ϕ and ψ should be selected so that ψT1(a)ϕ is approximately equal to T2(a) (from here on written as ψT1(a)ϕ≈T2(a)). It should also hold that ϕT2(a)ψ≈T1(a).
De Blanc doesn't name ϕ,ψ, but we will call such ϕ,ψ for MDPs a de Blanc bisimulation.
Definition 15. Let BisimulationDifference(M1,M2,ϕ,ψ) for two MDPs M1,M2 and a de Blanc bisimulation ϕ,ψ be
$$\begin{aligned}\text{BisimulationDifference}(M_1,M_2,\phi,\psi)={}&\sum_{a\in A}\sum_{i=1}^{n_2}D_{KL}\!\left((T_2(a))_{*,i}\,\big\|\,(\psi T_1(a)\phi)_{*,i}\right)\\{}+{}&\sum_{a\in A}\sum_{j=1}^{n_1}D_{KL}\!\left((T_1(a))_{*,j}\,\big\|\,(\phi T_2(a)\psi)_{*,j}\right)\\{}+{}&D_{KL}(I_2\,\|\,I_1^\top\phi)+D_{KL}(I_1\,\|\,I_2^\top\psi)\end{aligned}$$

Here DKL((T2(a))∗,i||(ψT1(a)ϕ)∗,i) is the divergence between the ith column of the state transition matrix of M2 for action a and the ith column of the state transition matrix of M1 transformed via ψ and ϕ into the state space of M2. We compute the Kullback-Leibler divergence column-wise because each column is a probability distribution (the distribution over successor states). These differences are summed up across all columns and actions.

We also add the sums over all actions and columns for DKL((T1(a))∗,j||(ϕT2(a)ψ)∗,j), because the Kullback-Leibler divergence is asymmetric.

Finally, we add the Kullback-Leibler divergences between the initial state distributions, again symmetrically.
Definition 16. We call a function that returns a de Blanc bisimulation for two MDPs by minimizing the Kullback-Leibler divergences between the MDPs BisimulateShift.

$$\text{BisimulateShift}(M_1,M_2)=\underset{\phi,\psi}{\operatorname{argmin}}\ \text{BisimulationDifference}(M_1,M_2,\phi,\psi)$$
The matrices ϕ and ψ can be found by minimising BisimulationDifference(M1,M2,ϕ,ψ) with a hill-climbing algorithm from random initial values, or by gradient descent with BisimulationDifference as a loss function.
De Blanc notes that both products of the matrices ϕ,ψ are close to the identity matrix after computing BisimulateShift(M1,M2), that is ϕψ≈1n1 and ψϕ≈1n2, which implies that mapping from M1 to M2 and back loses little information and the state transition probabilities can be mapped to each other.
Given ϕ and ψ, it is possible to infer R2 using ϕ: It is R2=R⊤1ϕ.
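A sketch of Definition 15 in NumPy (function names are mine; I assume the column-stochastic convention from above, and the clipping inside the KL term is a hack to keep the divergence finite when ψT1(a)ϕ has zero or slightly negative entries, a case the definition itself does not address):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence, clipped to stay finite."""
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def bisimulation_difference(T1, T2, I1, I2, phi, psi):
    """T1, T2: dicts action -> column-stochastic matrix; phi: n1 x n2, psi: n2 x n1."""
    total = 0.0
    for a in T1:
        A21 = psi @ T1[a] @ phi  # n2 x n2 approximation of T2[a]
        A12 = phi @ T2[a] @ psi  # n1 x n1 approximation of T1[a]
        total += sum(kl(T2[a][:, i], A21[:, i]) for i in range(T2[a].shape[1]))
        total += sum(kl(T1[a][:, j], A12[:, j]) for j in range(T1[a].shape[1]))
    return total + kl(I2, phi.T @ I1) + kl(I1, psi.T @ I2)

# If M2 is just M1 with its two states swapped, the swap matrix is an exact
# de Blanc bisimulation and the difference vanishes.
S = np.array([[0.0, 1.0], [1.0, 0.0]])
T1 = {'a': np.array([[0.5, 0.0], [0.5, 1.0]])}
T2 = {'a': S @ T1['a'] @ S}
I1 = np.array([0.3, 0.7])
d = bisimulation_difference(T1, T2, I1, S @ I1, S, S)
```

This function can then be handed directly to a generic optimizer (hill climbing or gradient descent over the entries of ϕ and ψ) as the loss, as described above.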
Advantages
There are some advantages to taking this approach for resolving ontological crises. One is that it does not presuppose a known mapping between S1 and S2, and can infer the mapping solely from the transition behavior of M1 and M2.
Another advantage is that, for an exact solution found by BisimulateShift, the expected reward of repeating any action in M2 only depends on the expected reward of executing the same action in M1 with a linearly transformed initial state distribution.
Proposition 4. Let M1,M2 be two MDPs, and let ϕ,ψ be two matrices found by BisimulateShift, so that ϕψ=1n1, ψϕ=1n2 and ψT1(a)ϕ=T2(a). For an action a∈A, let r2(a,k,i2) be the expected average reward of executing the action a for k∈N times in the MDP M2 with an initial state distribution i2∈Rn2, and r1(a,k,i1) the equivalent for M1 (where i1∈Rn1). In matrix notation, the expected average reward of executing a for k times in the two MDPs is

$$r_1(a,k,i_1)=\frac{1}{k}\sum_{i=1}^{k}R_1^\top\times(T_1(a))^i\times i_1$$

and

$$r_2(a,k,i_2)=\frac{1}{k}\sum_{i=1}^{k}(R_1^\top\phi)\times(T_2(a))^i\times i_2$$

Then r2(a,k,i2)=r1(a,k,Mi2), where M=ϕ∈Rn1×n2, and therefore Mi2 is a linear transformation of the distribution over initial states into the state space of M1.
Proof. Using ϕψ=1n1, the power (ψT1(a)ϕ)^i telescopes to ψ(T1(a))^iϕ, so r2(a,k,i2) can be expanded and simplified to

$$\begin{aligned}r_2(a,k,i_2)&=\frac{1}{k}\sum_{i=1}^{k}(R_1^\top\phi)\times(T_2(a))^i\times i_2\\&=\frac{1}{k}\sum_{i=1}^{k}(R_1^\top\phi)\times(\psi T_1(a)\phi)^i\times i_2\\&=\frac{1}{k}\sum_{i=1}^{k}R_1^\top\times\phi\psi\times(T_1(a))^i\times\phi i_2\\&=\frac{1}{k}\sum_{i=1}^{k}R_1^\top\times(T_1(a))^i\times\phi i_2\\&=r_1(a,k,\phi i_2)\end{aligned}$$

◻
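The proposition can be checked numerically in the exactly-solvable case where M2 is a relabeling of M1 by a permutation matrix, so that ϕψ=1 holds exactly (the construction is mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 5

# M1: a random column-stochastic transition matrix and reward vector.
T1 = rng.random((n, n)); T1 /= T1.sum(axis=0)
R1 = rng.random(n)

# M2 relabels M1's states by a permutation: an exact de Blanc bisimulation.
P = np.eye(n)[rng.permutation(n)]
phi, psi = P.T, P                      # phi @ psi == psi @ phi == identity
T2 = psi @ T1 @ phi
R2 = R1 @ phi                          # inferred reward, R2ᵀ = R1ᵀϕ

i2 = rng.random(n); i2 /= i2.sum()     # initial state distribution in M2

def avg_reward(R, T, i0, k):
    """(1/k) Σ_{i=1}^{k} Rᵀ T^i i0, the expected average reward."""
    return sum(R @ np.linalg.matrix_power(T, i) @ i0 for i in range(1, k + 1)) / k

r2 = avg_reward(R2, T2, i2, k)
r1 = avg_reward(R1, T1, phi @ i2, k)   # the same reward after mapping i2 via ϕ
assert np.isclose(r1, r2)
```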
Conjecture 6. There exists a linear function f(x)=ax+b so that for any a∈A, k∈N, it holds that r2(a,k,i2)=f(r1(a,k,i1)).
Disadvantages
The approach de Blanc outlines has some limitations. As they remark, their setting of what they call "finite state models" is a fairly restrictive class of computational models of the environment. Similarly, MDPs are also not able to represent some environments, especially ones in which observations of states carry uncertainty.
They also remark that BisimulateShift "is not computationally tractable for large ontologies", and their lack of clarity on the exact algorithm used (as well as the absence of any formal analysis of their method) makes it difficult to judge the computational complexity of the problem. It might be fruitful to study the convergence behavior of using different optimization procedures for finding ϕ and ψ to make further statements about the computational complexity of BisimulateShift.
Finally, the setting of a "finite state model" or an MDP can't encode certain types of consistent preferences. Let M=(S={s,s′},A={a1,a2},P,R,I), where P(s,a1,s′)=P(s′,a1,s)=P(s,a2,s)=P(s′,a2,s′)=1 (that is, a1 causes the agent to switch states, and a2 is the action where the agent stays in the same state).

Let now t1,t2∈(S×A)k×S be two trajectories in M, namely t1=(s,a1,s′,a1,s,a2,s) and t2=(s,a2,s,a1,s′,a1,s). Then the cumulative reward of both trajectories is the same, no matter the reward function: R(t1)=R(s,a1,s′)+R(s′,a1,s)+R(s,a2,s)=R(s,a2,s)+R(s,a1,s′)+R(s′,a1,s)=R(t2). However, intuitively there should be a way to value these two trajectories differently: It should be possible to value being in s′ earlier rather than later.
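This can be checked directly (integer rewards keep the comparison exact; the setup names are mine):

```python
import random

# t1 and t2 from the text: the same transitions, visited in a different order.
t1 = [('s', 'a1', "s'"), ("s'", 'a1', 's'), ('s', 'a2', 's')]
t2 = [('s', 'a2', 's'), ('s', 'a1', "s'"), ("s'", 'a1', 's')]

random.seed(0)
for _ in range(100):
    # Any reward function R: S -> R values both trajectories identically,
    # because cumulative reward only depends on the multiset of states visited.
    R = {'s': random.randint(-100, 100), "s'": random.randint(-100, 100)}
    cumulative = lambda t: sum(R[dst] for (_, _, dst) in t)
    assert cumulative(t1) == cumulative(t2)
```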
Using Inconsistent Preferences to Represent Ontological Crises
The framework of representing preferences as edge-weighted directed graphs on a set Ω of vertices, and consistent preferences as the set of edge-weighted acyclic tournaments on a set of deterministic options Ω, can be used to represent ontological shifts.
Definition 17. Given a consistent edge-weighted graph G=(Ω,EG,w), a graph-based ontological shift is a function from Ω to subsets of a new set of options Ξ, together with coefficients: s:Ω→P(Ξ×[0,1]), where (ξ,c)∈s(ω) means that ω∈Ω in the old set of options turned out to be ξ∈Ξ to the degree c. The larger c, the more ω is ξ.
In this text, I will assume that ∀ω∈Ω:0≤∑(ξ,c)∈s(ω)c≤1.
If the coefficients of the image of ω sum to 1, that means that ω has been completely "ported over" to Ξ. If they sum to less than 1, that means that ω was a (partially) confused concept, if the coefficients in the image sum to 0 (or s(ω)=∅), that means that ω was a wholly confused concept and does not actually exist. If the sum of the coefficients are >1, that means that ω turned out to be "more real" than in the old set of options (which we exclude as an option here).
Definition 18. Given G, the result G⋆=(Ξ,E⋆,w⋆:Ξ×Ξ→R) after a graph-based ontological shift s is an edge-weighted graph.
The output of the function t is a combination of the weights w of G and the coefficients of s (for all ω1,ω2):
$$t(\xi_1,\xi_2,G,s)=\sum_{(\omega_1,\omega_2)\in E}\ \sum_{\substack{(\xi_1,c_1)\in s(\omega_1)\\(\xi_2,c_2)\in s(\omega_2)}}c_1\cdot c_2\cdot w(\omega_1,\omega_2)$$

Then for all ξ1,ξ2 the value of w⋆(ξ1,ξ2) is t(ξ1,ξ2,G,s).
Example 3. Let Ω={L (Land animals), A (Air animals), W (Water animals)}, and let the current preference prefer land animals over air animals over water animals, that is EG={L→A, L→W, A→W} with weights w(L,A)=1, w(L,W)=1 and w(A,W)=2.
Let now Ξ={M (Mammals),B (Birds),F (Fish),I (Insects)} be a set that better represents the available options, and let s be
$$\begin{aligned}s(L)&=\{(M,0.5),(I,0.5)\}\\s(A)&=\{(B,0.45),(I,0.45),(M,0.1)\}\\s(W)&=\{(F,0.9),(M,0.1)\}\end{aligned}$$
That is, land animals turn out to be half mammals, half insects, air animals are mostly birds and insects, and few mammals, and water animals are mostly fishes, and few mammals.
(Ignoring, for the sake of simplicity of the example, exocoetidae[8] and aquatic insects).
Undergoing an ontological shift s and then resolving the ontological crisis using HodgeRank. In the right image, the transitively implied (correctly weighted) edges are omitted for readability.

The procedure for resolving ontological crises by representing them as inconsistent preferences is given in pseudocode below as ResolveShift. The algorithm takes a consistent edge-weighted graph G, a graph-based ontological shift s mapping elements from Ω to a new set Ξ, together with coefficients, and a method for resolving inconsistent preferences on edge-weighted graphs. It then creates a new graph G⋆, mapping all nodes using s and creating new edges from the existing weights and coefficients with the function t explained above. Finally, G⋆ is resolved into a consistent preference with the method Resolve (which may be specified externally, e.g. by using HodgeRank, or by dropping the weights and using EGEDmin).

Resolving an ontological shift s on an edge-weighted directed graph. G is a tuple (Ω,E,w), and s is of type Ω→P(Ξ×[0,1]).
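A Python sketch of ResolveShift (my transcription; names and data layout are assumptions), using the weight function t from above; passing the identity for Resolve returns the shifted but unresolved graph:

```python
def resolve_shift(G, s, resolve):
    """G = (nodes, edges, w); s maps each old node to a set of
    (new node, coefficient) pairs; resolve maps the shifted graph to a
    consistent one (pass the identity to skip resolution)."""
    nodes, edges, w = G
    new_nodes = {xi for om in nodes for (xi, _) in s.get(om, set())}
    w_star = {}
    for (o1, o2) in edges:
        for (x1, c1) in s.get(o1, set()):
            for (x2, c2) in s.get(o2, set()):
                # the function t: sum of c1 * c2 * w over all contributing edges
                w_star[(x1, x2)] = w_star.get((x1, x2), 0.0) + c1 * c2 * w[(o1, o2)]
    new_edges = [e for e, wt in w_star.items() if wt != 0.0]
    return resolve((new_nodes, new_edges, w_star))

# Example 3: land/air/water animals shifting to mammals/birds/fish/insects.
G = (['L', 'A', 'W'],
     [('L', 'A'), ('L', 'W'), ('A', 'W')],
     {('L', 'A'): 1, ('L', 'W'): 1, ('A', 'W'): 2})
s = {'L': {('M', 0.5), ('I', 0.5)},
     'A': {('B', 0.45), ('I', 0.45), ('M', 0.1)},
     'W': {('F', 0.9), ('M', 0.1)}}
nodes, edges, w_star = resolve_shift(G, s, lambda g: g)
```

For instance, the new edge weight w⋆(M,B) collects the single contribution 0.5·0.45·1 = 0.225 from the old edge L→A.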
Advantages
An advantage of ResolveShift over BisimulateShift is the set of preferences that can be represented by G and G⋆. If Ω is the set of all finite sequences of state-action pairs ((S×A)k×S)k≥0, then t1=(s,a1,s′,a1,s,a2,s) and t2=(s,a2,s,a1,s′,a1,s) are two different elements of Ω, and a preference of t1 over t2 can be represented e.g. with an edge t1→t2 in E.

A further advantage of ResolveShift is that it has a polynomial runtime complexity of O(|E|⋅m2)⊆O(n2⋅m2) (with n=|Ω| and m=|Ξ|), unlike BisimulateShift, which offers no such guarantees.

Disadvantages
If the dynamics (e.g. the transition function) of the elements of Ξ are known, then BisimulateShift is able to use this information to construct R2. Additionally, if no mapping s from Ω to Ξ exists (that is, only Ω and Ξ are known, but their relations are not), then ResolveShift is not applicable.

Definition 19. Let f:G→S be a method for resolving inconsistent preferences represented by edge-weighted graphs, and let s1,s2,…,sn (with si:Ωi→P(Ωi+1×[0,1])) be a family of functions describing ontological shifts.

Let g1,g2,…,gn be a family of functions that return the result of ResolveShift using the shift function si for gi, but without executing a resolution procedure: gi(Gi)=ResolveShift(Gi,si,id), where id is the identity function.

Let G1=(Ω1,E1,w1) be an arbitrary consistent preference on Ω1.
Then f is distributive over ontological shifts if and only if
(f∘gn∘⋯∘g2∘g1)(G1)=(f∘gn∘f∘⋯∘f∘g2∘f∘g1)(G1)
Intuitively, this condition says that it shouldn't matter whether an agent changes their mind on which things exist to have preferences over multiple times, and then resolves the resulting preferences into consistent ones, or resolve their preferences after each time they undergo an ontological shift si.
Proposition 5. HodgeRank is not distributive over ontological shifts.

Proof. It is easy to find examples where HodgeRank is not distributive over ontological shifts. Let G1=(Ω={a,b},E={a→b}) with w(a,b)=1. Let s1(a)={(d,0.28)}, s1(b)={(c,0.57),(e,0.43)}, and let s2(c)={(f,0.014)}, s2(d)={}, and s2(e)={(f,0.34),(g,0.66)}.

Then Figure 17 shows applying the two ontological shifts s1,s2 and resolving at the end using HodgeRank, and Figure 21 shows applying HodgeRank after s1 and then again after s2. The final graphs have different weights: resolving only at the end with HodgeRank results in a graph in which there is indifference between the vertices f and g. ◻
This example works because d gets "deleted" from the set of options, so having all preferences depend on d without resolving the incomparability between c and e results in there being no preference, while resolving retains a slight preference of e over c, which remains with f and g.
Conjecture 7. There is a resolution function f for edge-weighted graphs that is distributive over ontological shifts in this framework.
Conclusion
In this investigation, we have identified the problem of resolving preferences that are inconsistent under the von Neumann-Morgenstern framework.
We first examined the restricted case of preferences over deterministic options, using directed graphs as an underlying mathematical structure to represent inconsistent preferences. We proposed two algorithms: EGEDmin and HodgeResolve (based on the HodgeRank algorithm). We analyzed both algorithms on several different criteria, with no clear winner. We also proved that the criteria Resolution to Polynomially Many Preferences and Preservation of Consistent Subgraphs are incompatible, as are Resolution to Polynomially Many Preferences and Polynomial Time Complexity.
For inconsistent preferences over lotteries, we examined a representation using edge-weighted directed graphs. This representation is inadequate, as it can not encode all possible inconsistent preferences, most notably the violation of independence observed in the Allais paradox.
We nevertheless reviewed the HodgeRank algorithm, which allows for resolving inconsistent edge-weighted directed graphs into utility functions, and observed that HodgeRank has several desirable properties, but that it fails to conform to the (hard to fulfill) criterion of strategy-freeness from social choice theory.

We then connected inconsistent preferences to the little-explored issue of ontological crises, and offered a new perspective on what to do after a change in the set of objects that a preference was defined over, opening up many questions we didn't have the time to solve.
Further Research
We believe that the topics discussed in this text offer some fruitful lines of inquiry into the mathematical structure of wanting.
On a concrete level we stated several conjectures and questions we were not able to prove, but which might be relatively easy to answer. Of these, conjecture 5 on whether HodgeResolve fulfills Preservation of Complete Domination is most relevant, but conjecture 1 and conjecture 2 might also be interesting from a graph-theoretic perspective.

Additionally, we only analysed two methods of mapping from directed graphs to acyclic tournaments, but are convinced that there are many other methods that could be investigated, specifically ones that use different measures of graph similarity, ones that result in weak orderings, or ones that are selected to preserve as many inclusion-maximal consistent subgraphs as possible.
Resolving inconsistent graphs could also be approached from a different perspective using random walks on the graph, breaking cycles and completing edges as they are encountered. An extension of this setup could involve two agents: one trying to resolve its preferences by breaking cycles as it traverses the graph representing its preferences, and an adversary attempting to money-pump the agent. This setup is also amenable to an analysis of money-pumping in the light of computational complexity: which violations of the von Neumann-Morgenstern axioms are computationally easy or hard to find, and what is the attack/defense balance between finding and exploiting such violations?
In the context of preferences over lotteries, we are left with no satisfactory mathematical structure that we can use: edge-weighted graphs are not expressive enough, and arbitrary relations over all lotteries too unwieldy. Finding such a structure or finding a method for resolving arbitrary relations over lotteries would be helpful for further progress. Inspiration could be found in models of human decision making from mathematical psychology, such as the Priority Heuristic and the Random Utility Model from Gamal 2013 and the BEAST model Erev et al. 2017, as well as alternatives to the utility framework from decision theory, such as risk-weighted utility maximization or the Jeffrey-Bolker axioms Buchak 2013, Jeffrey 2004.
The problem of ontological crises appears under-researched. As a first step, BisimulateShift could be extended to POMDPs, but finding out how real-world systems change their internal representations during learning could be valuable, with Nanda et al. being a fascinating analysis of the toy case of modular addition in neural networks Nanda et al. 2023. This question could also be interesting for social scientists (discovering how humans manage ontological crises in practice) and philosophers.
We would also like to see further exploration of value-learning Dewey 2011 of inconsistent preferences, perhaps extending Evans et al. to allow for a larger diversity of inconsistent preferences Evans et al. 2016.
Acknowledgements
This text has been long in the making, and has benefitted from much advice and input. I thank Holger Dell for his help and suggestions. I'd also like to thank the Crystal Healing Group for their inputs, especially Kaarel Hänni for the gentle introduction to Hodge decomposition, and Alexander Gietelink-Oldenziel for the hours of talking about decomposing irrational preferences into rational ones. I also want to thank Felix Harder for help with the impossibility result, and Filip Sondej for his surprising ideas in the lottery case.
Appendix A: Hints in Prior Texts
—Katja Grace, “Counterarguments to the basic AI x-risk case”, 2022
The notation for lotteries is common in social choice theory Gaertner 2009, ch. 8.2. Some sources would instead write this as p1⋅ω1+p2⋅ω2 von Neumann & Morgenstern, 1947, but I have decided against it, since no addition is actually taking place. ↩︎
Unless further specified, in this text it will always be the case that the nodes of G are called Ω and its edges are called E. ↩︎
Sample size too small. ↩︎
Without reflexive edges (ξ,ξ)∈E. ↩︎
This definition allows for there to be a graph G, a consistent subgraph SG of G and a resolved weakly consistent graph W=(Ω,EW)∈f(G) such that there exist nodes ω1,ω2∈Ω in SG which are not strictly ordered in W, that is both ω1→ω2∈EW and ω2→ω1∈EW. It is possible to define a stronger criterion, Strict Preservation of Consistent Subgraphs, which requires that for such ω1,ω2 only the edge ω1→ω2 be present in EW, but we will not work with that definition here. ↩︎
Only if the output is allowed to be a weak ordering. ↩︎
Russell and Norvig note that sometimes R takes actions into account as well: R:S×A×S→R (with different rewards for transitioning to a state with different actions), but also notes that this merely simplifies the description of some environments, but doesn't change which environments can be described Russell & Norvig 2010, ch. 17. ↩︎
Also known as flying fish. ↩︎