Luke tasked me with researching the following question:

I’d like to know if anybody has come up with a good response to any of the objections to ‘full information’ or ‘ideal preference’ theories of value given in Sobel (1994). (My impression is “no.”)

The paper in question is David Sobel’s 1994 paper “Full Information Accounts of Well-Being” (Ethics 104, no. 4: 784–810) (his 1999 paper, “Do the desires of rational agents converge?”, is directed against a different kind of convergence and won’t be discussed here).

The starting point is Brandt’s 1979 book, where he describes his version of utilitarianism in which utility is the degree of satisfaction of the desires of one’s ideal ‘fully informed’ self; Sobel also refers to Railton’s 1986 apologetic. (LWers will note that this kind of utilitarianism sounds very similar to CEV, and hence any criticism of the former may be a valid criticism of the latter.) I’ll steal entirely the opening of Mark C. Murphy’s 1999 paper, “The Simple Desire-Fulfillment Theory” (which rejects any hypotheticals or counterfactuals in desire utilitarianism), since he covers all the bases (for even broader background, see the Tanner Lecture “The Status of Well-Being”):

An account of well-being that [Derek] Parfit labels the ‘desire-fulfillment’ theory (1984, 493) has gained a great deal of support as the most plausible account of what makes a subject well-off. According to the desire-fulfillment, or DF, theory, an agent’s well-being is constituted by the obtaining of states of affairs that are desired by that agent.1 Importantly, though, while all DF theorists affirm that an account of what makes an agent well-off must ultimately refer to desire, there now appears to be a consensus among those defending DF theories that it is not the satisfaction of the agent’s actual desires that constitutes the agent’s well-being, but rather the satisfaction of those desires that the agent would have in what I will call a ‘hypothetical desire situation.’ Just as Rawls holds (1971, 12) that the principles of right are those that would be unanimously chosen in a hypothetical choice situation, that is, a setting optimal for choosing such principles, defenders of DF theory hold that an agent’s good is what he or she would desire in a hypothetical desire situation, that is, a setting optimal for desiring.2 While the precise nature of the hypothetical desire situation is a matter of debate among DF theorists, all of them seem to agree that any adequate DF theory will incorporate a strong information condition into the hypothetical desire situation. In treating of the concept of an individual’s good, Sidgwick writes:

It would seem. . . that if we interpret the notion ‘good’ in relation to ‘desire,’ we must identify it not with the actually desired, but rather with the desirable:—meaning by ‘desirable’ not necessarily ‘what ought to be desired’ but what would be desired. . . if it were judged attainable by voluntary action, supposing the desirer to possess a perfect forecast, emotional as well as intellectual, of the state of attainment or fruition (1981, 110–111).

Brandt writes that a state of affairs belongs to an agent’s welfare only if it is such that “that person would want it if he were fully rational” (1979, 268); an agent’s desire is rational, on Brandt’s view,

if it would survive or be produced by careful ‘cognitive psychotherapy’ [where cognitive psychotherapy is the ‘whole process of confronting desires with relevant information.’]. . . I shall call a desire ‘irrational’ if it cannot survive compatibly with clear and repeated judgments about established facts. What this means is that rational desire. . . can confront, or will even be produced by, awareness of the truth (1979, 113).

And Railton has argued that we should consider an agent’s good to be “what he would want himself to want. . . were he to contemplate his present situation from a standpoint fully and vividly informed about himself and his circumstances, and entirely free of cognitive error or lapses of instrumental rationality” (1986a, 16).

1 Overview

There are at least four general strategies one could take in arguing that such an informed viewpoint is inadequate in capturing and commensurating what is in an agent’s interests.

  1. First, one could argue that the notion of a fully informed self is a chimera. This would likely involve the worry that from the fact that any of the lives that one is to assess the value of must be in some sense available to one (otherwise it could not be a valuable life for one to live) it does not follow that all of them together must be available to one’s consciousness. To make good this suggestion against the full information account one would have to provide reasons to think there are substantive worries about uniting the experience of all lives one could lead into a single consciousness.
  2. Second, one could argue that even in cases in which an agent is adequately informed of the different life paths she is choosing between, there is no single pro-attitude, such as preferring, which appropriately measures the value of the diverse kinds of goods available to an agent…The things that sensibly elicit delight are not generally the same things that merit respect or admiration. Our capacity for articulating our attitudes depends upon our understandings of our attitudes, which are informed by norms for valuation.
  3. Third, one could argue that a vivid presentation of some experiences which could be part of one’s life could prove so disturbing or alluring as to skew any further reflection about what option to choose. Allan Gibbard has suggested the example of “a more vivid realization of what peoples’ innards are like” causing a “debilitating neurosis” which prevents me from eating in public. [cf. Bostrom’s information-harms typology: ‘evocation hazard’; personally, I would use something like ‘brainwashing’ or war & holocausts]
  4. Fourth, one could worry against naturalistic versions of the full information account that the purportedly naturalistically described informed viewpoint essentially invokes unreduced normative notions. [Naturalistic versions seem to assume non-physical definitions, like ‘ideal set of information’, and hence smuggle in non-naturalistic beliefs]

Emphasis added; Sobel pursues line of objection #1.

1.1 The argument

I will try to reconstruct the argument in something more closely approximating propositional logic, so that it's easier to classify any criticism of Sobel by which premise or inference it attacks (a compact schematic of the overall inference follows the list). The following is based on my reading of pg 796–797, 801–808; I omit all the examples, and some of the weaker tangential arguments. (For example, the suggestions that the ideal moral system may go insane from the difficulty of the choices, or that it will despise us for being so pathetic and wish us dead (pg 807), which are obvious anthropomorphisms.)

  1. The ideal moral system must not err
  2. Every judgement of a possible life must be made by an agent
  3. An agent either lives that possible life, or it does not live it
  4. If the agent does not live the possible life:

    1. If the agent does not live the possible life, it does not live the life’s experiences
    2. Experiences may contain otherwise-unobtainable information [‘revelations’]
    3. A judgement based on incomplete information may err
    4. The ideal moral system will not use an agent that does not live the possible life (1, 4.1–4.3)
  5. If the agent does live the possible life, it is either a ‘serial’ agent or an ‘amnesia’ agent

    1. Serial: the agent either lives the same life as the possible life it is judging, or a different life:

      The same life:

      1. To live the same life as the possible life, the agent must know only the same things as the possible life does
      2. Most possible lives do not know what it is to live a different life
      3. If the agent knows only the same things as the possible life does, then in most lives it cannot know what it is to live an additional life
      4. If one does not know what additional lives are like to live, one may err in assessing one’s own life
      5. The serial agent may live a life which does not know what other lives are like
      6. The serial agent may err
      7. The ideal moral system will not use a serial agent which knows the same as the possible life (1, 4.3, 5.1.1.1–6)

      A different life:

      1. If the agent knows more or fewer things than the possible life, it is not identical to the possible life
      2. If it is not identical to the possible life, it may experience things or act differently
      3. If it may experience things or act differently, it may judge experiences or acts differently
      4. If it may judge experiences or acts differently, then it may err
      5. The ideal moral system will not use a serial agent which knows more or less than the possible life (1, 4.3, 5.1.2.1–4)
    2. Amnesia:

      1. If the agent is an amnesia agent, it will work under incomplete information due to forgetting
      2. Each amnesia period will form a different judgement
      3. These judgements may differ
      4. Differing judgements may lead to error

      Rebuttals rejecting 5.2.4:

      1. The judgements can be weighed into a final correct judgement by an unspecified algorithm

        • But - how does this work, exactly? What is the life’s utility over its span?
      2. Only one (‘allegedly temporally privileged’) judgement is used, and a judgement can’t differ with itself
      3. They will not differ, as the fully informed agent at any period will agree with itself at all other periods

        • But - how would one prove such a thing? It is ‘indeterminate’ and ‘unlikely’.
      5. The ideal moral system will not use an amnesiac agent (1, 5.2.1–4)
  6. The ideal moral system will use neither a serial nor an amnesiac agent (5.1.1.7, 5.1.2.5, 5.2.5)
  7. The ideal moral system will not use an agent (3, 4.4, 5, 6)
  8. The ideal moral system will not judge lives (2, 7)
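
To make the overall shape of the inference explicit before analyzing it, here is a compact schematic (the rendering and predicate names are mine, not Sobel's, and only summarize the numbered steps above):

$$
\begin{aligned}
\text{(1)}\quad & \forall a\,[\mathrm{MayErr}(a) \rightarrow \lnot \mathrm{Used}(a)] \\
\text{(2)}\quad & \mathrm{JudgesLives} \rightarrow \exists a\, \mathrm{Used}(a) \\
\text{(3, 5)}\quad & \forall a\,[\mathrm{NotLiver}(a) \lor \mathrm{Serial}(a) \lor \mathrm{Amnesiac}(a)] \\
\text{(4, 5.1, 5.2)}\quad & \text{each disjunct implies}\ \mathrm{MayErr}(a) \\
\text{(6, 7)}\quad & \therefore\ \forall a\, \lnot \mathrm{Used}(a) \\
\text{(8)}\quad & \therefore\ \lnot \mathrm{JudgesLives}
\end{aligned}
$$

Any criticism can then be located as denying (1), denying the exhaustiveness of the case split in (3, 5), or attacking one of the per-case ‘may err’ arguments.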

1.1.1 Analysis

Broken down like this, we can see a number of ways to strengthen or attack the argument. For example, we can strengthen the attack on serial agents who lead different lives (5.1.2) by defining agents and lives as Turing machines and then invoking Rice’s theorem (the generalization of the undecidability of the halting problem) - obviously ‘goodness of life’ is a nontrivial semantic predicate, and so there will be Turing machines for which the question is uncomputable.
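
To spell out the reduction (this sketch is my own construction, not Sobel's; the names `make_life`, `judge_goodness`, and `good_life` are hypothetical), the standard Rice's-theorem move is to build, from any program and input, a 'life' whose goodness depends on whether that program halts, so an infallible judge of lives would double as a halting oracle:

```python
# A minimal sketch of the Rice's-theorem-style reduction. Assumes lives are
# programs, that `good_life` has the property 'good', and that a life which
# never finishes computing does not. None of these names come from Sobel.

def good_life():
    """Stand-in for some life stipulated to be good."""
    return "a good life"

def make_life(program, input_data):
    """Return a 'life' that behaves like good_life iff `program` halts on `input_data`."""
    def life():
        program(input_data)   # may run forever
        return good_life()
    return life

def judge_goodness(life):
    """Hypothetical infallible judge of lives; by Rice's theorem, no total,
    correct implementation of this can exist for a nontrivial property."""
    raise NotImplementedError

def halts(program, input_data):
    # If judge_goodness existed, this would decide the halting problem.
    return judge_goodness(make_life(program, input_data))
```

The same construction plugs into the bomb and conscious-computer examples below: any agent whose verdicts must track a nontrivial property of arbitrary programs inherits the undecidability.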

This strengthening illustrates a possible attack on the key premise 1: “the system must not err”. Obviously, if the ethical system may err, all the arguments collapse: it's fine for an amnesia agent to sometimes contradict itself, it's fine for a too-knowledgeable serial agent not to act the same, etc.

But our strengthening of 5.1.2 to Rice’s theorem would seem to work for all the proposed agents (‘the amnesia agent will both work under incomplete information and be confronted with uncomputable lives’), which is not an issue. What is an issue is that this would seem to work for any agent implementing any nontrivial ethical system - a utilitarian agent (‘you discover a planet-destroying bomb - which is triggered by the halting of a particular Turing machine…’) or many deontological agents (‘your computer claims to be a conscious being and you must not reboot it, because that would violate your deontological respect for personal autonomy and the right to live; you try to check its claims but…’).

An argument which proves too much is not a good argument, and it seems to me that we can construct situations for agents running any moral system where they may err, if only through extreme brute force skeptical claims like the Simulation Hypothesis. (I say ‘may’ because Sobel’s arguments above do not seem to show that various kinds of agents will err, which would be very difficult to prove.)

Given this, we can reject premise 1 and are now free to pick from any of the kinds of agents discussed, since now that they are free to err, they are also free to have incomplete information, not attempt to crack uncomputable cases, etc. (To quote Murphy pg 23, “It would imply the indefensibility of DF [desire-fulfillment] theory if, that is, their hypothetical desire situations incorporated a full information condition, which is the target of Sobel’s and Rosati’s criticisms. If a theory’s information condition were more modest, perhaps it would escape those criticisms.”)

2 The literature

Sobel’s paper has only occasionally been grappled with or defended; usually it is described as illustrating some serious problems with reflective theories, but not much more.

Support:

  • Loeb, Don 1995: “Full-information theories of individual good”, Social Theory and Practice 21: 1–30

    Loeb largely agrees with Sobel, but focuses his criticisms on more empirical grounds, like it taking lifetimes to learn enough, or concerns about judgements of goodness changing as additional information comes in (“restricting the scope of relevant information to the science of the subject’s day would lead to an implausibly relativized account of individual good”). The obvious response to the first ~18 and last ~10 pages of his paper is that, just like Sobel, he is anthropomorphizing with a vengeance and that problems for us are not problems for sufficiently powerful agents (the basic theory appeals to asymptotes and ideals), to which he replies:

    "It would be ironic for a theory that makes questions of value depend on a causal matter (and that is presented in the spirit of naturalism) to take refuge in imagining massive alterations in the laws of nature. But irony is no guarantee of incorrectness. Still, it is not at all clear that such massively impossible counterfactuals have determinate truth values. Counterfactuals about what people would want in causally impossible circumstances are still causal counterfactuals. As such, they depend on causal laws—in particular, laws of psychology. But the laws of psychology would have to be vastly different from the actual laws if they were to rule out all of the unwelcome influences I have pointed out. And since these are the very laws that support the counterfactuals, it is not at all clear that enough is left of them to insure that the counterfactuals have determinate truth values.40 [40: A fortiori, it is not clear that these counterfactuals would have truth values that are empirically determinable.]

    It is also not clear that the full-information approach would be plausible if it required that we imagine such wide-scale changes in the laws of psychology. We know too little to be confident of that. Perhaps my counterpart would no longer wish for me to shun the poison liquid in a world in which he would react no differently to yelling than to whispering, and in which one’s motivations would not be influenced by massive alterations in one’s cognitive capabilities alone. Without knowing how the laws of psychology would be altered, we are in no position to judge whether the approach maintains whatever plausibility it initially appeared to have."

    As a hardcore materialist, I do not buy this argument; the ‘laws of psychology’ are no laws at all, but rather one of many possibilities allowed by the laws of physics, and the counterfactuals are not impossible.

Criticism:

  • Campbell, Stephen Michael, 2006 M.A. thesis: “Phenomenal Well-being”; pg 40-end:

    Campbell describes a slightly more specific agent, where the lives are simply compared pair-wise and with a point system to break potential ties and intransitivity. Campbell seems to reject premise 1 too, in describing a flawed system (“…the ranking should be accurate, even if not perfectly precise”), but argues that this is acceptable since we do it in ordinary life and offers as a somewhat facetious example the difficulty of perfectly comparing ice cream flavors:

    Your memories of the different experiences might get corrupted. By the time you get to the end of the thirty-one flavors, perhaps you cannot remember what flavors 5 and 12 were like or even what you thought about them at the time. Or perhaps your memory was distorted at some point in the process. You can re-taste those flavors, but you cannot recapture the exact taste experience again (since, for one, you will now have more ice cream on your stomach), and we have no guarantee that the re-experience of a sample will not diverge in such a way as to affect your ranking.

    Campbell hopes agents will ultimately converge despite the roughness of judging, and most of his replies to Sobel/Rosati/Loeb depend on that or his own brand of anthropomorphizing the ideal system (eg. suggesting that an unappreciative system will, after experiencing countless lives, come to appreciate them - I’m reminded of the TvTropes Do Androids Dream?).
  • Beaulieu, 1997 MA thesis, "The Normative Authority of Our Fully Informed Judgements":

    Goes after Rosati’s arguments, contending that enough memory can serve to appreciate differing viewpoints, that changes in one’s desires with additional information are welcome, and that Rosati’s examples (intended to show full information to be incoherent) do not work. Most worth reading is chapter 3.
  • Anton Tupa, 2006 PhD thesis “Development and Defense of a Desire-satisfaction Conception of Well-being”

    Tupa argues that Rosati’s internalism criteria can be met by idealized/extrapolated versions of a person, so that objection does not refute desirism (pg 111–128). Discussing Sobel on pg 137, he writes something which I think is very insightful when applied to suggestions like Sobel’s ‘the ideal agent/system will go mad if it had perfect information’:

    I think that so long as the conditional fallacy [see "The Conditional Fallacy in Contemporary Philosophy"] has the form of “for all we know, x could be a consequent change, given your analysans, and if so, your analysis will yield counterintuitive results,” then a solution can be provided. I am optimistic here because although sometimes critics of ideal advisor accounts write as if there would be only one possible world in which one would have full information, and they then prognosticate doomsday-like scenarios, in reality (in some sense perhaps), there are many possible worlds in which one is fully informed, i.e. there are many A+ candidates. Of these many possible worlds in which one has full information, some will involve changes in one that will be problematic, but some will involve few significant changes in one or changes that are quite unproblematic.

    …Problems that can be solved by appeal to the concept of a personality include worries about the increased mental capacity and mental processing speed that would have to be the case in order for someone to have full information. To be sure, it is a little odd even thinking about people with what can only be described as super-minds. However, anyone’s personality, I say, is compatible with increased cognitive capacity and the like. Unless someone can show that some counterintuitive consequent change must occur in a world in which one is fully informed, the method of singling out the best possible world in which one is fully informed seems to have a great deal of promise…Thus far no one has come close to offering an argument that counterintuitive consequent changes must result in the nearest possible world in which one is fully informed…Later, I will examine whether full propositional information is adequate as an information set for the ideal advisor. While Rosati and Sobel are skeptical, I argue that full propositional information is far richer and more textured than they envision and may very well be sufficient to play the requisite role in the deliberation of the ideal advisor.

    Tupa’s replies to the previously mentioned claims and arguments often have this flavor up to pg 150, where he then rejects much of premise 5 and argues that the judging agent can make flawless assessments of a life without adopting the viewpoint of the life (based on ‘propositional knowledge’: “I have a hard time seeing how knowledge of what something is like is evaluative in any important sense”); like Campbell, he objects that Sobel’s demand for perfect judgement goes beyond even the most reliable ordinary daily judgement.

3 References & further reading

Works on the subject include:

16 comments

Good overview! "Against utilitarianism" is a bit misleading, though.

(Note to others: this research was paid for by The Singularity Institute due to its relevance to CEV.)

"Against utilitarianism" is a bit misleading, though.

IMO, it's accurate. Sobel says (pg 3) of the "standard consequentialist position" that it takes two steps: you need to judge a life, and then aggregate all the judgments in a morally acceptable manner. He says that he's puzzled that the second step receives "the lion's share" (pg 4) of the criticism of the standard consequentialist position, when he regards the first step as equally or more dubious ("But no comparable group of debates which challenge the adequacy of the first step in the SCP exists...I believe that the first step...is itself quite problematic").

If you can't even judge lives, then that takes out the average utilitarianisms (what are you averaging?), negative utilitarianisms, welfarist utilitarianisms... basically everything but the hedonism theories, and even that is questionable (can one be unable to judge one's own life and pleasures? If so, then hedonism too fails).

Alice and Bob live for a day. Alice spends the day reading a good book, Bob spends the day being beaten up by angry baboons. I judge Alice's life to be better than Bob's. If Omega asks me, "hey Steven, should I make an Alice or a Bob", I will choose Alice. It seems to me that I just did judge lives, so Sobel can't have proved that I can't judge lives. If I can't judge lives, what does it mean I should tell Omega? Surely it doesn't mean I should tell Omega to make Bob. Am I being unfairly simplistic here? I don't see how.

Am I being unfairly simplistic here? I don't see how.

I examine 2 Turing machines, one of which reads 'halt' and the other reads 'for all integers, check whether Goldbach's conjecture holds and halt when it doesn't'. If Omega asks me which one halts, I will choose the first one. It seems to me that I did just solve the halting problem, so Turing can't have proven it unsolvable. If I can't solve the halting problem, what does it mean I should tell Omega? That #2 halts? Am I being unfairly simplistic here? I don't see how.
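
For concreteness, here is a sketch (my own, purely illustrative) of the second machine: it enumerates even numbers and halts only when it finds a Goldbach counterexample, so knowing whether it halts is as hard as settling the conjecture:

```python
# Halts iff Goldbach's conjecture is false: searches even numbers >= 4 for one
# that is not the sum of two primes, and returns it if found.
from itertools import count

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_searcher():
    for n in count(4, 2):                      # 4, 6, 8, ...
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n                           # halts only on a counterexample
```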

If it's claimed that "you can't judge lives", it doesn't seem like the most natural reading is "there exists at least one theoretically possible comparison of lives that you can't judge, though you can judge some such comparisons and you may be able to judge all comparisons that actually turn up".

I think I object to your comment for more reasons than that but would need to think about how exactly to phrase them.

I am merely repeating what I pointed out in my essay.

I feel like you're reading my comments uncharitably, and would like to bow out of the discussion.

I see. I don't think of utilitarianism this way, but it might be common enough to call it the "standard consequentialist position." I'm not sure.

I agree. From my experience, utilitarianism typically sets the unit of measurement for utility at pleasure, preference, or happiness and not anything to do with life per se. I don't see how any of those measures require judging a life.

Experiences may contain otherwise-unobtainable information [‘revelations’]

Isn't this the Mary the color scientist fallacy?

Thanks, gwern, for this summary. I have a different way of criticizing Sobel's premise 1. I think he implicitly imposes a requirement of complete determinacy for the value (to the agent) of a life. But that is probably too strong.

A definition/theory/account shouldn't provide too much determinacy. For example: a definition of "baldness" should avoid, if at all possible, classifying one head as determinately "bald" and the next as determinately "not bald" when the difference in hair on those heads is minimal. Less trivially: a philosophical account of "sentience" need not be embarrassed if there are some cases (insects?) on which it cannot deliver a clear verdict. Maybe that's a feature of the account, not a bug (pardon the pun). Similarly, an account of "torekp's well-being" need not be rejected if there are some alternative life-courses it cannot definitively rank relative to each other. If, among the closest possible worlds in which me+ is well-informed about these life-courses, some me+s recommend life A and others recommend life B, it seems to me reasonable to posit that the two lives are incomparable.

Also, one should consider alternate epistemic routes to value-conclusions that are congruent with, but need not follow logically from, the informed-desire perspective. We might hypothesize specific causes for the changes in a person's desires with increasing information. I mean the usual suspects: fun, intimacy, knowledge, autonomy, etc., along with the psycho-physical characteristics of human beings that make us respond positively to these. If we develop theories along these lines with explanatory power, we may be able to kick away the ladder of our informed-self advisers. (ETA:) In other words, we directly consult the reduction base for facts about what our informed-selves would do; this might be simpler than constructing detailed hypothetical scenarios.

It seems worth it to distinguish explicitly between 1) consulting certain counterfactual versions of oneself to figure out what ethical theory to use (which is what I understand CEV to do), and 2) using the ethical theory that says to maximize quality of life as defined by the judgment of certain counterfactual versions of the one living that life.

I think it needs some editing at the moment. What is premise 0? How does 4.4 follow from what came before? Under 5.1, what are these lives the same as or different than?

Parts 1 and 2 of the argument both initially struck me as highly implausible. Was there some argumentation that you skipped wherein the authors tried to justify those points?

I think it needs some editing at the moment.

Yes, it turns out LessWrong Markdown doesn't let you number from 0... even when you hand-edit in the right HTML attribute, <ol start="0">, which meant all the numbers were off by one. I think I fixed them all.

Was there some argumentation that you skipped wherein the authors tried to justify those points?

As I said, I removed the examples to get at the logical structure.

Enoch (2005) argues that idealization is problematic for subjectivist theories:

The reading of the watch tracks the time—which is independent of it—only when all goes well, the perceptual impression tracks relative height—which is independent of this perception—only when all goes well. So there is reason to make sure—by idealizing—that all does go well. But had we taken the other Euthyphronic alternative regarding these matters things would have been very different. Had the time depended on the reading of my watch, had the reading of my watch made certain time-facts true, there would have been no reason (not this reason, anyway) to “idealize” my watch and see to it that the batteries are fully charged. In such a case, whatever the reading would be, that would be the right reading, because that this is the reading would make it right.

The natural rationale for idealization, the one exemplified by the time and relative-height examples, thus only applies to cases where the relevant procedure or response is thought of as tracking a truth independent of it. This does not necessarily rule out extensional equivalences between normative truths and our relevant responses. One may, for instance, hold a view that is an instance of “tracking internalism,” according to which, necessarily, one cannot have a (normative) reason without being motivated accordingly, not because motivations are part and parcel of (normative) reasons, but rather because our motivations necessarily track the independent truths about (normative) reasons. But typical idealizers do not think of their view in this way; they do not think of the relevant response as (necessarily) tracking an independent order of normative facts. As emphasized above, they think of the relevant response as constituting the relevant normative fact.

I'm not sure how relevant this objection is for CEV, though.

(Replaced "error", where it was used as a verb, with "err".)