"Against utilitarianism" is a bit misleading, though.
IMO, it's accurate. Sobel says (pg3) of the "standard consequentialist position" that it takes two steps: you need to judge a life, and then aggregate all the judgments in a morally acceptable manner. He says he is puzzled that the second step receives "the lion's share" (pg4) of the criticism of the standard consequentialist position, when he regards the first step as equally or more dubious ("But no comparable group of debates which challenge the adequacy of the first step in the SCP exists...I believe that the first step...is itself quite problematic").
If you can't even judge lives, then that takes out the average utilitarianisms (what are you averaging?), negative utilitarianisms, welfarist utilitarianisms... basically everything but the hedonism theories, and even that is questionable (can one be unable to judge one's own life and pleasures? If so, then hedonism too fails).
Alice and Bob live for a day. Alice spends the day reading a good book, Bob spends the day being beaten up by angry baboons. I judge Alice's life to be better than Bob's. If Omega asks me, "hey Steven, should I make an Alice or a Bob", I will choose Alice. It seems to me that I just did judge lives, so Sobel can't have proved that I can't judge lives. If I can't judge lives, what does it mean I should tell Omega? Surely it doesn't mean I should tell Omega to make Bob. Am I being unfairly simplistic here? I don't see how.
Am I being unfairly simplistic here? I don't see how.
I examine 2 Turing machines, one of which reads 'halt' and the other reads 'for all integers, check whether Goldbach's conjecture holds and halt when it doesn't'. If Omega asks me which one halts, I will choose the first one. It seems to me that I did just solve the Halting problem, so Turing can't have proven his theorem. If I can't solve the Halting problem, what does it mean I should tell Omega? That #2 halts? Am I being unfairly simplistic here? I don't see how.
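Concretely, the second machine in this analogy can be sketched in Python (illustrative only: a bounded search stands in for the unbounded one, since whether the real machine halts is exactly the open question):

```python
def goldbach_search(limit):
    """Search even numbers up to `limit` for a Goldbach counterexample.

    Returns the first even n >= 4 with no decomposition into two primes
    (the 'machine' would halt here), or None if every even number up to
    `limit` checks out (the unbounded machine would keep searching).
    """
    # Sieve of Eratosthenes to get the primes up to `limit`.
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    primes = {i for i, is_p in enumerate(sieve) if is_p}

    for n in range(4, limit + 1, 2):
        if not any((n - p) in primes for p in primes):
            return n  # a counterexample: the machine halts
    return None       # no counterexample found up to `limit`
```

Running `goldbach_search` on any bound checked to date returns None, which is precisely why no one knows whether the unbounded version halts.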
If it's claimed that "you can't judge lives", it doesn't seem like the most natural reading is "there exists at least one theoretically possible comparison of lives that you can't judge, though you can judge some such comparisons and you may be able to judge all comparisons that actually turn up".
I think I object to your comment for more reasons than that but would need to think about how exactly to phrase them.
I see. I don't think of utilitarianism this way, but it might be common enough to call it the "standard consequentialist position." I'm not sure.
I agree. From my experience, utilitarianism typically sets the unit of measurement for utility at pleasure, preference, or happiness and not anything to do with life per se. I don't see how any of those measures require judging a life.
Experiences may contain otherwise-unobtainable information [‘revelations’]
Isn't this the Mary the color scientist fallacy?
Thanks, gwern, for this summary. I have a different way of criticizing Sobel's premise 1. I think he implicitly imposes a requirement of complete determinacy for the value (to the agent) of a life. But that is probably too strong.
A definition/theory/account shouldn't provide too much determinacy. For example: a definition of "baldness" should avoid, if at all possible, classifying one head as determinately "bald" and the next as determinately "not bald" when the difference in hair on those heads is minimal. Less trivially: a philosophical account of "sentience" need not be embarrassed if there are some cases (insects?) on which it cannot deliver a clear verdict. Maybe that's a feature of the account, not a bug (pardon the pun). Similarly, an account of "torekp's well-being" need not be rejected if there are some alternative life-courses it cannot definitively rank relative to each other. If, among the closest possible worlds in which me+ is well-informed about these life-courses, some me+s recommend life A and others recommend life B, it seems to me reasonable to posit that the two lives are incomparable.
Also, one should consider alternate epistemic routes to value-conclusions that are congruent with, but need not follow logically from, the informed-desire perspective. We might hypothesize specific causes for the changes in a person's desires with increasing information. I mean the usual suspects: fun, intimacy, knowledge, autonomy, etc., along with the psycho-physical characteristics of human beings that make us respond positively to these. If we develop theories along these lines with explanatory power, we may be able to kick away the ladder of our informed-self advisers. (ETA:) In other words, we directly consult the reduction base for facts about what our informed-selves would do; this might be simpler than constructing detailed hypothetical scenarios.
It seems worth distinguishing explicitly between 1) consulting certain counterfactual versions of oneself to figure out what ethical theory to use (which is what I understand CEV to do), and 2) using the ethical theory that says to maximize quality of life as defined by the judgment of certain counterfactual versions of the person living that life.
I think it needs some editing at the moment. What is premise 0? How does 4.4 follow from what came before? Under 5.1, what are these lives the same as or different than?
Parts 1 and 2 of the argument both initially struck me as highly implausible. Was there some argumentation that you skipped wherein the authors tried to justify those points?
I think it needs some editing at the moment.
Yes, it turns out LessWrong Markdown doesn't let you number from 0... even when you hand-edit in the right HTML attribute, <ol start="0">, which meant all the numbers were off by one. I think I fixed them all.
Was there some argumentation that you skipped wherein the authors tried to justify those points?
As I said, I removed the examples to get at the logical structure.
Enoch (2005) argues that idealization is problematic for subjectivist theories:
The reading of the watch tracks the time—which is independent of it—only when all goes well, the perceptual impression tracks relative height—which is independent of this perception—only when all goes well. So there is reason to make sure—by idealizing—that all does go well. But had we taken the other Euthyphronic alternative regarding these matters things would have been very different. Had the time depended on the reading of my watch, had the reading of my watch made certain time-facts true, there would have been no reason (not this reason, anyway) to “idealize” my watch and see to it that the batteries are fully charged. In such a case, whatever the reading would be, that would be the right reading, because that this is the reading would make it right.
The natural rationale for idealization, the one exemplified by the time and relative-height examples, thus only applies to cases where the relevant procedure or response is thought of as tracking a truth independent of it. This does not necessarily rule out extensional equivalences between normative truths and our relevant responses. One may, for instance, hold a view that is an instance of “tracking internalism,” according to which, necessarily, one cannot have a (normative) reason without being motivated accordingly, not because motivations are part and parcel of (normative) reasons, but rather because our motivations necessarily track the independent truths about (normative) reasons. But typical idealizers do not think of their view in this way; they do not think of the relevant response as (necessarily) tracking an independent order of normative facts. As emphasized above, they think of the relevant response as constituting the relevant normative fact.
I'm not sure how relevant this objection is for CEV, though.
Luke tasked me with researching the following question:
The paper in question is David Sobel’s 1994 paper “Full Information Accounts of Well-Being” (Ethics 104, no. 4: 784–810) (his 1999 paper, “Do the desires of rational agents converge?”, is directed against a different kind of convergence and won’t be discussed here).
The starting point is Brandt’s 1979 book where he describes his version of a utilitarianism in which utility is the degree of satisfaction of the desires of one’s ideal ‘fully informed’ self, and Sobel also refers to the 1986 Railton apologetic. (LWers will note that this kind of utilitarianism sounds very similar to CEV and hence, any criticism of the former may be a valid criticism of the latter.) I’ll steal entirely the opening to Mark C Murphy’s 1999 paper, “The Simple Desire-Fulfillment Theory” (rejecting any hypotheticals or counterfactuals in desire utilitarianism), since he covers all the bases (for even broader background, see the Tanner Lecture “The Status of Well-Being”):
1 Overview
Emphasis added; Sobel pursues line of objection #1.
1.1 The argument
I will try to reconstruct the argument in something more closely approximating propositional logic, so it is easier to classify any criticism of Sobel by what premise or inference it attacks. The following is based on my reading of pg 796–797 and 801–808; I omit all the examples and some of the weaker tangential arguments. (For example, the suggestion that the ideal moral system may go insane from the difficulty of its choices, or that it will despise us for being so pathetic and wish us dead (pg807), which are obvious anthropomorphisms.)
If the agent does not live the possible life:
If the agent does live the possible life, it is either a ‘serial’ agent or an ‘amnesia’ agent
Serial; the agent either lives the same life or a different life:
The same life:
A different life:
Amnesia:
Rebuttals rejecting 5.2.4:
The judgements can be weighed into a final correct judgement by an unspecified algorithm
They will not differ, as the fully informed agent at any period will agree with itself at all other periods
The ideal moral system will not judge lives
1.1.1 Analysis
Broken down like this, we can see a number of ways to strengthen or attack it. For example, we can strengthen the attack on serial agents who lead different lives (5.1.2) by defining agents and lives as Turing machines and then invoking Rice’s theorem (the generalized Halting theorem) - obviously ‘goodness of life’ is a nontrivial predicate and so there will be Turing machines for which the question is uncomputable.
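The reduction behind that strengthening can be sketched in Python (purely illustrative; `judges_good` is a hypothetical decider for the nontrivial predicate “this life is good”, not anything from Sobel's paper):

```python
def make_life(machine, inp, good_life):
    """Build a 'life' that first simulates `machine` on `inp`, then
    behaves exactly like `good_life`.

    If `machine` never halts on `inp`, the constructed life never
    reaches the good part; if it halts, the life is good iff
    `good_life` is. So a decider for 'is this life good?' would
    double as a decider for 'does this machine halt?'.
    """
    def life():
        machine(inp)        # runs forever iff `machine` loops on `inp`
        return good_life()
    return life

# If a total `judges_good` existed, then for any (machine, input):
#     judges_good(make_life(machine, input, known_good_life))
# would answer the Halting problem for that pair - contradicting
# Turing's undecidability result. This is the standard Rice's-theorem
# construction applied to 'goodness of life'.
```

Nothing here depends on the specific predicate; any nontrivial semantic property of the 'life' works the same way, which is the point made below about the argument proving too much.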
This strengthening illustrates a possible attack on the key premise 1: “the system must not err”. Obviously, if the ethical system may err, all the arguments collapse: it’s fine for an amnesia agent to sometimes contradict itself, it’s fine for a too-knowledgeable serial agent not to act the same, etc.
But our strengthening of 5.1.2 to Rice’s theorem would seem to work for all the proposed agents (‘the amnesia agent will both work under incomplete information and be confronted with uncomputable lives’), which is not an issue. What is an issue is that this would seem to work for any agent implementing any nontrivial ethical system - a utilitarian agent (‘you discover a planet-destroying bomb - which is triggered by the halting of a particular Turing machine…’) or many deontological agents (‘your computer claims to be a conscious being and you must not reboot it, because that would violate your deontological respect for personal autonomy and the right to live; you try to check its claims but…’).
An argument which proves too much is not a good argument, and it seems to me that we can construct situations for agents running any moral system where they may err, if only through extreme brute-force skeptical claims like the Simulation Hypothesis. (I say ‘may’ because Sobel’s arguments above do not seem to show that the various kinds of agents will err, which would be very difficult to prove.)
Given this, we can reject premise 1 and are now free to pick from any of the kinds of agents discussed, since now that they are free to err, they are also free to have incomplete information, not attempt to crack uncomputable cases, etc. (To quote Murphy pg 23, “It would imply the indefensibility of DF [desire-fulfillment] theory if, that is, their hypothetical desire situations incorporated a full information condition, which is the target of Sobel’s and Rosati’s criticisms. If a theory’s information condition were more modest, perhaps it would escape those criticisms.”)
2 The literature
Sobel’s paper has only occasionally been grappled with or defended; usually it is described as illustrating some serious problems with reflective theories, but not much more.
Support:
Loeb, Don 1995: “Full-information theories of individual good”, Social Theory and Practice 21: 1–30
Loeb largely agrees with Sobel, but focuses his criticisms on more empirical grounds, like the fact that it would take lifetimes to learn enough, or concerns about judgements of goodness changing as additional information comes in (“restricting the scope of relevant information to the science of the subject’s day would lead to an implausibly relativized account of individual good”). The obvious response to the first ~18 and last ~10 pages of his paper is that, just like Sobel, he is anthropomorphizing with a vengeance, and that problems for us are not problems for sufficiently powerful agents (the basic theory appeals to asymptotes and ideals), to which he replies:
As a hardcore materialist, I do not buy this argument; the ‘laws of psychology’ are no laws at all, but rather one of many possibilities allowed by the laws of physics, and the counterfactuals are not impossible.
Criticism:
Campbell, Stephen Michael, 2006 M.A. thesis: “Phenomenal Well-being”; pg 40-end:
Campbell describes a slightly more specific agent, in which lives are simply compared pairwise, with a point system to break potential ties and intransitivities. Campbell seems to reject premise 1 too, in describing a flawed system (“…the ranking should be accurate, even if not perfectly precise”), but argues that this is acceptable since we do it in ordinary life, and offers as a somewhat facetious example the difficulty of perfectly comparing ice cream flavors:
Campbell hopes agents will ultimately converge despite the roughness of judging, and most of his replies to Sobel/Rosati/Loeb depend on that or on his own brand of anthropomorphizing the ideal system (eg. suggesting that an unappreciative system will, after experiencing countless lives, come to appreciate them - I’m reminded of the TVTropes Do Androids Dream?).

Beaulieu, 1997 MA thesis: “The Normative Authority of Our Fully Informed Judgements”:
Goes after Rosati’s arguments, arguing that enough memory can serve to appreciate differing viewpoints, that changes in one’s desires with additional information are welcome, and that Rosati’s examples (showing full information to be incoherent) do not work. Most worth reading is chapter 3.

Anton Tupa, 2006 PhD thesis: “Development and Defense of a Desire-satisfaction Conception of Well-being”:
Tupa argues that Rosati’s internalism criteria can be met by idealized/extrapolated versions of a person, and so do not refute desirism (pg 111–128). Discussing Sobel on pg 137, he writes something I think is very insightful when applied to suggestions like Sobel’s ‘the ideal agent/system will go mad if it had perfect information’:
Tupa’s replies to the previously mentioned claims and arguments often have this flavor up to pg 150, where he rejects much of premise 5 and argues that the judging agent can make flawless assessments of a life without adopting the viewpoint of the life (based on ‘propositional knowledge’: “I have a hard time seeing how knowledge of what something is like is evaluative in any important sense”). Like Campbell, he observes that Sobel’s demand for perfect judgement goes beyond even the most reliable ordinary daily judgement.
3 References & further reading
Works on the subject include: