The more I think about it, the more I'm tempted to just bite the bullet and accept that my "empirically observed utility function" (to the degree that such a thing even makes sense) may be bounded, finite, with a lot of its variation spent measuring relatively local things like the prosaic well being of myself and my loved ones, so that there just isn't much left over to cover anyone outside my monkey sphere except via a generic virtue-ethical term for "being a good citizen n'stuff".
A first-order approximation might be mathematically modeled by taking all the various utilities having to do with "weird infinite utilities", normalizing all those scenarios by "my ability to affect those outcomes" (so my intrinsic concern for things decreased when I "gave up" on affecting them... which seems broken but also sorta seems like how things might actually work) and then running what's left through a sigmoid function so their impact on my happiness and behavior is finite and marginal... claiming maybe 1% of my consciously strategic planning time and resource expenditures under normal circumstances (a rough sketch in code below).
Under this model, the real meat of my utility func...
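For concreteness, here is a minimal sketch of the kind of first-order approximation I mean; the scenario names, influence numbers, and the choice of a logistic sigmoid are all invented for illustration, not claims about my actual values:

```python
import math

def sigmoid(x):
    """Squash an unbounded quantity into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def bounded_weird_term(raw_utilities, influence):
    """Discount each 'weird infinite utility' scenario by my ability to
    affect it, then squash the total so its claim on my happiness and
    behavior stays finite and marginal."""
    discounted = sum(u * influence.get(name, 0.0)
                     for name, u in raw_utilities.items())
    return sigmoid(discounted)

# Invented numbers, purely to show the shape of the calculation:
weird = {"lab universes": 1e30, "far-future galaxies": 1e25}
ability = {"lab universes": 1e-32, "far-future galaxies": 1e-28}
print(bounded_weird_term(weird, ability))  # bounded in (0, 1), however large the raw utilities get
```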
ETA: This is a meta comment about some aspects of some comments on this post and what I perceive to be problems with the sort of communication/thinking that leads to the continued existence of those aspects. This comment is not meant to be taken as a critique of the original post.
ETA2: This comment lacks enough concreteness to act as a serious consideration in favor of one policy over another. Please disregard it as a suggestion for how LW should normatively respond to something. Instead, one might consider whether one might personally benefit from enacting a policy I might be suggesting, on an individual basis.
Why are people on Less Wrong still talking about 'their' 'values' using deviations from a model that assumes they have a 'utility function'? It's not enough to explicitly believe and disclaim that this is obviously an incorrect model; at some point you have to actually stop using the model and adopt something else. People are godshatter, they are incoherent, they are inconsistent, they are an abstraction, they are confused about morality, their revealed preferences aren't their preferences, their revealed preferences aren't even their revealed preferences, their verbally express...
Don't you think people need to go through an "ah ha, there is such a thing as rationality, and it involves Bayesian updating and expected utility maximization" phase before moving on to "whoops, actually we don't really know what rationality is and humans don't seem to have utility functions"? I don't see how you can get people to stop talking about human utility functions unless you close LW off from newcomers.
It tells me that I ought to do things that I don't want to do on any level other than some highly abstract intellectual one. I don't even get the smallest bit of satisfaction out of it, just depression.
If this is really having that effect on you, why not just focus on things other than abstract large-scale ethical dilemmas, e.g. education, career, relationships? Progress on those fronts is likely to make you happier, and if you want to come back to mind-bending ethical conundrums you'll then be able to do so in a more productive and pleasant way. Trying to do something you're depressed and conflicted about is likely to be ineffective or backfire.
I haven't studied all the discussions on the parliamentary model, but I'm finding it hard to understand what the implications are, and hard to judge how close to right it is. Maybe it would be enlightening if some of you who do understand the model took a shot at answering (or roughly approximating the answers to) some practice problems? I'm sure some of these are underspecified and anyone who wants to answer them should feel free to fill in details. Also, if it matters, feel free to answer as if I asked about mixed motivations rather than moral uncertainty:
I assign 50% probability to egoism and 50% to utilitarianism, and am going along splitting my resources about evenly between those two. Suddenly and completely unexpectedly, Omega shows up and cuts down my ability to affect my own happiness by a factor of one hundred trillion. Do I keep going along splitting my resources about evenly between egoism and utilitarianism? (See the sketch after these problems.)
I'm a Benthamite utilitarian but uncertain about the relative values of pleasure (measured in hedons, with a hedon calibrated as e.g. me eating a bowl of ice cream) and pain (measured in dolors, with a dolor calibrated as e.g. me slapping myself in the face). My
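Regarding the first practice problem above, here is a toy contrast between two possible aggregation rules; both rules and all the numbers are my own stipulations for illustration, not anything drawn from the parliamentary-model discussions:

```python
# Credence split between the two moral theories.
credence = {"egoism": 0.5, "utilitarianism": 0.5}

# How much each theory thinks a unit of resources can accomplish,
# before and after Omega's intervention (arbitrary units).
stakes_before = {"egoism": 1.0, "utilitarianism": 1.0}
stakes_after  = {"egoism": 1.0 / 1e14, "utilitarianism": 1.0}

def stake_weighted_split(credence, stakes):
    """Allocate resources in proportion to credence * stakes
    (a straight expected-value mix: winner takes nearly all)."""
    weights = {t: credence[t] * stakes[t] for t in credence}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

def credence_proportional_split(credence):
    """Allocate resources in proportion to credence alone
    (one crude reading of the parliamentary model)."""
    total = sum(credence.values())
    return {t: c / total for t, c in credence.items()}

print(stake_weighted_split(credence, stakes_before))  # {'egoism': 0.5, 'utilitarianism': 0.5}
print(stake_weighted_split(credence, stakes_after))   # egoism gets ~1e-14 of resources
print(credence_proportional_split(credence))          # egoism still gets 0.5
```

Which of these (if either) the parliamentary model actually recommends is exactly the kind of thing I'd like the practice problems to pin down.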
Nope. Humans do have utility functions - in this sense:
A trivial sense, that merely labels what an agent does with 1 and what it doesn't with 0: the Texas Sharpshooter Utility Function. A "utility function" that can only be calculated -- even by the agent itself -- in hindsight is not a utility function. The agent is not using it to make choices and no observer can use it to make predictions about the agent.
Curiously, in what appears to be a more recent version of the paper, the TSUF is not included.
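To make the triviality concrete, here is a sketch of the construction being criticized; the function and variable names are mine:

```python
def texas_sharpshooter_utility(action, action_actually_taken):
    """Assign 1 to whatever the agent actually did and 0 to everything else."""
    return 1 if action == action_actually_taken else 0

# The catch: evaluating this requires already knowing action_actually_taken,
# so it can only be computed in hindsight. It cannot guide the agent's choice,
# and it gives an observer no way to predict the agent's next choice.
```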
and that's a far better investment than any other philanthropic effort that you know of, so you should fund course of action X even if you think that model A is probably wrong.
This stands out as problematic, since there's no plausible consequentialist argument for this from a steel-manned Person 1. Person 1 is both arguing for the total dominance of total utilitarian considerations in Person 2's decision-making, and separately presenting a bogus argument about what total utilitarianism would recommend. Jennifer's comment addresses the first prong, while...
I think there are two things going on here:
If one accepts any kind of multiverse theory, even just Level I, then an infinite number of sentient organisms already exist, and it seems that we cannot care about each individual equally without running into serious problems. I previously suggested that we discount each individual using something like the length of its address in the multiverse.
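One way this might be cashed out (my formalization, not necessarily the one intended) is to weight each individual by two to the minus the length of its address; if the addresses form a prefix-free code, the total weight stays finite even over infinitely many individuals:

```latex
% Weight individual $x$ by the length $\ell(x)$ of its multiverse address
% (the exponential form and the prefix-free assumption are mine):
\[
  w(x) = 2^{-\ell(x)}, \qquad
  \sum_{x} w(x) \;=\; \sum_{x} 2^{-\ell(x)} \;\le\; 1
  \quad \text{(Kraft's inequality),}
\]
% so aggregate concern remains bounded even with infinitely many individuals.
```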
Replies to questions:
Remarks:
The easiest answer is that nobody is seriously anything even remotely approaching a utilitarian. Try writing down your utility function in even some very limited domain, and you'll see that for yourself.
Utilitarianism is a mathematical model that has very convenient mathematical properties, and has enough tweakable numbers available that you can use it to analyze some very simple situations (see the entire discipline of economics). It breaks very quickly when you push it a little.
And seriously, the exercise of writing down a point system of what is worth how many utility points to you is really eye-opening; I wrote a post on Less Wrong about it ages ago if you're interested.
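For concreteness, the exercise might start something like this in a deliberately tiny domain (all point values invented); the eye-opening part is how quickly the numbers start to feel arbitrary and inconsistent:

```python
# A toy "point system" over commuting decisions only.
points = {
    "arrive_on_time": 10,
    "minute_spent_commuting": -0.2,
    "dollar_spent": -1,
    "minute_spent_walking_outside": 0.5,
}

def commute_utility(on_time, commute_minutes, cost_dollars, walking_minutes):
    return (points["arrive_on_time"] * on_time
            + points["minute_spent_commuting"] * commute_minutes
            + points["dollar_spent"] * cost_dollars
            + points["minute_spent_walking_outside"] * walking_minutes)

# Does the bus (30 min, $2, 5 min walking) really lose to walking (50 min, free)?
print(commute_utility(1, 30, 2.0, 5))    # 4.5
print(commute_utility(1, 50, 0.0, 50))   # 25.0
```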
Here's a link to Dawrst's main page. I find this article on vegetarianism to be particularly interesting (though perhaps in a different way than Dawrst intended), and it's perhaps one of the few 'traditional' utilitarian arguments that has contributed to me changing how I thought about day-to-day decisions. I haven't re-evaluated that article since I read it 6 months ago, though.
Surely you'd assign at least a 10^-5 chance that it's on the mark? More confidence than this would seem to indicate overconfidence bias; after all, plenty of smart people believe in model A and it can't be that likely that they're all wrong.
It seems that if you accept this, you really ought to go accept Pascal's Wager as well, since a lot of smart people believe in God.
It seems like an extraordinary leap to accept that the original numbers are within 5 orders of magnitude, unless you've actually been presented with strong evidence. Humans naturally suc...
Upon further thought, the real reason that I reject Person 1's argument is that everything should add up to normality, whereas Person 1's conclusion is ridiculous at face value, and not in a "that seems like a paradox" way, but more of a "who is this lunatic talking to me" way.
As I understand it, the scenario is that you're hearing a complicated argument, and you don't fully grok or internalize it. As advised by "Making Your Explicit Reasoning Trustworthy", you have decided not to believe it fully.
The problem comes in the second argument - should you take the advice of the person (or meme) that you at least somewhat mistrust in "correcting" for your mistrust? As you point out, if the person (or the meme) is self-serving, then the original proposal and the correction procedure will fit together neatly to cause...
- Is the suggestion that one's utilitarian efforts should be primarily focused on the possibility of lab universes an example of "explicit reasoning gone nuts?"
I think so, for side reasons I go into in another comment reply: basically, in a situation with a ton of uncertainty and some evidence for the existence of a class of currently unknown but potentially extremely important things, one should "go meta" and put effort/resources into finding out how to track down such things, reason about such things, and reason about the known u...
One may not share Dawrst's intuition that pain would outweigh happiness in such universes, but regardless, the hypothetical of lab universes raises the possibility that all of the philanthropy that one engages in with a view toward utility maximizing should be focused on creating, or preventing the creation of, infinitely many lab universes (according to whether one views the expected value of such a universe as positive or negative).
I haven't even finished reading this post yet, but it's worth making explicit (because of the obvious conne...
Related to: Confidence levels inside and outside an argument, Making your explicit reasoning trustworthy
A mode of reasoning that sometimes comes up in discussion of existential risk is the following.
Person 1: According to model A (e.g. some Fermi calculation with probabilities coming from certain reference classes), pursuing course of action X will reduce existential risk by 10^-5; existential risk has an opportunity cost of 10^25 DALYs (*), therefore model A says the expected value of pursuing course of action X is 10^20 DALYs. Since course of action X requires 10^9 dollars, the number of DALYs saved per dollar invested in course of action X is 10^11. Hence course of action X is 10^10 times as cost-effective as the most cost-effective health interventions in the developing world.
Person 2: I reject model A; I think that appropriate probabilities involved in the Fermi calculation may be much smaller than model A claims; I think that model A fails to incorporate many relevant hypotheticals which would drag the probability down still further.
Person 1: Sure, it may be that model A is totally wrong, but there's nothing obviously very wrong with it. Surely you'd assign at least a 10^-5 chance that it's on the mark? More confidence than this would seem to indicate overconfidence bias; after all, plenty of smart people believe in model A and it can't be that likely that they're all wrong. So unless you think that the side-effects of pursuing course of action X are systematically negative, even your own implicit model gives a figure of at least 10^5 DALYs saved per dollar, and that's a far better investment than any other philanthropic effort that you know of, so you should fund course of action X even if you think that model A is probably wrong.
(*) As Jonathan Graehl mentions, DALY stands for Disability-adjusted life year.
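For reference, here is Person 1's arithmetic spelled out; all of the inputs are the ones stated in the dialogue above rather than independent estimates, and the last step is the probability-discounted move from Person 1's second reply:

```python
# Person 1's Fermi calculation, using only the numbers stated above.
risk_reduction        = 1e-5    # reduction in existential risk from course X
opportunity_cost_daly = 1e25    # DALYs at stake in an existential catastrophe
cost_dollars          = 1e9     # cost of course of action X

expected_dalys   = risk_reduction * opportunity_cost_daly   # 1e20 DALYs
dalys_per_dollar = expected_dalys / cost_dollars            # 1e11 DALYs per dollar

# Person 1's follow-up: even granting model A only a 1e-5 chance of being right,
# the discounted figure is still enormous.
credence_in_model_A = 1e-5
discounted = credence_in_model_A * dalys_per_dollar         # 1e6 DALYs per dollar

print(expected_dalys, dalys_per_dollar, discounted)
```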
I feel very uncomfortable with the sort of argument that Person 1 advances above. My best attempt at a summary of where my discomfort comes from is that it seems like one could make the same sort of argument to advance any number of courses of action, many of which would be at odds with one another.
I have difficulty parsing where my discomfort comes from in more detail. There may be underlying game-theoretic considerations; there may be underlying considerations based on the anthropic principle; it could be that the probability that one ascribes to model A being correct should be much lower than 10^-5 on account of humans' poor ability to construct accurate models, and that I shouldn't take it too seriously when some people subscribe to them; it could be that I'm irrationally influenced by social pressures against accepting unusual arguments that most people wouldn't feel comfortable accepting; it could be that in such extreme situations I value certainty over utility maximization; it could be some combination of all of these. I'm not sure how to disentangle the relevant issues in my mind.
One case study that I think may be useful to consider in juxtaposition with the above is as follows. In Creating Infinite Suffering: Lab Universes, Alan Dawrst says
One may not share Dawrst's intuition that pain would outweigh happiness in such universes, but regardless, the hypothetical of lab universes raises the possibility that all of the philanthropy that one engages in with a view toward utility maximizing should be focused on creating, or preventing the creation of, infinitely many lab universes (according to whether one views the expected value of such a universe as positive or negative). This example is in the spirit of Pascal's wager, but I prefer it because the premises are less metaphysically dubious.
One can argue that if one is willing to accept the argument given by Person 1 above, one should be willing to accept the argument that one should devote all of one's resources to studying and working toward or against lab universes.
Here, various attempted counterarguments seem uncompelling:
Counterargument #1: The issue here is with the infinite; we should ignore infinite ethics on the grounds that it's beyond the range of human comprehension and focus on finite ethics.
Response: The issue here doesn't seem to be with infinities; one can replace "infinitely many lab universes" with "3^^^3 lab universes" (or a sufficiently large number) and be faced with essentially the same conundrum.
Counterargument #2: The hypothetical upside of a lab universe perfectly cancels out the hypothetical downside of such a universe, so we can treat lab universes as having expected value zero.
Response: If this is true it's certainly not obviously true; there are physical constraints on the sorts of lab universes that could arise, and it's probably not the case that for every universe there's an equal and opposite universe. Moreover, it's not as though we have no means of investigating the expected utility of a lab universe. We do have our own universe as a model: we can contemplate whether it has aggregate positive or negative utility, and refine this understanding by researching fundamental physics, hypothesizing the variation in initial conditions and physical laws among lab universes, and attempting to extrapolate what the utility/disutility of an average such universe would be.
Counterargument #3: Even if one's focus should be on lab universes, such a focus reduces to a focus on creating a Friendly AI, since such an entity would be much better than us at reasoning about whether or not lab universes are a good thing and how to go about affecting their creation.
Response: Here too, if this is true it's not obvious. Even if one succeeds in creating an AGI that's sympathetic to human values, such an AGI may not subscribe to utilitarianism; after all, many humans don't, and it's not clear that this is because their volitions have not been coherently extrapolated. Maybe some humans have volitions which coherently extrapolate to being heavily utilitarian whereas others don't. If one is in the latter category, one may do better to focus on lab universes than on FAI (for example, if one believes that lab universes would have average negative utility, one might work to increase existential risk so as to avert the possibility that a nonutilitarian FAI creates infinitely many universes in a lab because some people find it cool).
Counterargument #4: The universes so created would be parallel universes and parallel copies of a given organism should be considered equivalent to a single such organism, thus their total utility is finite and the expected utility of creating a lab universe is smaller than the expected utility in our own universe.
Response: Regardless of whether one considers parallel copies of a given organism equivalent to a single organism, there's some nonzero chance that the universes created would diverge in a huge number of ways; this could make the expected value of the creation of universes arbitrarily large, depending on how the probability that one assigns to the creation of n essentially distinct universes varies with n (this is partially an empirical/mathematical question; I'm not claiming that the answer goes one way or the other).
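To make the dependence on n explicit: under the simplifying assumption (mine, made only for illustration) that each essentially distinct universe contributes a fixed value, the expected value turns entirely on the tail of the distribution one puts on the number of distinct universes:

```latex
% Let $p_n$ be the probability that the created universes diverge into $n$
% essentially distinct ones, each contributing value $v$ (a linearity
% assumption made only for illustration). Then
\[
  \mathbb{E}[V] \;=\; v \sum_{n \ge 1} n \, p_n ,
\]
% which is finite when $p_n$ falls off faster than $1/n^{2}$ (e.g. geometrically)
% and infinite when $p_n$ falls off like $1/n^{2}$ or more slowly, so the
% answer really does hinge on how $p_n$ varies with $n$.
```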
Counterargument #5: The statement "creating infinitely many universes would be infinitely bad" is misleading; as humans we experience diminishing marginal utility with respect to helping n sentient beings as n varies, and this is not exclusively due to scope insensitivity; rather, the concavity of the function at least partially reflects terminal values.
Response: Even if one decides that this is true, one still faces the question of how quickly the diminishing marginal utility sets in, and any choice here seems somewhat arbitrary, so this line of reasoning seems unsatisfactory. Depending on the choice that one makes, one may reject Person 1's argument on the grounds that after a certain point one just doesn't care very much about helping additional people.
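To illustrate the arbitrariness, here is one family of bounded utility functions; the functional form and the scale parameter are my choices, not anything implied by the counterargument:

```latex
% $n$ = number of sentient beings helped; $k$ sets how quickly diminishing
% returns kick in.
\[
  U(n) \;=\; U_{\max}\!\left(1 - e^{-n/k}\right).
\]
% Every choice of $k$ gives the same concave, bounded shape, but whether
% helping the $10^{9}$th being still matters much depends entirely on
% whether $k \approx 10^{6}$ or $k \approx 10^{12}$, and nothing obvious
% privileges one value of $k$ over another.
```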
I'll end with a couple of questions for Less Wrong:
1. Is the suggestion that one's utilitarian efforts should be primarily focused on the possibility of lab universes an example of "explicit reasoning gone nuts?" (cf. Anna's post Making your explicit reasoning trustworthy).
2. If so, is the argument advanced by Person 1 above also an example of "explicit reasoning gone nuts?" If the two cases are different then why?
3. If one rejects one or both of the argument by Person 1 and the argument that utilitarian efforts should be focused around lab universes, how does one reconcile this with the idea that one should assign some probability to the notion that one's model is wrong (or that somebody else's model is right)?