The LW community seems to assume, by default, that "unbounded, linear utility functions are reasonable."  That is, if you value the existence of 1 swan at 1.5 utilons, then 10 swans should be worth 15, etc.

In his post on scope insensitivity, Yudkowsky argues that the nonlinearity of personal utility functions is a logical fallacy.

However, unbounded and linearly increasing utility functions lead to conundrums such as Pascal's Mugging.  A recent discussion topic on Pascal's Mugging suggests ignoring probabilities that are too small.  Such an extreme measure is not necessary if tamer utility functions are used: one imagines a typical personal utility function to be bounded and nonlinear.
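To see how a bounded utility function defuses the mugging, here is a minimal sketch; the saturating form u(n) = n/(n + SCALE) and every constant in it are illustrative assumptions of mine, not anything prescribed by decision theory:

```python
# Toy comparison: expected utility of a Pascal's-Mugging-style offer under a
# linear utility function versus a bounded (saturating) one.  The functional
# form and every constant here are illustrative assumptions only.

U_MAX = 1.0    # ceiling of the bounded utility function
SCALE = 1e9    # lives at which the bounded function reaches half its ceiling

def linear_utility(lives):
    return lives                              # one utilon per life, unbounded

def bounded_utility(lives):
    return U_MAX * lives / (lives + SCALE)    # saturates at U_MAX

p = 1e-30       # the mugger's (tiny) claimed probability
payoff = 1e100  # the mugger's (enormous) promised number of lives

print(p * linear_utility(payoff))    # 1e70: dominates any finite cost
print(p * bounded_utility(payoff))   # ~1e-30: negligible, the mugging dissolves
```

Under the linear utility function the mugger's offer has astronomical expected value and swamps any finite cost; under the bounded one the expected value is capped by p times the utility ceiling, so no special probability cutoff is needed.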

In that recent discussion topic, V_V and I questioned the adoption of such an unbounded, linear utility function.  I would argue that nonlinearity of utility functions is not a logical fallacy.

To make my case clear, I will clarify my personal interpretation of utilitarianism.  Utility functions are mathematical constructs that can be used to model individual or group decision-making.  However, it is unrealistic to suppose that every individual actually has a utility function, or even a preference ordering; at best, one could find a utility function which approximates the behavior of the individual.  This is confirmed by studies demonstrating the inconsistency of human preferences.  The decisions made by coordinated groups (e.g., corporate partners, citizens in a democracy, or the entire community of effective altruists) could also be more or less well-approximated by a utility function; presumably, the accuracy of the utility-function model of decision-making depends on the cohesion of the group.

Utilitarianism, as formulated by Bentham and Mill, is an ethical framework based on some idealized utility function.  Rather than using utility functions to model group decision-making, Bentham and Mill propose to use some utility function to guide decision-making, in the form of an ethical theory.  It is important to distinguish these two use-cases of utility functions, which might be termed descriptive utility and prescriptive utility.

But what is ethics?  I hold the hard-nosed position that moral philosophies (including utilitarianism) are human inventions which serve the purpose of facilitating large-scale coordination.  Another way of putting it is that moral philosophy is a manifestation of the limited superrationality that our species possesses.  [Side note: one might speculate that the intellectual aspect of human political behavior, of forming alliances based on shared ideals (including moral philosophies), is a memetic or genetic trait which propagated due to positive selection pressure: moral philosophy is necessary for the development of city-states and larger political entities, which in turn rose to become the dominant form of social organization in our species.  But this is a separate issue from the discussion at hand.]

In this larger context, we are in a position to evaluate the relative worth of a moral philosophy, such as utilitarianism, against competing philosophies.  If the purpose of a moral philosophy is to facilitate coordination, then an effective moral philosophy is one that can actually hope to achieve that kind of coordination.  Utilitarianism is a good candidate for facilitating global-level coordination: it is conceptually simple, most people can agree with its principles, and it provides a clear framework for decision-making, provided that a suitable utility function can be identified, or at least that the properties of the "ideal utility function" can be debated.  Furthermore, utilitarianism and related consequentialist moralities are arguably better equipped to handle tragedies of the commons than competing deontological theories.

And if we accept utilitarianism, and if our goal is to facilitate global coordination, we can go further and evaluate the properties of any proposed utility function by the same criterion as before: namely, how well the proposed utility function facilitates global coordination.  Will the proposed utility function find broad support among the key players in the global community?  Unbounded, linearly increasing utility functions clearly fail, because few people would support conclusions such as "it's worth spending all our resources to prevent a 0.001% chance that 1e100 human lives will be created and tortured."
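To spell out the arithmetic behind that rejection (taking the numbers in the quoted conclusion at face value, and assuming one utilon per life under the linear view), the expected loss is

$$0.001\% \times 10^{100}\ \text{lives} \;=\; 10^{-5} \times 10^{100}\ \text{lives} \;=\; 10^{95}\ \text{expected lives},$$

which, taken literally, justifies spending any finite amount of present-day resources.  A bounded utility function caps the loss term and blocks this conclusion.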

If so, why are such utility functions so dominant in the LW community?  One cannot overlook the biased composition of the LW community as a potential factor: its members are generally proficient in mathematical or logical thinking, but less adept than the general population at empathetic skills.  Oversimplified theories, such as linear unbounded utility functions, appeal more strongly to this type of thinker, while more realistic but complicated utility functions are instinctively dismissed as "illogical" or "irrational", when the real reason they are dismissed is not that they have actually been shown to be illogical, but that they are perceived as uglier.

Yet another reason stems from the motives of the founders of the LW community, who make a living primarily out of researching existential risk and friendly AI.  Since existential risks are exactly the kind of low-probability, long-term, high-impact events that tend to be neglected by "intuitive" bounded, nonlinear utility functions but favored by unintuitive, unbounded linear ones, it is in the founders' best interests to personally adopt a form of utilitarianism employing the latter type of utility function.

Finally, let me clarify that I do not dispute the existence of scope insensitivity.  I think the general population is ill-equipped to reason about problems on a global scale, and that education could help remedy this kind of scope insensitivity.  However, even if natural utility functions asymptote far too early, I doubt that the end result of proper training against scope insensitivity would be an unbounded linear utility function; rather, it would still be a nonlinear utility function, but which asymptotes at a larger scale.
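One way to picture this claim (the particular functional form below is only an illustration of mine, not a model of actual human preferences): write a saturating utility function as

$$u_s(n) = U_{\max}\left(1 - e^{-n/s}\right),$$

where $n$ is the quantity being valued.  Training against scope insensitivity would correspond to raising the saturation scale $s$, so that the function remains approximately linear over a wider range of $n$, while the bound $U_{\max}$ is never removed.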

Comments

I agree with you that unbounded utility functions are awful, but Eliezer made a more nuanced point in his post on scope insensitivity than you give him credit for. Suppose there are about 100 billion birds, and every year, about 10 million birds drown. Unless your utility function is very chaotic, it will be locally close to linear, so the difference in utility between 9,800,000 birds drowning this year and 10,000,000 birds drowning this year will be much larger than the difference in utility between 9,998,000 birds drowning this year and 10,000,000 birds drowning this year. Furthermore, even if you did have some weird threshold in your utility function, like you care a lot about whether or not fewer than 9,999,000 birds drown this year but you don't care so much about how far from this threshold you get, you don't know exactly how many birds will drown this year, so saving a much larger number of birds will result in a much higher chance of crossing your threshold. Thus being willing to spend about the same amount of resources to save 2,000 birds as to save 200,000 birds doesn't make any sense. None of this relies on your utility function being globally linear with respect to number of surviving birds.
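A small sketch makes the local-linearity point concrete; the bird counts follow the figures above, while the saturating utility function and its scale are arbitrary illustrative choices:

```python
# Local near-linearity of a smooth, bounded utility function.  The bird counts
# follow the comment above; the utility function's form and SCALE are arbitrary
# illustrative choices.

TOTAL_BIRDS = 100e9   # roughly 100 billion birds
SCALE = 1e9           # saturation scale of the toy utility function

def utility(surviving_birds):
    # Bounded: approaches 1 as the number of surviving birds grows.
    return surviving_birds / (surviving_birds + SCALE)

u_baseline  = utility(TOTAL_BIRDS - 10_000_000)  # 10,000,000 birds drown
u_save_200k = utility(TOTAL_BIRDS -  9_800_000)  #  9,800,000 birds drown
u_save_2k   = utility(TOTAL_BIRDS -  9_998_000)  #  9,998,000 birds drown

print((u_save_200k - u_baseline) / (u_save_2k - u_baseline))  # ~100
```

The ratio comes out close to 100 because a smooth bounded function is well-approximated by its tangent line over a range that is tiny relative to the saturation scale, which is exactly the point: saving 200,000 birds should be worth roughly 100 times as much as saving 2,000, even though the function is globally nonlinear.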

Although utility functions can also be used to describe ethical systems, they are primarily designed to model preferences of individual agents, and I think your comments about moral philosophy are mostly irrelevant for that.

few people would support conclusions such as "it's worth spending all our resources to prevent a 0.001% chance that 1e100 human lives will be created and tortured."

A 0.001% chance of 10^100 humans being created just to be tortured would actually freak me out. Unless you were being literal about "all" of our resources, I think you should use a smaller probability.

If you view morality as entirely a means of civilisational co-ordination, then you're already immune to Pascal's Mugging, because you don't have any reason to care in the slightest about simulated people who exist entirely outside the scope of your morality. So why bother talking about how to bound the utility of something to which you essentially assign zero utility in the first place?

Or, to be a little more polite and turn the criticism around, if you do actually care a little bit about a large number of hypothetical extra-universal simulated beings, you need to find a different starting point for describing those feelings than facilitating civilisational co-ordination. In particular, the question of what sorts of probability trade-offs the existing population of earth would make (which seems to be the fundamental point of your argument) is informative, but far from the be-all and end-all of how to consider this topic.

I hold the hard-nosed position that moral philosophies (including utilitarianism) are human inventions which serve the purpose of facilitating large-scale coordination.

This is one part of morality, but there also seems to be the question of what to (coordinate to) do/build.