Years ago, I wrote an unfinished sequence of posts called "No-Nonsense Metaethics." My last post, Pluralistic Moral Reductionism, said I would next explore "empathic metaethics," but I never got around to writing those posts. Recently, I wrote a high-level summary of some initial thoughts on "empathic metaethics" in section 6.1.2 of a report prepared for my employer, the Open Philanthropy Project. With my employer's permission, I've adapted that section for publication here, so that it can serve as the long-overdue concluding post in my sequence on metaethics.
In my previous post, I distinguished "austere metaethics" and "empathic metaethics," where austere metaethics confronts moral questions roughly like this:
Tell me what you mean by 'right', and I will tell you what is the right thing to do. If by 'right' you mean X, then Y is the right thing to do. If by 'right' you mean P, then Z is the right thing to do. But if you can't tell me what you mean by 'right', then you have failed to ask a coherent question, and no one can answer an incoherent question.
Meanwhile, empathic metaethics says instead:
You may not know what you mean by 'right.' But let's not stop there. Here, let me come alongside you and help decode the cognitive algorithms that generated your question in the first place, and then we'll be able to answer your question. Then we can tell you what the right thing to do is.
Below, I provide a high-level summary of some of my initial thoughts on what one approach to "empathic metaethics" could look like.
Given my metaethical approach, when I make a "moral judgment" about something (e.g. about which kinds of beings are moral patients), I don't conceive of myself as perceiving an objective moral truth, or coming to know an objective moral truth via a series of arguments. Nor do I conceive of myself as merely expressing my moral feelings as they stand today. Rather, I conceive of myself as making a conditional forecast about what my values would be if I underwent a certain "idealization" or "extrapolation" procedure (coming to know more true facts, having more time to consider moral arguments, etc.).[1]
Thus, in a (hypothetical) "extreme effort" attempt to engage in empathic metaethics (for thinking about my own moral judgments), I would do something like the following:
I would try to make the scenario I'm aiming to forecast as concrete as possible, so that my brain is able to treat it as a genuine forecasting challenge, akin to participating in a prediction market or forecasting tournament, rather than as a fantasy about which my brain feels "allowed" to make up whatever story feels nice, or signals my values to others, or achieves something else that isn't forecasting accuracy.[2] In my case, I concretize the extrapolation procedure as one involving a large population of copies of me who learn many true facts, consider many moral arguments, and undergo various other experiences, and then collectively advise me about what I should value and why.[3]
However, I would also try to make forecasts I can actually check for accuracy, e.g. about what my moral judgments about various cases will be 2 months in the future.
When making these forecasts, I would try to draw on the best research I've seen concerning how to make accurate estimates and forecasts. For example, I would try to "think like a fox, not like a hedgehog," and I've already done several hours of probability calibration training and some amount of forecasting training.[4]
Clearly, my current moral intuitions would serve as one important source of evidence about what my extrapolated values might be. However, recent findings in moral psychology and related fields lead me to assign more evidential weight to some moral intuitions than to others. More generally, I interpret my current moral intuitions as data generated partly by my moral principles, and partly by various "error processes" (e.g. a hard-wired disgust reaction to spiders, which I don't endorse upon reflection). Doing so allows me to make use of some standard lessons from statistical curve-fitting when thinking about how much evidential weight to assign to particular moral intuitions.[5]
As part of forecasting what my extrapolated values might be, I would try to consider different processes and contexts that could generate alternate moral intuitions in moral reasoners both similar and dissimilar to my current self, and I would try to consider how I feel about the "legitimacy" of those mechanisms as producers of moral intuitions. For example, I might ask myself questions such as "How might I feel about that practice if I had been born into a world in which it was already commonplace?" and "How might I feel about that case if my built-in (and largely unconscious) processes for associative learning and imitative learning had been exposed to different life histories than my own?" and "How might I feel about that case if I had been born in a different century, or a different country, or with a greater propensity for clinical depression?" and "How might a moral reasoner on another planet feel about that case if it belonged to a more strongly r-selected species (compared to humans) but had roughly human-like general reasoning ability?"[6]
Observable patterns in how people's values change (seemingly) in response to components of my proposed extrapolation procedure (learning more facts, considering moral arguments, etc.) would serve as another source of evidence about what my extrapolated values might be. For example, the correlation between aggregate human knowledge and our "expanding circle of moral concern" (Singer 2011) might (very weakly) suggest that, if I continued to learn more true facts, my circle of moral concern would continue to expand. Unfortunately, such correlations are badly confounded, and might not provide much evidence.[7]
Personal facts about how my own values have evolved as I've learned more, considered moral arguments, and so on, would serve as yet another source of evidence about what my extrapolated values might be. Of course, these relations are likely confounded as well, and need to be interpreted with care.[8]
1. This general approach sometimes goes by names such as "ideal advisor theory" or, arguably, "reflective equilibrium." Diverse sources explicating various extrapolation procedures (or fragments of extrapolation procedures) include: Rosati (1995); Daniels (2016); Campbell (2013); chapter 9 of Miller (2013); Muehlhauser & Williamson (2013); Trout (2014); Yudkowsky's "Extrapolated volition (normative moral theory)" (2016); Baker (2016); Stanovich (2004), pp. 224-275; Stanovich (2013).
2. For more on forecasting accuracy, see this blog post. My use of research on the psychological predictors of forecasting accuracy for the purposes of doing moral philosophy is one example of my support for the use of "ameliorative psychology" in philosophical practice — see e.g. Bishop & Trout (2004, 2008).
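To illustrate what checking such forecasts for accuracy can look like in practice, here is a minimal sketch, in Python, of scoring a batch of resolved probabilistic forecasts with a Brier score and a crude calibration table. The numbers are made-up illustrative data, not anything from the report.

```python
# A toy, hypothetical example (not from the report): scoring probabilistic
# forecasts, e.g. "what will my judgment about case X be in 2 months?",
# once their outcomes are known.

from collections import defaultdict

forecasts = [0.9, 0.7, 0.6, 0.3, 0.8, 0.2]  # stated probabilities that each claim turns out true
outcomes = [1, 1, 0, 0, 1, 1]               # 1 = it did turn out that way, 0 = it did not

# Brier score: mean squared difference between stated probability and outcome.
# 0.0 is perfect; always answering 0.5 scores 0.25.
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Crude calibration check: within each stated-probability bucket, does the
# observed frequency of "true" roughly match the stated probability?
buckets = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    buckets[round(p, 1)].append(o)

for p in sorted(buckets):
    hits = buckets[p]
    print(f"stated {p:.1f} -> observed {sum(hits) / len(hits):.2f} (n={len(hits)})")
```

With only a handful of forecasts the buckets are nearly empty, so in practice one would accumulate many resolved forecasts before reading much into such a calibration table.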
3. Specifically, the scenario I try to imagine (and make conditional forecasts about) looks something like this:
In the distant future, I am non-destructively "uploaded." In other words, my brain and some supporting cells are scanned at a fine enough spatial and chemical resolution that, when this scan is combined with accurate models of how different cell types carry out their information-processing functions, one can create an executable computer model of my brain that matches my biological brain's input-output behavior almost exactly. This whole brain emulation ("em") is then connected to a virtual world: computed inputs are fed to the em's (now virtual) signal transduction neurons for sight, sound, etc., and computed outputs from the em's virtual arm movements, speech, etc. are received by the virtual world, which computes appropriate changes in response. (I don't think anything remotely like this will ever happen, but as far as I know it is a physically possible world that can be described in some detail; for one attempt, see Hanson 2016.) Given functionalism, this "em" has the same memories, personality, and conscious experience that I have, though it experiences quite a shock when it awakens to a virtual world that might look and feel somewhat different from the "real" world.
This initial em is copied thousands of times. Some of the copies interact inside the same virtual world, while other copies are placed inside isolated virtual worlds.
Then, these ems spend a very long time (a) collecting and generating arguments and evidence about morality and related topics, (b) undergoing various experiences, in varying orders, and reflecting on those experiences, (c) dialoguing with ems sourced from other biological humans who have different values than I do, and perhaps with sophisticated chat-bots meant to simulate the plausible reasoning of other types of people (from the past, or from other worlds) who were not available to be uploaded, and so on. They are able to do these things for a very long time because they and their virtual worlds are run at speeds thousands of times faster than my biological brain runs, allowing subjective eons to pass in mere months of "objective" time.
Finally, at some point, the ems dialogue with each other about which values seem "best," they engage in moral trade (Ord 2015), and they try to explain to me what values they think I should have and why. In the end, I am not forced to accept any of the values they then hold (collectively or individually), but I am able to come to much better-informed moral judgments than I could have without their input.
For more context on this sort of values extrapolation procedure, see Muehlhauser & Williamson (2013).
4. For more on forecasting "best practices," see this blog post.
5. Following Hanson (2002) and ch. 2 of Beckstead (2013), I consider my moral intuitions in the context of Bayesian curve-fitting. To explain, I'll quote Beckstead (2013) at some length:
Curve fitting is a problem frequently discussed in the philosophy of science. In the standard presentation, a scientist is given some data points, usually with an independent variable and a dependent variable, and is asked to predict the values of the dependent variable given other values of the independent variable. Typically, the data points are observations, such as "measured height" on a scale or "reported income" on a survey, rather than true values, such as height or income. Thus, in making predictions about additional data points, the scientist has to account for the possibility of error in the observations. By an error process I mean anything that makes the observed values of the data points differ from their true values. Error processes could arise from a faulty scale, failures of memory on the part of survey participants, bias on the part of the experimenter, or any number of other sources. While some treatments of this problem focus on predicting observations (such as measured height), I'm going to focus on predicting the true values (such as true height).
…For any consistent data set, it is possible to construct a curve that fits the data exactly… If the scientist chooses one of these polynomial curves for predictive purposes, the result will usually be overfitting, and the scientist will make worse predictions than he would have if he had chosen a curve that did not fit the data as well, but had other virtues, such as a straight line. On the other hand, always going with the simplest curve and giving no weight to the data leads to underfitting…
I intend to carry over our thinking about curve fitting in science to reflective equilibrium in moral philosophy, so I should note immediately that curve fitting is not limited to the case of two variables. When we must understand relationships between multiple variables, we can turn to multiple-dimensional spaces and fit planes (or hyperplanes) to our data points. Different axes might correspond to different considerations which seem relevant (such as total well-being, equality, number of people, fairness, etc.), and another axis could correspond to the value of the alternative, which we can assume is a function of the relevant considerations. Direct Bayesian updating on such data points would be impractical, but the philosophical issues will not be affected by these difficulties.
…On a Bayesian approach to this problem, the scientist would consider a number of different hypotheses about the relationship between the two variables, including both hypotheses about the phenomena (the relationship between X and Y) and hypotheses about the error process (the relationship between observed values of Y and true values of Y) that produces the observations…
…Lessons from the Bayesian approach to curve fitting apply to moral philosophy. Our moral intuitions are the data, and there are error processes that make our moral intuitions deviate from the truth. The complete moral theories under consideration are the hypotheses about the phenomena. (Here, I use "theory" broadly to include any complete set of possibilities about the moral truth. My use of the word "theory" does not assume that the truth about morality is simple, systematic, and neat rather than complex, circumstantial, and messy.) If we expect the error processes to be widespread and significant, we must rely on our priors more. If we expect the error processes to be, in addition, biased and correlated, then we will have to rely significantly on our priors even when we have a lot of intuitive data.
Beckstead then summarizes the framework with a table (p. 32), edited here to fit LessWrong's formatting (a toy computational sketch of the same setup follows the table):
Hypotheses about phenomena
(Science) Different trajectories of a ball that has been dropped
(Moral Philosophy) Moral theories (specific versions of utilitarianism, Kantianism, contractualism, pluralistic deontology, etc.)
Hypotheses about error processes
(Science) Our position measurements are accurate on average, and are within 1 inch 95% of the time (with normally distributed error)
(Moral Philosophy) Different hypotheses about the causes of error in historical cases; cognitive and moral biases; different hypotheses about the biases that cause inconsistent judgments in important philosophical cases
Observations
(Science) Recorded position of a ball at different times, recorded with a certain clock
(Moral Philosophy) Intuitions about particular cases or general principles, and any other relevant observations
Background theory
(Science) The ball never bounces higher than the height it started at. The ball always moves along a continuous trajectory.
(Moral Philosophy) Meta-ethical or normative background theory (or theories)
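To make the curve-fitting analogy a bit more concrete, here is a toy sketch of my own (a hypothetical illustration, not Beckstead's or Hanson's model, and not code from the report): the "intuitions" are noisy observations of an underlying relationship, the candidate "theories" are polynomials of differing flexibility, and the assumed size of the error process determines how sharply the Bayesian evidence can discriminate between theories, and hence how much weight falls back on the prior.

```python
# Toy, hypothetical illustration of Bayesian curve-fitting with an explicit
# error process. "Intuitions" y are noisy observations of a true relationship;
# candidate "theories" are polynomials of degree 1 (simple) and 5 (flexible).
# The log marginal likelihood (evidence) is computed under different assumed
# error magnitudes, to show how the assumed size of the error process changes
# how strongly the data can discriminate between the theories.

import numpy as np

rng = np.random.default_rng(0)

# x = some feature of a case; y = strength of the intuition it elicits.
x = np.linspace(-1.0, 1.0, 12)
y = 0.8 * x + rng.normal(scale=0.3, size=x.size)  # truly linear, plus an error process

def log_evidence(x, y, degree, noise_sd, prior_sd=1.0):
    """Log marginal likelihood of a Bayesian polynomial model.

    Model: y = Phi(x) @ w + eps, with w ~ N(0, prior_sd^2 I) and
    eps ~ N(0, noise_sd^2 I), so marginally y ~ N(0, C) where
    C = noise_sd^2 I + prior_sd^2 Phi Phi^T.
    """
    Phi = np.vander(x, degree + 1, increasing=True)  # columns: 1, x, x^2, ...
    C = noise_sd**2 * np.eye(x.size) + prior_sd**2 * (Phi @ Phi.T)
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return -0.5 * (x.size * np.log(2 * np.pi) + logdet + quad)

for noise_sd in (0.1, 0.5):  # assumed magnitude of the error process
    ev = {deg: log_evidence(x, y, deg, noise_sd) for deg in (1, 5)}
    print(f"assumed error sd {noise_sd}: log evidence "
          f"degree-1 = {ev[1]:.1f}, degree-5 = {ev[5]:.1f}")
```

Nothing here depends on the toy numbers; the point is only that the machinery Beckstead describes (hypotheses about the phenomena, hypotheses about the error process, priors, and observations) can be written down explicitly, and that treating intuitions as error-free observations is itself a substantive modeling choice.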
6. For more on this, see my conversation with Carl Shulman, O'Neill (2015), the literature on the evolution of moral values (e.g. de Waal et al. 2014; Sinnott-Armstrong & Miller 2007; Joyce 2005), the literature on moral psychology more generally (e.g. Graham et al. 2013; Doris 2010; Liao 2016; Christen et al. 2014; Sunstein 2005), the literature on how moral values vary between cultures and eras (e.g. see Flanagan 2016; Inglehart & Welzel 2010; Pinker 2011; Morris 2015; Friedman 2005; Prinz 2007, pp. 187-195), and the literature on moral thought experiments (e.g. Tittle 2004, ch. 7). See also Wilson (2016)'s comments on internal and external validity in ethical thought experiments, and Bakker (2017) on "alien philosophy."
I do not read much fiction, but I suspect that some types of fiction — e.g. historical fiction, fantasy, and science fiction — can help readers to temporarily transport themselves into fully-realized alternate realities, in which readers can test how their moral intuitions differ when they are temporarily "lost" in an alternate world.
7. There are many sources which discuss how people's values seem to change along with (and perhaps in response to) components of my proposed extrapolation procedure, such as learning more facts, reasoning through more moral arguments, and dialoguing with others who have different values. See e.g. Inglehart & Welzel (2010), Pinker (2011), Shermer (2015), and Buchanan & Powell (2016). See also the literatures on "enlightened preferences" (Althaus 2003, chs. 4-6) and on "deliberative polling."
8. For example, as I've learned more, considered more moral arguments, and dialogued more with people who don't share my values, my moral values have become more "secular-rational" and "self-expressive" (Inglehart & Welzel 2010), more geographically global, more extensive (e.g. throughout more of the animal kingdom), less person-affecting, and subject to greater moral uncertainty (Bykvist 2017).