Wei_Dai comments on A Sketch of an Anti-Realist Metaethics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Take someone who talks (or reads) themselves into utilitarianism or egoism. This seems to have real consequences for their actions, for example:
Presumably, when that writer "converted" to utilitarianism, the positive emotions of "rescuing lost puppies" or "personally volunteering" did not go away, but he chose to override those emotions. (Or if they did go away, that's a result of converting to utilitarianism, not the cause.)
I don't think changes in hormone level could explain "converting" to utilitarianism or egoism, but I do leave open the more general possibility that all moral changes are essentially "internal". If someone could conclusively show that, I think the anti-realist position would be much stronger.
So a couple points:
First, I'm reluctant to use Less Wrong posters as a primary data set because Less Wrong posters are far from neurotypical. A lot of hypotheses about autism involve... wait for it... amygdala abnormality.
Second, I think it is very rare for people to change their behavior when they adopt a new normative theory. Note that all the more powerful arguments in normative theory involve thought experiments designed to evoke an emotional response. People usually adopt a normative theory because it does a good job explaining the emotional intuitions they already possess.
Third, a realist account of changing moral beliefs is really metaphysically strange. Does anyone think we should be updating P(utilitarianism) based on evidence we gather? What would that evidence look like? If an anti-realist metaphysics gives us a natural account of what is really happening when we think we're responding to moral arguments then shouldn't anti-realism be the most plausible candidate?
This part of a previous reply to Richard Chappell seems relevant here also:
In other words, suppose I think I'm someone who would change my behavior when I adopt a new normative theory. Is your meta-ethical position still relevant to me?
If nothing else, my normative theory could change what I program into an FAI, in case I get the chance to do something like that. What does your metaethics imply for someone in this kind of situation? Should I, for example, not think too much about normative ethics, and when the time comes just program into the FAI whatever I feel like at that time? In case you don't have an answer now, do you think the anti-realist approach will eventually offer an answer?
I think we currently don't have a realist account of changing moral beliefs that is metaphysically not strange. But given that metaphysics is overall still highly confusing and unsettled, I don't think this is a strong argument in favor of anti-realism. For example what is the metaphysics of mathematics, and how does that fit into a realist account of changing mathematical beliefs?
What the anti-realist theory of moral change says is that terminal values don't change in response to reasons or evidence. So if you have a new normative theory and a new set of behaviors, anti-realism predicts that either your map has changed or your terminal values have changed internally and you took up the new normative theory as a rationalization of those new values.
I wonder if you, or anyone else, can give me some example reasons for changing one's normative theory. I suspect that most, if not all, such reasons which actually lead to a behavior change will either involve evoking emotion or updating the map (i.e., something like "your normative theory ignores this class of suffering").
Good question that I could probably turn into a full post. Anti-realism doesn't get rid of normative ethics exactly, it just redefines what we mean by it. We're not looking for some theory that describes a set of facts about the world. Rather, we're trying to describe the moral subroutine in our utility function. In a sense, it deflates the normative project into something a lot like coherent extrapolated volition. Of course, anti-realism also constrains what methods we should expect to be successful in normative theory and what kinds of features we should expect an ideal normative theory to have. For example, since the morality function is a biological and cultural creation we shouldn't be surprised to find out that it is weirdly context dependent, kludgey or contradictory. We should also expect to uncover natural variance between utility functions. Anti-realism also suggests that descriptive moral psychology is a much more useful tool for forming an ideal normative theory than, say, abstract reasoning.
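The "kludgey or contradictory" point can be made concrete: if the morality function elicited from our intuitions contains a preference cycle, no single utility function can represent it. A minimal sketch, with purely hypothetical pairwise preferences invented for illustration:

```python
from itertools import permutations

# Hypothetical pairwise moral judgments elicited from intuition.
# prefers[(a, b)] == True means "a is judged better than b".
prefers = {
    ("save_five_strangers", "save_one_friend"): True,  # utilitarian framing
    ("save_one_friend", "keep_promise"): True,         # loyalty framing
    ("keep_promise", "save_five_strangers"): True,     # deontic framing
}

def is_transitive(prefers):
    """Return False if any preference cycle a > b > c > a exists."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a)):
            return False
    return True

print(is_transitive(prefers))  # False: a cycle, so no utility function fits
```

The point of the sketch is only that a context-dependent, biologically assembled value system can fail the consistency conditions a utility function requires, which is what the anti-realist picture predicts we might find.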
I actually think an approach similar to the one in this post might clarify the mathematics question (I think mathematics could be thought of as a set of meta-truths about our map and the language we use to draw the map). In any case, it seems obvious to me that the situations of mathematics and morality are asymmetric in important ways. Can you tell an equally plausible story about why we believe mathematical statements are true even though they are actually false? In particular, the intensive use of mathematics in our formulation of scientific theories seems to give it a secure footing that morality does not have.
In your view, is there such a thing as the best rationalization of one's values, or is any rationalization as good as another? If there is a best rationalization, what are its properties? For example, should I try to make my normative theory fit my emotions as closely as possible, or also take simplicity and/or elegance into consideration? What if, as seems likely, I find out that the most straightforward translation of my emotions into a utility function gives a utility function based on a crazy ontology, and it's not clear how to translate my emotions into a utility function based on the true ontology of the world (or my current best guess as to the true ontology)? What should I do then?
The problem is, we do not have a utility function. If we want one, we have to construct it, which inevitably involves lots of "deliberative thinking". If the deliberative thinking module gets to have lots of say anyway, why can't it override the intuitive/emotional modules completely? Why does it have to take its cues from the emotional side, and merely "rationalize"? Or do you think it doesn't have to, but it should?
Unfortunately, I don't see how descriptive moral psychology can help me to answer the above questions. Do you? Or does anti-realism offer any other ideas?
What counts as a virtue in any model depends on what you're using that model for. If you're chiefly concerned with accuracy, then you want your normative theory to fit your values as closely as possible. But maybe the most accurate model takes too long to run on your hardware; in that case you might prefer a simpler, more elegant model. Maybe there are hard limits to how accurate we can make such models, and we will be willing to settle for good enough.
Whatever our best ontology is, it will always have some loose analog in our evolved, folk ontology. So we should try our best to make them fit. There will always be weird edge cases that arise as our ontology improves and our circumstances diverge from our ancestors', e.g., "are fetuses in the class of things we should have empathy for?" Expecting evolution to have encoded an elegant set of principles in the true ontology is obviously crazy. There isn't much one can do about that if you want to preserve your values. You could decide that you care more about obeying a simple, elegant moral code than you do about your moral intuition/emotional response (perhaps because you have a weak or abnormal emotional response to begin with). Whether you should do one or the other is just a meta-moral judgment, and people will have different answers because the answer depends on their psychological disposition. But I think realizing that we aren't talking about facts, but trying to describe what we value, makes elegance and simplicity seem less important.
I dispute the assumption that my emotions represent my values. Since the part of me that has to construct a utility function (let's say for the purpose of building an FAI) is the deliberative thinking part, why shouldn't I (i.e., that part of me) dis-identify with my emotional side? Suppose I do, then there's no reason for me to rationalize "my" emotions (since I view them as just the emotions of a bunch of neurons that happen to be attached to me). Instead, I could try to figure out from abstract reasoning alone what I should value (falling back to nihilism if ultimately needed).
According to anti-realism, this is just as valid a method of coming up with a normative theory as any other (that somebody might have the psychological disposition to choose), right?
Alternatively, what if I think the above may be something I should do, but I'm not sure? Does anti-realism offer any help besides that it's "just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition"?
A superintelligent moral psychologist might tell me that there is one text file, which if I were to read it, would cause me to do what I described earlier, and another text file which would cause me to choose to rationalize my emotions instead, and therefore I can't really be said to have an intrinsic psychological disposition in this matter. What does anti-realism say is my morality in that case?
Me too. There are people who consistently judge that their morality has "too little" motivational force, and there are people who perceive their morality to have "too much" motivational force. And there are people who deem themselves under-motivated by certain moral ideals and over-motivated by others. None of these would seem possible if moral beliefs simply echoed (projected) emotion. (One could, of course, object to one's past or anticipated future motivation, but not one's present; nor could the long-term averages disagree.)
See "weak internalism". There can still be competing motivational forces and non-moral emotions.
First, this scenario is just impossible. One cannot dis-identify from one's 'emotional side'. That's not a thing. If someone thinks they're doing that, they've probably smuggled their emotions into their abstract reasons (see, for example, Kant). Second, it seems silly, even dumb, to give up on making moral judgments and become a nihilist just because you'd like there to be a way to determine moral principles from abstract reasoning alone. Most people are attached to their morality and would like to go on making judgments. If someone has such a strong psychological need to derive morality through abstract reasoning alone that they're just going to give up morality: so be it, I guess. But that would be a very not-normal person and not at all the kind of person I would want to have programming an FAI.
But yes: ultimately my values enter into it, and my values may not be everyone else's. So of course there is no fact of the matter about the "right" way to do something. Nevertheless, there are still no moral facts.
You seem to be asking anti-realism to supply you with answers to normative questions. But what anti-realism tells you is that such questions don't have factual answers. I'm telling you what morality is. To me, the answer has some implications for FAI but anti-realism certainly doesn't answer questions that it says there aren't answers to.
In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I'm supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can't I dis-identify from them?
That doesn't stop me from trying. In fact moral psychology could be a great help in preventing such "contamination".
If those questions don't have factual answers, then I could answer them any way I want, and not be wrong. On the other hand, if they do have factual answers, then I had better use my abstract reasoning skills to find out what those answers are. So why shouldn't I make realism the working assumption, if I'm even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn't matter anyway: whatever answers I get from using that assumption, including nihilism, still can't be wrong. (If I actually choose to make that assumption, then I must have a psychological disposition to make that assumption. So anti-realism would say that whatever normative theory I form under that assumption is my actual morality. Right?)
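This argument has the shape of a dominance argument under uncertainty: assuming realism can't do worse in any case, and does strictly better in one. A toy sketch with hypothetical 0/1 payoffs, purely illustrative:

```python
# Toy dominance check for "make realism the working assumption".
# Rows: which assumption I act on; columns: which metaethics is true.
# Payoff 1 = my answers can't be faulted, 0 = I missed the real answers.
payoffs = {
    ("assume_realism", "realism_true"): 1,        # I looked for the facts
    ("assume_realism", "anti_realism_true"): 1,   # no facts to be wrong about
    ("assume_anti_realism", "realism_true"): 0,   # I ignored the real answers
    ("assume_anti_realism", "anti_realism_true"): 1,
}

def weakly_dominates(a, b, worlds):
    """a weakly dominates b: at least as good in every world, better in one."""
    at_least = all(payoffs[(a, w)] >= payoffs[(b, w)] for w in worlds)
    strictly = any(payoffs[(a, w)] > payoffs[(b, w)] for w in worlds)
    return at_least and strictly

worlds = ["realism_true", "anti_realism_true"]
print(weakly_dominates("assume_realism", "assume_anti_realism", worlds))  # True
```

The payoff values are the contested premise, of course; the reply below ("So Pascal's Wager?") is effectively disputing whether "not being wrong" exhausts what's at stake in each cell.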
Can you answer the last question in the grandparent comment, which was asking just this sort of question?
That's true as stated, but "not being wrong" isn't the only thing you care about. According to your current morality, those questions have moral answers, and you shouldn't answer them any way you want, because that could be evil.
I'm not sure I actually understand what you mean by "dis-identify".
So Pascal's Wager?
In any case, while there aren't wrong answers, there are still immoral ones. There is no fact of the matter about normative ethics, but there are still hypothetical AIs that do evil things.
Which question exactly?
Autism gets way over-emphasized here and elsewhere as a catch-all diagnosis for mental oddity. Schizotypality and obsessive-compulsive spectrum conditions are just as common near the far right of the rationalist ability curve. (Both of those are also associated with lots of pertinent abnormalities of the insula, anterior cingulate cortex, dorsolateral prefrontal cortex, et cetera. However I've found that fMRI studies tend to be relatively meaningless and shouldn't be taken too seriously; it's not uncommon for them to contradict each other despite high claimed confidence.)
I'm someone who "talks (or reads) myself into" new moral positions pretty regularly and thus could possibly be considered an interesting case study. I got an fMRI done recently and can probably persuade the researchers to give me a summary of their subsequent analysis. My brain registered absolutely no visible change during the two hours of various tasks I did while in the fMRI (though you could see my eyes moving around so it was clearly working); the guy sounded somewhat surprised at this but said that things would show up once the data gets sent to the lab for analysis. I wonder if that's common. (At the time I thought, "maybe that's because I always feel like I'm being subjected to annoying trivial tests of my ability to jump through pointless hoops" but besides sounding cool that's probably not accurate.) Anyway, point is, I don't yet know what they found.
(I'm not sure I'll ever be able to substantiate the following claim except by some day citing people who agree with me, 'cuz it's an awkward subject politically, but: I think the evidence clearly shows that strong aneurotypicality is necessary but not sufficient for being a strong rationalist. The more off-kilter your mind is the more likely you are to just be crazy, but the more likely you are to be a top tier rationalist, up to the point where the numbers get rarer than one per billion. There are only so many OCD-schizotypal IQ>160 folk. I didn't state that at all clearly but you get the gist, maybe.)
Can you talk about some of the arguments that led you to taking new moral positions? Obviously I'm not interested in cases where new facts changed how you thought ethics should be applied, but cases where your 'terminal values' changed in response to something.
That's difficult because I don't really believe in 'terminal values', so everything looks like "new facts" that change how my "ethics" should be applied. (ETA: Like, falling in love with a new girl or a new piece of music can look like learning a new fact about the world. This perspective makes more sense after reading the rest of my comment.) Once you change your 'terminal values' enough they stop looking so terminal and you start to get a really profound respect for moral uncertainty and the epistemic nature of shouldness. My morality is largely directed at understanding itself. So you could say that one of my 'terminal values' is 'thinking things through from first principles', but once you're that abstract and that meta it's unclear what it means for it to change rather than, say, just a change in emphasis relative to something else like 'going meta' or 'justification for values must be even better supported than justification for beliefs' or 'arbitrariness is bad'. So it's not obvious at which level of abstraction I should answer your question.
Like, your beliefs get changed constantly whereas methods only get changed during paradigm shifts. The thing is that once you move that pattern up a few levels of abstraction, where your simple belief update is equivalent to another person's paradigm shift, it gets hard to communicate in a natural way. Like, for the 'levels of organization' flavor of levels of abstraction, consider the difference between "I love Jane more than any other woman and would trade the world for her" and "I love humanity more than any other memeplex instantiation and would trade the multiverse for it". It is hard for those two values to communicate with each other in an intelligible way; if they enter into an economy with each other, it's like they'd be making completely different kinds of deals. Communication is difficult and the inferential distance here is way too big.
To be honest I think that though efforts like this post are well-intentioned and thus should be promoted to the extent that they don't give people an excuse to not notice confusion, Less Wrong really doesn't have the necessary set of skills or knowledge to think about morality (ethics, meta-ethics) in a particularly insightful manner. Unfortunately I don't think this is ever going to change. But maybe five years' worth of posts like this, at many levels of abstraction and drawing on many different sciences and perspectives, will lead somewhere? But people won't even do that. Ahem.
Like, there's a point at which object-level uncertainty looks like "should I act as if I am being judged by agents with imperfect knowledge of the context of my decisions, or should I act as if I am being judged by an omniscient agent, or should I act as if I need to appease both simultaneously, or ..."; you can go meta here in the abstract to answer this object-level moral problem, but one of my many points is that at this point it just looks nothing like 'is killing good or bad?' or 'should I choose for the Nazis to kill my son, or my daughter (considering they've forced this choice upon me)?'.
I remember that when I was like 11 years old I used to lie awake at night obsessing about variations on Sophie's choice problems. Those memories are significantly more vivid than my memories of living off ramen and potatoes with no electricity for a few months at around the same age. (I remember thinking that by far the worst part of this was the cold showers, though I still feel negative affect towards ramen (and eggs, which were also cheap).) I feel like that says something about my psychology.