Wei_Dai comments on A Sketch of an Anti-Realist Metaethics - Less Wrong

Post author: Jack 22 August 2011 05:32AM


Comment author: Wei_Dai 22 August 2011 07:58:47PM 4 points

This part of a previous reply to Richard Chappell seems relevant here also:

suppose the main reason I'm interested in metaethics is that I am trying to answer a question like "Should I terminally value the lives of random strangers?" and I'm not sure what that question means exactly or how I should go about answering it. In this case, is there a reason for me to care much about the pre-theoretic grasp of most people, as opposed to, say, people I think are most likely to be right about morality?

In other words, suppose I think I'm someone who would change my behavior when I adopt a new normative theory. Is your meta-ethical position still relevant to me?

If nothing else, my normative theory could change what I program into an FAI, in case I get the chance to do something like that. What does your metaethics imply for someone in this kind of situation? Should I, for example, not think too much about normative ethics, and when the time comes just program into the FAI whatever I feel like at that time? In case you don't have an answer now, do you think the anti-realist approach will eventually offer an answer?

Third, a realist account of changing moral beliefs is really metaphysically strange.

I think we currently don't have a realist account of changing moral beliefs that is metaphysically not strange. But given that metaphysics is overall still highly confusing and unsettled, I don't think this is a strong argument in favor of anti-realism. For example what is the metaphysics of mathematics, and how does that fit into a realist account of changing mathematical beliefs?

Comment author: Jack 22 August 2011 09:31:22PM 1 point

In other words, suppose I think I'm someone who would change my behavior when I adopt a new normative theory. Is your meta-ethical position still relevant to me?

What the anti-realist theory of moral change says is that terminal values don't change in response to reasons or evidence. So if you have a new normative theory and a new set of behaviors, anti-realism predicts that either your map has changed or your terminal values changed internally, and you took up a new normative theory as a rationalization of those new values.

I wonder if you, or anyone else, can give me some example reasons for changing one's normative theory. I suspect that most, if not all, such reasons that actually lead to a behavior change will involve either evoking emotion or updating the map (e.g. "your normative theory ignores this class of suffering").

If nothing else, my normative theory could change what I program into an FAI, in case I get the chance to do something like that. What does your metaethics imply for someone in this kind of situation? Should I, for example, not think too much about normative ethics, and when the time comes just program into the FAI whatever I feel like at that time? In case you don't have an answer now, do you think the anti-realist approach will eventually offer an answer?

Good question that I could probably turn into a full post. Anti-realism doesn't get rid of normative ethics exactly, it just redefines what we mean by it. We're not looking for some theory that describes a set of facts about the world. Rather, we're trying to describe the moral subroutine in our utility function. In a sense, it deflates the normative project into something a lot like coherent extrapolated volition. Of course, anti-realism also constrains what methods we should expect to be successful in normative theory and what kinds of features we should expect an ideal normative theory to have. For example, since the morality function is a biological and cultural creation we shouldn't be surprised to find out that it is weirdly context dependent, kludgey or contradictory. We should also expect to uncover natural variance between utility functions. Anti-realism also suggests that descriptive moral psychology is a much more useful tool for forming an ideal normative theory than, say, abstract reasoning.

I think we currently don't have a realist account of changing moral beliefs that is metaphysically not strange. But given that metaphysics is overall still highly confusing and unsettled, I don't think this is a strong argument in favor of anti-realism. For example what is the metaphysics of mathematics, and how does that fit into a realist account of changing mathematical beliefs?

I actually think an approach similar to the one in this post might clarify the mathematics question (I think mathematics could be thought of as a set of meta-truths about our map and the language we use to draw the map). In any case, it seems obvious to me that the situations of mathematics and morality are asymmetric in important ways. Can you tell an equally plausible story about why we believe mathematical statements are true even though they are actually false? In particular, the intensive use of mathematics in our formulation of scientific theories seems to give it a secure footing that morality does not have.

Comment author: Wei_Dai 23 August 2011 07:49:35PM 5 points

So if you have a new normative theory and a new set of behaviors, anti-realism predicts that either your map has changed or your terminal values changed internally, and you took up a new normative theory as a rationalization of those new values.

In your view, is there such a thing as the best rationalization of one's values, or is any rationalization as good as another? If there is a best rationalization, what are its properties? For example, should I try to make my normative theory fit my emotions as closely as possible, or also take simplicity and/or elegance into consideration? What if, as seems likely, I find out that the most straightforward translation of my emotions into a utility function gives a utility function that is based on a crazy ontology, and it's not clear how to translate my emotions into a utility function based on the true ontology of the world (or my current best guess as to the true ontology)? What should I do then?

The problem is, we do not have a utility function. If we want one, we have to construct it, which inevitably involves lots of "deliberative thinking". If the deliberative thinking module gets to have lots of say anyway, why can't it override the intuitive/emotional modules completely? Why does it have to take its cues from the emotional side, and merely "rationalize"? Or do you think it doesn't have to, but it should?

Anti-realism also suggests that descriptive moral psychology is a much more useful tool for forming an ideal normative theory than, say, abstract reasoning.

Unfortunately, I don't see how descriptive moral psychology can help me to answer the above questions. Do you? Or does anti-realism offer any other ideas?

Comment author: Jack 23 August 2011 08:50:25PM 0 points

In your view, is there such a thing as the best rationalization of one's values, or is any rationalization as good as another? If there is a best rationalization, what are its properties? For example, should I try to make my normative theory fit my emotions as closely as possible, or also take simplicity and/or elegance into consideration?

What counts as a virtue in any model depends on what you're using that model for. If you're chiefly concerned with accuracy, then you want your normative theory to fit your values as much as possible. But maybe the most accurate model takes too long to run on your hardware; in that case you might prefer a simpler, more elegant model. Maybe there are hard limits to how accurate we can make such models, and we will be willing to settle for good enough.

What if, as seems likely, I find out that the most straightforward translation of my emotions into a utility function gives a utility function that is based on a crazy ontology, and it's not clear how to translate my emotions into a utility function based on the true ontology of the world (or my current best guess as to the true ontology)? What should I do then?

Whatever our best ontology is, it will always have some loose analog in our evolved, folk ontology. So we should try our best to make it fit. There will always be weird edge cases that arise as our ontology improves and our circumstances diverge from our ancestors', e.g. "are fetuses in the class of things we should have empathy for?" Expecting evolution to have encoded an elegant set of principles in the true ontology is obviously crazy. There isn't much you can do about it if you want to preserve your values. You could decide that you care more about obeying a simple, elegant moral code than you do about your moral intuition/emotional response (perhaps because you have a weak or abnormal emotional response to begin with). Whether you should do one or the other is just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition. But I think realizing that we aren't talking about facts but trying to describe what we value makes elegance and simplicity seem less important.

Comment author: Wei_Dai 23 August 2011 10:12:21PM 5 points

There isn't much you can do about it if you want to preserve your values.

I dispute the assumption that my emotions represent my values. Since the part of me that has to construct a utility function (let's say for the purpose of building an FAI) is the deliberative thinking part, why shouldn't I (i.e., that part of me) dis-identify with my emotional side? Suppose I do, then there's no reason for me to rationalize "my" emotions (since I view them as just the emotions of a bunch of neurons that happen to be attached to me). Instead, I could try to figure out from abstract reasoning alone what I should value (falling back to nihilism if ultimately needed).

According to anti-realism, this is just as valid a method of coming up with a normative theory as any other (that somebody might have the psychological disposition to choose), right?

Alternatively, what if I think the above may be something I should do, but I'm not sure? Does anti-realism offer any help besides that it's "just a meta moral judgment and people will have different answers because the answer depends on their psychological disposition"?

A superintelligent moral psychologist might tell me that there is one text file, which if I were to read it, would cause me to do what I described earlier, and another text file which would cause me to choose to rationalize my emotions instead, and therefore I can't really be said to have an intrinsic psychological disposition in this matter. What does anti-realism say is my morality in that case?

Comment author: torekp 24 August 2011 02:02:06AM 0 points

I dispute the assumption that my emotions represent my values.

Me too. There are people who consistently judge that their morality has "too little" motivational force, and there are people who perceive their morality to have "too much" motivational force. And there are people who deem themselves under-motivated by certain moral ideals and over-motivated by others. None of these would seem possible if moral beliefs simply echoed (projected) emotion. (One could, of course, object to one's past or anticipated future motivation, but not one's present; nor could the long-term averages disagree.)

Comment author: Jack 24 August 2011 03:02:57AM 0 points

See "weak internalism". There can still be competing motivational forces and non-moral emotions.

Comment author: Jack 23 August 2011 11:00:22PM 0 points

Since the part of me that has to construct a utility function (let's say for the purpose of building an FAI) is the deliberative thinking part, why shouldn't I (i.e., that part of me) dis-identify with my emotional side? Suppose I do, then there's no reason for me to rationalize "my" emotions (since I view them as just the emotions of a bunch of neurons that happen to be attached to me). Instead, I could try to figure out from abstract reasoning alone what I should value (falling back to nihilism if ultimately needed). According to anti-realism, this is just as valid a method of coming up with a normative theory as any other (that somebody might have the psychological disposition to choose), right?

First, this scenario is just impossible. One cannot dis-identify from one's 'emotional side'. That's not a thing. If someone thinks they're doing that, they've probably smuggled their emotions into their abstract reasons (see, for example, Kant). Second, it seems silly, even dumb, to give up on making moral judgments and become a nihilist just because you'd like there to be a way to determine moral principles from abstract reasoning alone. Most people are attached to their morality and would like to go on making judgments. If someone has such a strong psychological need to derive morality through abstract reasoning alone that they're just going to give up morality: so be it, I guess. But that would be a very not-normal person and not at all the kind of person I would want to have programming an FAI.

But yes, ultimately my values enter into it, and my values may not be everyone else's. So of course there is no fact of the matter about the "right" way to do something. Nevertheless, there are still no moral facts.

You seem to be asking anti-realism to supply you with answers to normative questions. But what anti-realism tells you is that such questions don't have factual answers. I'm telling you what morality is. To me, the answer has some implications for FAI but anti-realism certainly doesn't answer questions that it says there aren't answers to.

Comment author: Wei_Dai 23 August 2011 11:37:18PM 5 points

One cannot dis-identify from one's 'emotional side'. That's not a thing.

In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I'm supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can't I dis-identify from them?

If someone thinks they're doing that, they've probably smuggled their emotions into their abstract reasons (see, for example, Kant).

That doesn't stop me from trying. In fact moral psychology could be a great help in preventing such "contamination".

You seem to be asking anti-realism to supply you with answers to normative questions. But what anti-realism tells you is that such questions don't have factual answers.

If those questions don't have factual answers, then I could answer them any way I want, and not be wrong. On the other hand if they do have factual answers, then I better use my abstract reasoning skills to find out what those answers are. So why shouldn't I make realism the working assumption, if I'm even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn't matter anyway--whatever answers I get from using that assumption, including nihilism, still can't be wrong. (If I actually choose to make that assumption, then I must have a psychological disposition to make that assumption. So anti-realism would say that whatever normative theory I form under that assumption is my actual morality. Right?)

I'm telling you what morality is.

Can you answer the last question in the grandparent comment, which was asking just this sort of question?

Comment author: cousin_it 24 August 2011 12:46:53PM 2 points

If those questions don't have factual answers, then I could answer them any way I want, and not be wrong.

That's true as stated, but "not being wrong" isn't the only thing you care about. According to your current morality, those questions have moral answers, and you shouldn't answer them any way you want, because that could be evil.

Comment author: Wei_Dai 24 August 2011 07:12:07PM 3 points

When you say "you shouldn't answer them any way you want" are you merely expressing an emotional dissatisfaction, like Jack?

If it's meant to be more than an expression of emotional dissatisfaction, I guess "should" means "what my current morality recommends" and "evil" means "against my current morality", but what do you mean by "current morality"? As far as I can tell, according to anti-realism, my current morality is whatever morality I have the psychological disposition to construct. So if I have the psychological disposition to construct it using my intellect alone (or any other way), how, according to anti-realism, could that be evil?

Comment author: cousin_it 24 August 2011 10:12:50PM 1 point

By "current morality" I mean that the current version of you may dislike some outcomes of your future moral deliberations if Omega shows them to you in advance. It's quite possible that you have a psychological disposition to eventually construct a moral system that the current version of you will find abhorrent. For an extreme test case, imagine that your long-term "psychological dispositions" are actually coming from a random number generator; that doesn't mean you cannot make any moral judgments today.

Comment author: Jack 24 August 2011 01:24:04AM 1 point

In order to rationalize my emotions, I have to identify with them in the first place (as opposed to the emotions of my neighbor, say). Especially if I'm supposed to apply descriptive moral psychology, instead of just confabulating unreflectively based on whatever emotions I happen to feel at any given moment. But if I can identify with them, why can't I dis-identify from them?

I'm not sure I actually understand what you mean by "dis-identify".

If those questions don't have factual answers, then I could answer them any way I want, and not be wrong. On the other hand if they do have factual answers, then I better use my abstract reasoning skills to find out what those answers are. So why shouldn't I make realism the working assumption, if I'm even slightly uncertain that anti-realism is true? If that assumption turns out to be wrong, it doesn't matter anyway--whatever answers I get from using that assumption, including nihilism, still can't be wrong.

So Pascal's Wager?

In any case, while there aren't wrong answers, there are still immoral ones. There is no fact of the matter about normative ethics, but there are still hypothetical AIs that do evil things.

Which question exactly?

Comment author: Vladimir_Nesov 24 August 2011 01:40:42AM 0 points

In any case, while there aren't wrong answers, there are still immoral ones. There is no fact of the matter about normative ethics, but there are still hypothetical AIs that do evil things.

Then there is a fact of the matter about which answers are moral, and we might as well call those that aren't, "incorrect".

Comment author: wedrifid 24 August 2011 04:57:29AM 4 points

Then there is a fact of the matter about which answers are moral, and we might as well call those that aren't, "incorrect".

It seems like a waste to overload the meaning of the word "incorrect" to also include such things as "Fuck off! That doesn't satisfy socially oriented aspects of my preferences. I wish to enforce different norms!"

It really is useful to emphasize a carve in reality between 'false' and 'evil/bad/immoral'. Humans are notoriously bad at keeping the concepts distinct in their minds and allowing 'incorrect' (and related words) to be used for normative claims encourages even more motivated confusion.

Comment author: Jack 24 August 2011 02:14:03AM 1 point

No. Moral properties don't exist. What I'm doing, per the post, when I say "There are immoral answers" is expressing emotional dissatisfaction with certain answers.