Wei_Dai comments on A Defense of Naive Metaethics - Less Wrong

8 Post author: Will_Sawin 09 June 2011 05:46PM

Comment author: Wei_Dai 11 June 2011 07:57:11AM 4 points [-]

What do you mean by "safe to ignore"?

You said that you're not interested in an "ought" sentence if it does not reduce to talking about the world of "is". I was trying to make the same point by "safe to ignore".

If you're talking about something that doesn't reduce (even theoretically) into physics and/or a logical-mathematical function, then what are you talking about?

I don't know, but I don't think it's a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it's a good working assumption to guide your search for possible meanings of "should", but why declare that you're not "interested" in anything else? Couldn't you make that decision on a case by case basis, just in case there is a meaning of "should" that talks about something else besides physics and/or math and its interestingness will be apparent once you see it?

Or perhaps you're asking the question in the sense of "Please fix my broken question for me. I don't know what I mean by 'should'. Would you please do a stack trace on the cognitive algorithms that generated that question, fix my question, and then answer it for me?" And in that case we're doing empathic metaethics.

Maybe I should have waited until you finish your sequence after all, because I don't know what "doing empathic metaethics" actually entails at this point. How are you proposing to "fix my question"? It's not as if there is a design spec buried somewhere in my brain, and you can check my actual code against the design spec to see where the bug is... Do you want to pick up this conversation after you explain it in more detail?

Comment author: lukeprog 11 June 2011 04:59:58PM 2 points [-]

I don't think it's a good idea to assume that only things that are reducible to physics and/or math are worth talking about. I mean it's a good working assumption to guide your search for possible meanings of "should", but why declare that you're not "interested" in anything else?

Maybe this is because I'm fairly confident of physicalism? Of course I'll change my mind if presented with enough evidence, but I'm not anticipating such a surprise.

'Interest' wasn't the best word for me to use. I'll have to fix that. All I was trying to say is that if somebody uses 'ought' to refer to something that isn't physical or logical, then this punts the discussion back to a debate over physicalism, which isn't the topic of my already-too-long 'Pluralistic Moral Reductionism' post.

Surely, many people use 'ought' to refer to things non-reducible to physics or logic, and such uses may even be interesting (as in fiction), but in the search for true statements that use 'ought' language they are not 'interesting', unless physicalism is false (which is a different discussion).

Does that make sense? I'll explain empathic metaethics in more detail later, but I hope we can get some clarity on this part right now.

Comment author: Wei_Dai 11 June 2011 08:16:04PM 3 points [-]

Maybe this is because I'm fairly confident of physicalism? Of course I'll change my mind if presented with enough evidence, but I'm not anticipating such a surprise.

First, I would call myself a radical platonist rather than a physicalist. (If all universes that exist mathematically also exist physically, perhaps it could be said that there is no difference between platonism and physicalism, but I think most people who call themselves physicalists would deny that premise.) So I think it's likely that everything "interesting" can be reduced to math, but given the history of philosophy I don't think I should be very confident in that. See my recent How To Be More Confident... That You're Wrong.

Comment author: lukeprog 12 June 2011 08:41:17AM 2 points [-]

Right, I'm pretty partial to Tegmark, too. So what I call physicalism is compatible with Tegmark. But could you perhaps give an example of what it would mean to reduce normative language to a logical-mathematical function - even a silly one?

Comment author: Wei_Dai 12 June 2011 10:15:34AM 2 points [-]

(It's late and I'm thinking up this example on the spot, so let me know if it doesn't make sense.)

Suppose I'm in a restaurant and I say to my dinner companion Bob, "I'm too tired to think tonight. You know me pretty well. What do you think I should order?" From the answer I get, I can infer (when I'm not so tired) a set of joint constraints on what Bob believes to be my preferences, what decision theory he applied on my behalf, and the outcome of his (possibly subconscious) computation. If there is little uncertainty about my preferences and the decision theory involved, then the information conveyed by "you should order X" in this context just reduces to a mathematical statement about (for example) what the arg max of a set of weighted averages is.

(I notice an interesting subtlety here. Even though what I infer from "you should order X" is (1) "according to Bob's computation, the arg max of ... is X", what Bob means by "you should order X" must be (2) "the arg max of ... is X", because if he means (1), then "you should order X" would be true even if Bob made an error in his computation.)
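The reduction claimed above can be made concrete with a minimal Python sketch. This is not from the original discussion; the menu items, preference attributes, and weights are all invented for illustration. The point is just that, once Bob's model of my preferences and his decision procedure are fixed, "you should order X" conveys only a mathematical fact: X is the arg max of a set of weighted averages.

```python
# Bob's model of my preferences: weights over dish attributes
# (all values here are hypothetical, chosen only to illustrate).
preference_weights = {"taste": 0.5, "health": 0.3, "price": 0.2}

# Bob's beliefs about how each dish scores on those attributes.
menu = {
    "soup":  {"taste": 0.6, "health": 0.9, "price": 0.8},
    "steak": {"taste": 0.9, "health": 0.4, "price": 0.3},
    "salad": {"taste": 0.5, "health": 1.0, "price": 0.7},
}

def weighted_average(scores, weights):
    # Expected desirability of a dish under Bob's model of my preferences.
    return sum(weights[a] * scores[a] for a in weights)

# "You should order X" reduces to: X is the arg max of these weighted averages.
recommendation = max(menu, key=lambda dish: weighted_average(menu[dish], preference_weights))
```

On these made-up numbers the computation picks out a single dish, and the content of "you should order it" is exhausted by the statement that it maximizes the weighted average.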

Comment author: lukeprog 12 June 2011 10:24:35AM 1 point [-]

Yeah, that's definitely compatible with what I'm talking about when I talk about reducing normative language to natural language (that is, to math/logic + physics).

Do you think any disagreement or confusion remains in this thread?

Comment author: Wei_Dai 29 June 2011 06:38:45PM 4 points [-]

Having thought more about these matters over the last couple of weeks, I've come to realize that my analysis in the grandparent comment is not very good, and also that I'm confused about the relationship between semantics (i.e., study of meaning) and reductionism.

First, I learned that it's important (and I failed) to distinguish between (A) the meaning of a sentence (in some context), (B) the set of inferences that can be drawn from it, and (C) what information the speaker intends to convey.

For example, suppose Alice says to Bob, "It's raining outside. You should wear your rainboots." The information that Alice really wants to convey by "it's raining outside" is that there are puddles on the ground. That, along with, for example, "it's probably not sunny" and "I will get wet if I don't use an umbrella", belongs to the set of inferences that can be drawn from the sentence. But clearly the meaning of "it's raining outside" is distinct from either of these. Similarly, the fact that Bob can infer that there are puddles on the ground from "you should wear your rainboots" does not show that "you should wear your rainboots" means "there are puddles on the ground".

Nor does it seem to make sense to say that "you should wear your rainboots" reduces to "there are puddles on the ground" (why should it, when clearly "it's raining outside" doesn't reduce that way?), which, by analogy, calls into question my claim in the grandparent comment that "you should order X" reduces to "the arg max of ... is X".

But I'm confused about what reductionism even means in the context of semantics. The Eliezer post that you linked to from Pluralistic Moral Reductionism defined "reductionism" as:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

But that appears to be a position about ontology, and it is not clear to me what implications it has for semantics, especially for the semantics of normative language. (I know you posted a reading list for reductionism, which I have not gone through except to skim the encyclopedia entry. Please let me know if the answer will be apparent once I do read them, or if there is a more specific reference you can point me to that will answer this immediate question.)

Comment author: lukeprog 29 June 2011 09:50:58PM *  1 point [-]

Excellent. We should totally be clarifying such things.

There are many things we might intend to communicate when we talk about the 'meaning' of a word or phrase or sentence. Let's consider some possible concepts of 'the meaning of a sentence', in the context of declarative sentences only:

(1) The 'meaning of a sentence' is what the speaker intended to assert, that assertion being captured by truth conditions the speaker would endorse when asked for them.

(2) The 'meaning of a sentence' is what the sentence asserts if the assertion is captured by truth conditions that are fixed by the sentence's syntax and the first definition of each word that is provided by the Oxford English Dictionary.

(3) The 'meaning of a sentence' is what the speaker intended to assert, that assertion being captured by truth conditions determined by a full analysis of the cognitive algorithms that produced the sentence (which are not accessible to the speaker).

There are several other possibilities, even just for declarative sentences.

I tried to make it clear that when doing austere metaethics, I was taking #1 to be the meaning of a declarative moral judgment (e.g. "Murder is wrong!"), at least when the speaker of such sentences intended them to be declarative (rather than intending them to be, say, merely emotive or in other ways 'non-cognitive').

The advantage of this is that we can actually answer (to some degree, in many cases) the question of what a moral judgment 'means' (in the austere metaethics sense), and thus evaluate whether it is true or untrue. After some questioning of the speaker, we might determine that meaning~1 of "Murder is wrong" in a particular case is actually "Murder is forbidden by Yahweh", in which case we can evaluate the speaker's sentence as untrue given its truth conditions (given its meaning~1).

But we may very well want to know instead what is 'right' or 'wrong' or 'good' or 'bad' when evaluating sentences that use those words using the third sense of 'the meaning of a sentence' listed above. Though my third sense of meaning above is left a bit vague for now, that's roughly what I'll be doing when I start talking about empathic metaethics.

Will Sawin has been talking about the 'meaning' of 'ought' sentences in a fourth sense of the word 'meaning' that is related to but not identical to meaning~3 I gave above. I might interpret Will as saying that:

The meaning~4 of 'ought' in a declarative ought-sentence is determined by the cognitive algorithms that process 'ought' reasoning in a distinctive cognitive module devoted to that task, which include neither normative primitives nor reference to physical phenomena, but only relate normative concepts to each other.

I am not going to do a thousand years of conceptual analysis on the English word-tool 'meaning.' I'm not going to survey which definition of 'meaning' is consistent with the greatest number of our intuitions about its meaning given a certain set of hypothetical scenarios in which we might use the term. Instead, I'm going to taboo 'meaning' so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine. If there's an objection to this, I'll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool 'meaning' to transfer ideas between brains.

In discussing austere metaethics, I'm considering the 'meaning' of declarative moral judgment sentences as meaning~1. In discussing empathic metaethics, I'm considering the 'meaning' of declarative moral judgment sentences as (something like) meaning~3. I'm also happy to have additional discussions about 'ought' when considering the meaning of 'ought' as meaning~4, though the empirical assumptions underlying meaning~4 might turn out to be false. We could discuss 'meaning' as meaning~2, too, but I'm personally not that interested to do so.

Before I talk about reductionism, does this comment about meaning make sense?

Comment author: Wei_Dai 30 June 2011 12:09:29AM 3 points [-]

As I indicated in a recent comment, I don't really see the point of austere metaethics. Meaning~1 just doesn't seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning, as in your example when someone thinks that by "Murder is wrong" they are asserting "Murder is forbidden by Yahweh".

Empathic metaethics is much more interesting, of course, but I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like "murder is wrong" we will be able to obtain a list of truth conditions. For example if I examine the algorithms behind an Eliza bot that sometimes says "murder is wrong" I'm certainly not going to obtain a list of truth conditions. It seems clear that information/beliefs about math and physics definitely influence the production of normative sentences in humans, but it's much less clear that those sentences can be said to assert facts about math and physics.

Instead, I'm going to taboo 'meaning' so that I can use the word along with others to transfer ideas from my head into the heads of others, and take ideas from their heads into mine.

Can you show me an example of such idea transfer? (Depending on what ideas you want to transfer, perhaps you do not need to "fully" solve metaethics, in which case our interests might diverge at some point.)

If there's an objection to this, I'll be tempted to invent a new word-tool that I can use in the circumstances where I currently want to use the word-tool 'meaning' to transfer ideas between brains.

This is probably a good idea. (Nesov previously made a general suggestion along those lines.)

Comment author: lukeprog 30 June 2011 12:33:38AM *  1 point [-]

I don't really see the point of austere metaethics. Meaning~1 just doesn't seem that interesting, given that meaning~1 is not likely to be closely related to actual meaning

What do you mean by 'actual meaning'?

The point of pluralistic moral reductionism (austere metaethics) is to resolve lots of confused debates in metaethics that arise from doing metaethics (implicitly or explicitly) in the context of traditional conceptual analysis. It's clearing away the dust and confusion from such debates so that we can move on to figure out what I think is more important: empathic metaethics.

I do not understand why you seem to assume that if we delve into the cognitive algorithms that produce a sentence like "murder is wrong" we will be able to obtain a list of truth conditions

I don't assume this. Whether this can be done is an open research question.

Can you show me an example of such idea transfer?

My entire post 'Pluralistic Moral Reductionism' is an example of such idea transfer. First I specified that one way we can talk about morality is to stipulate what we mean by terms like 'morally good', so as to resolve debates about morality in the same way that we resolve a hypothetical debate about 'sound' by stipulating our definitions of 'sound.' Then I worked through the implications of that approach to metaethics, and suggested toward the end that it wasn't the only approach to metaethics, and that we'll explore empathic metaethics in a later post.

Comment author: Wei_Dai 13 June 2011 03:59:12PM 3 points [-]

I'm not sure if we totally agree, but if there is any disagreement left in this thread, I don't think it's substantial enough to keep discussing at this point. I'd rather that we move on to talking about how you propose to do empathic metaethics.

BTW, I'd like to give another example that shows the difficulty of reducing (some usages of) normative language to math/physics.

Suppose I'm facing Newcomb's problem, and I say to my friend Bob, "I'm confused. What should I do?" Bob happens to be a causal decision theorist, so he says "You should two-box." It's clear that Bob cannot just mean "the arg max of ... is 'two-box'" (where ... is the formula given by CDT), since presumably "you should two-box" is false and "the arg max of ... is 'two-box'" is true. Instead he probably means something like "CDT is the correct decision theory, and the arg max of ... is 'two-box'", but how do we reduce the first part of this sentence to physics/math?
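The "arg max of ... under CDT" part of this example can be sketched in a few lines of Python. This is an invented toy model, not anything from the thread: standard Newcomb payoffs (box A holds $1,000; box B holds $1,000,000 iff the predictor foresaw one-boxing), and a CDT-style evaluation in which the probability that box B is full is treated as causally fixed, independent of the action chosen now.

```python
A, B = 1_000, 1_000_000  # box A payoff; box B payoff if full

def cdt_value(action, prob_b_full):
    # CDT: the predictor's move is already made, so prob_b_full does
    # not vary with the action under consideration.
    expected_b = prob_b_full * B
    return expected_b + (A if action == "two-box" else 0)

def cdt_recommendation(prob_b_full=0.5):
    # "You should two-box" as "the arg max of the CDT formula is 'two-box'".
    return max(["one-box", "two-box"], key=lambda a: cdt_value(a, prob_b_full))
```

Because taking box A adds a fixed $1,000 whatever the contents of box B, two-boxing dominates under this formula for every value of prob_b_full, which is exactly why the mathematical statement "the arg max of ... is 'two-box'" comes out true even if "you should two-box" is false.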

Comment author: lukeprog 13 June 2011 04:39:21PM 1 point [-]

I'm not saying that reducing to physics/math is easy. Even ought language stipulated to refer to, say, the well-being of conscious creatures is pretty hard to reduce. We just don't have that understanding yet. But it sure seems to be pointing to things that are computed by physics. We just don't know the details.

I'm just trying to say that if I'm right about reductionism, and somebody uses ought language in a way that isn't likely to reduce to physics/math, then their ought language isn't likely to refer successfully.

We can hold off the rest of the dialogue until after another post or two; I appreciate your help so far. As a result of my dialogue with you, Sawin, and Nesov, I'm going to rewrite the is-ought part of 'Pluralistic Moral Reductionism' for clarity.

Comment author: Will_Sawin 13 June 2011 03:28:55PM 0 points [-]

For example, I could use a variant of CEV (call it Coherent Extrapolated Pi Estimation) to answer "What is the trillionth digit of pi?", but that doesn't imply that by "the trillionth digit of pi" I actually mean "the output of CEPE".

(I notice an interesting subtlety here. Even though what I infer from "you should order X" is (1) "according to Bob's computation, the arg max of ... is X", what Bob means by "you should order X" must be (2) "the arg max of ... is X", because if he means (1), then "you should order X" would be true even if Bob made an error in his computation.)

Do you accept the conclusion I draw from my version of this argument?

Comment author: Wei_Dai 13 June 2011 06:48:00PM 0 points [-]

I agree with you up to this part:

But this is certainly not the definition of water! Imagine if Bob used this criterion to evaluate what was and was not water. He would suffer from an infinite regress. The definition of water is something else. The statement "This is water" reduces to a set of facts about this, not a set of facts about this and Bob's head.

I made the same argument (perhaps not very clearly) at http://lesswrong.com/lw/44i/another_argument_against_eliezers_metaethics/

But I'm confused by the rest of your argument, and don't understand what conclusion you're trying to draw apart from "CEV can't be the definition of morality". For example you say:

Well, why does it have a long definition? It has a long definition because that's what we believe is important.

I don't understand why believing something to be important implies that it has a long definition.

Comment author: Will_Sawin 13 June 2011 06:55:07PM 1 point [-]

Ah. So this is what I am saying.

If you say "I define should as [Eliezers long list of human values]"

then I say: "That's a long definition. How did you pick that definition?"

and you say: "Well, I took whatever I thought was morally important, and put it into the definition."

In the part you quote I am arguing that (or at least claiming that) other responses to my query are wrong.

I would then continue:

"Using the long definition is obscuring what you really mean when you say 'should'. You really mean 'what's important', not [the long list of things I think are important]. So why not just define it as that?"

Comment author: Vladimir_Nesov 13 June 2011 08:33:41PM *  2 points [-]

One more way to describe this idea. I ask, "What is morality?", and you say, "I don't know, but I use this brain thing here to figure out facts about it; it errs sometimes, but can provide limited guidance. Why do I believe this "brain" is talking about morality? It says it does, and it doesn't know of a better tool for that purpose presently available. By the way, it's reporting that <long list of conditions> are morally relevant, and is probably right."

Comment author: Wei_Dai 14 June 2011 06:30:03PM 0 points [-]

By the way, it's reporting that <long list of conditions> are morally relevant, and is probably right.

Where do you get "is probably right" from? I don't think you can get that if you take an outside view and consider how often a human brain is right when it reports on philosophical matters in a similar state of confusion...

Comment author: Will_Sawin 13 June 2011 10:17:53PM 0 points [-]

Beautiful. I would draw more attention to the "Why...? It says it does" bit, but that seems right.

Comment author: Vladimir_Nesov 11 June 2011 07:28:10PM 2 points [-]

Maybe this is because I'm fairly confident of physicalism? Of course I'll change my mind if presented with enough evidence, but I'm not anticipating such a surprise.

You'd need the FAI able to change its mind as well, which requires that you retain this option in its epistemology. To attack the communication issue from a different angle, could you give examples of the kinds of facts you deny? (Don't say "god" or "magic", give a concrete example.)

Comment author: lukeprog 12 June 2011 08:31:58AM *  0 points [-]

Yes, we need the FAI to be able to change its mind about physicalism.

I don't think I've ever been clear about what people mean to assert when they talk about things that don't reduce to physics/math.

Rather, people describe something non-natural or supernatural and I think, "Yeah, that just sounds confused." Specific examples of things I deny because of my physicalism are Moore's non-natural goods and Chalmers' conception of consciousness.

Comment author: Peterdjones 12 June 2011 07:50:34PM *  2 points [-]

I don't think I've ever been clear about what people mean to assert when they talk about things that don't reduce to physics/math.

Since you can't actually reduce[*] 99.99% of your vocabulary, you're either so confused you couldn't possibly think or communicate... or you're only confused about the nature of confusion.

[*] Try reducing "shopping" to quarks, electrons, and photons. You can't do it, and if you could, it would tell you nothing useful. Yet there is nothing involved that is not made of quarks, electrons, and photons.

Comment author: Vladimir_Nesov 12 June 2011 11:04:19AM 0 points [-]

Specific examples of things I deny because of my physicalism are Moore's non-natural goods and Chalmers' conception of consciousness.

Not much better than "magic", doesn't help.

Comment author: lukeprog 12 June 2011 11:05:38AM 0 points [-]

Is this because you're not familiar with Moore on non-natural goods and Chalmers on consciousness, or because you agree with me that those ideas are just confused?

Comment author: Vladimir_Nesov 12 June 2011 11:30:02AM 1 point [-]

They are not precise enough to carefully examine. I can understand the distinction between a crumbling bridge and 3^^^^3>3^^^3, it's much less clear what kind of thing "Chalmers' view on consciousness" is. I guess I could say that I don't see these things as facts at all unless I understand them, and some things are too confusing to expect understanding them (my superpower is to remain confused by things I haven't properly understood!).

(To compare, a lot of trouble with words is incorrectly assuming that they mean the same thing in different contexts, and then trying to answer questions about their meaning. But they might lack a fixed meaning, or any meaning at all. So the first step before trying to figure out whether something is true is understanding what is meant by that something.)

Comment author: Peterdjones 12 June 2011 07:56:11PM *  0 points [-]

They are not precise enough to carefully examine.

How are you on dark matter?

(No new idea is going to be precise, because precise definitions come from established theories, and established theories come from speculative theories, and speculative theories are theories about something that is defined relatively vaguely. The Oxygen theory of combustion was a theory about "how burning works"-- it was not, circularly, the Oxygen theory of Oxidisation).

Comment author: Peterdjones 12 June 2011 08:11:28PM 1 point [-]

Dude, you really need to start distinguishing between reducible-in-principle, usefully-reducible, and doesn't-need-reducing.