lukeprog comments on A Defense of Naive Metaethics - Less Wrong

Post author: Will_Sawin 09 June 2011 05:46PM


Comment author: lukeprog 14 June 2011 06:19:51PM 0 points

You are proposing that for each meaningful use of moral language, one such function must be correct by definition

Not what I meant to propose. I don't agree with that.

you can just make statements in moral language which do not correspond to any statements in physical language.

Of course you can. People do it all the time. But if you're a physicalist (by which I mean to include Tegmarkian radical platonists), then those statements fail to successfully refer. That's all I'm saying.

Comment author: Will_Sawin 14 June 2011 06:23:10PM 2 points

I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.

Comment author: lukeprog 14 June 2011 06:49:35PM 0 points

I am standing up for the usefulness and well-definedness of statements that fail to successfully refer.

Okay, we're getting nearer to understanding each other, thanks. :)

Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you're coming from.

Elsewhere, you said:

The problem is that the word "ought" has multiple definitions. You are observing that all the other definitions of ought are physically reducible. That puts them on the "is" side. But now there is a gap between hypothetical-ought-statements and categorical-ought-statements, and it's just the same size as before. You can reduce the word "ought" in the following sentence: "If 'ought' means 'popcorn', then I am eating ought right now." It doesn't help.

Goodness, no. I'm not arguing that all translations of 'ought' are equally useful as long as they successfully refer!

But now you're talking about something different than the is-ought gap. You're talking about a gap between "hypothetical-ought-statements and categorical-ought-statements." Could you describe the gap, please? 'Categorical ought' in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.

I genuinely appreciate you sticking this out with me. I know it's taking time for us to understand each other, but I expect serious fruit to come of mutual understanding.

Comment author: Will_Sawin 14 June 2011 06:56:59PM 0 points

Perhaps you could give an example of a non-normative statement that is well-defined and useful even though it fails to refer? Perhaps then I can grok better where you're coming from.

I don't think any exist, so I could not do so.

Goodness, no. I'm not arguing that all translations of 'ought' are equally useful as long as they successfully refer!

I'm saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.

Could you describe the gap, please? 'Categorical ought' in particular leaves me with uncertainty about what you mean, because that term is used in a wide variety of ways by philosophers, many of them incoherent.

Hypothetical-ought statements are a certain kind of statement about the physical world. They're the kind that contain the word "ought", but they're just an arbitrary subset of the "is"-statements.

Categorical-ought statements are statements of support for a preference order (not statements about support).

Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.

Comment author: Vladimir_Nesov 15 June 2011 12:03:36AM 1 point

Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.

(Physical facts can inform you about what the right preference order is, if you expect that they are related to the moral facts.)

Comment author: Will_Sawin 15 June 2011 12:18:32AM 0 points

Perhaps the right thing to say is "No fact can alone imply a preference order."

Comment author: Vladimir_Nesov 15 June 2011 12:22:14AM 1 point

But no fact alone can imply anything (in this sense); the point is not specific to moral values, and in any case it is a trivial, uninteresting point that is easily confused with a refutation of the statement I noted in the grandparent.

Comment author: torekp 15 June 2011 01:33:01AM 3 points

No fact alone can imply anything: true and important. For example, a description of my brain at the neuronal level does not imply that I'm awake. To get the implication, we need to add a definition (or at least some rule) of "awake" in neuronal terms. And this definition will not capture the meaning of "awake." We could ask, "given that a brain is <insert neuronal definition here>, is it awake?" and intuition will tell us that it is an open question.

But that is beside the point, if what we want to know is whether the definition succeeds. The definition does not have to capture the meaning of "awake". It only needs to get the reference correct.

Reduction doesn't typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?

Comment author: Wei_Dai 15 June 2011 04:33:17AM 0 points

Reduction doesn't typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?

Great question. It seems to me that normative ethics involves reducing the term "moral" without necessarily capturing the meaning, whereas metaethics involves capturing the meaning of the term. And the reason we want to capture the meaning is so that we know what it means to do normative ethics correctly (instead of just doing it by intuition, as we do now). It would also allow an AI to perform normative ethics (i.e., reduce "moral") for us, instead of humans reducing the term and programming a specific normative ethical theory into the AI.

Comment author: torekp 16 June 2011 01:25:53AM 0 points

I doubt that metaethics can wholly capture the meaning of ethical terms, but I don't see that as a problem. It can still shed light on issues of epistemics, ontology, semantics, etc. And if you want help from an AI, any reduction that gets the reference correct will do, regardless of whether meaning is captured. A reduction need not be a full-blown normative ethical theory. It just needs to imply one, when combined with other truths.

Comment author: Vladimir_Nesov 15 June 2011 01:50:28AM 0 points

(I agree with your comment.)

Reduction doesn't typically involve capturing the meaning of the reduced terms. Is the (meta)ethical case special? If so, why and how?

A formal logical definition often won't capture the full meaning of a mathematical structure (there may be non-standard models of the logical theory, and true statements it won't infer), yet it has the special power of allowing you to correctly infer lots of facts about that structure without knowing anything else about the intended meaning. If we are given just a little bit less, then the power to infer stuff gets reduced dramatically.

It's important to get a definition of morality in a similar sense and for similar reasons: it won't capture the whole thing, yet it must be good enough to generate right actions even in currently unimaginable contexts.

Comment author: Wei_Dai 16 June 2011 05:15:25PM 4 points

Formal logic does seem very powerful, yet incomplete. Would you be willing to create an AI with such limited understanding of math or morality (assuming we can formalize an understanding of morality on par with math), given that it could well obtain supervisory power over humanity? One might justify it by arguing that it's better than the alternative of trying to achieve and capture fuller understanding, which would involve further delay and risk. See for example Tim Freeman's argument in this line, or my own.

Another alternative is to build an upload-based FAI instead, like Stuart Armstrong's recent proposal. That is, use uploads as components in a larger system, with lots of safety checks. In a way Eliezer's FAI ideas can also be seen as heavily upload based, since CEV can be interpreted (as you did before) as uploads with safety checks. (So the question I'm asking can be phrased as, instead of just punting normative ethics to CEV, why not punt all of meta-math, decision theory, meta-ethics, etc., to a CEV-like construct?)

Of course you're probably just as unsure of these issues as I am, but I'm curious what your current thoughts are.

Comment author: Will_Sawin 15 June 2011 12:41:36AM 0 points

Agreed; however, it is somewhat useful in pointing out a specific, common type of bad argument.

Comment author: lukeprog 14 June 2011 07:09:22PM 0 points

I don't think any exist, so I could not do so.

Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?

I'm saying that the fact that you can use a word to have a meaning in class X does not provide much evidence that the other uses of that word have a meaning in class X.

Agreed.

Categorical-ought statements are statements of support for a preference order (not statements about support).

What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in "I support preference-ordering X", as opposed to a statement about support as in "preference-ordering X is 'good' if 'good' is defined as 'maximizes Y'"?

Since no fact can imply a preference order, no is-statement can imply a categorical-ought-statement.

What do you mean by 'preference order' such that no fact can imply a preference order? I'm thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...

Comment author: Will_Sawin 14 June 2011 08:02:25PM 1 point

Okay, so you think that the only class of statements that are well-defined and useful but fail to refer is the class of normative statements? Why are they special in this regard?

Because a positive ("is") statement + a normative ("ought") statement is enough information to determine an action, and once actions are determined you don't need further information.

"information" may not be the right word.

What do you mean by this? Do you mean that a categorical-ought statement is a statement of support as in "I support preference-ordering X", as opposed to a statement about support as in "preference-ordering X is 'good' if 'good' is defined as 'maximizes Y'"?

I believe "I ought to do X" if and only if I support preference-ordering X.

What do you mean by 'preference order' such that no fact can imply a preference order? I'm thinking of a preference order as a brain state, including parts of the preference ordering that are extrapolated from that brain state. Surely physical facts about that brain state and extrapolations from it imply (or entail, or whatever) the preference order...

I'm thinking of a preference order as just that: a map from the set of {states of the world} x {states of the world} to the set {>, =, <}. The brain state encodes a preference order but it does not constitute a preference order.

I believe "this preference order is correct" if and only if there is an encoding in my brain of this preference order.

Much like how:

I believe "this fact is true" if and only if there is an encoding in my brain of this fact.
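The distinction drawn above can be sketched in code (all names here are illustrative, not anything from the discussion): a preference order as a pure comparison function on pairs of world-states, versus a "brain state", modeled purely hypothetically as a utility table, which is not itself a preference order but encodes one.

```python
from typing import Callable

# A preference order in the sense above: a map from pairs of world-states
# to one of {>, =, <}, encoded here as 1, 0, -1.
WorldState = str
PreferenceOrder = Callable[[WorldState, WorldState], int]

# A hypothetical "brain state": a utility table. It does not constitute a
# preference order, but one can be read off from it by comparison.
utility_table = {"world_a": 2.0, "world_b": 5.0, "world_c": 5.0}

def encoded_order(x: WorldState, y: WorldState) -> int:
    """Return 1 if x is preferred to y, 0 if indifferent, -1 if y is preferred."""
    ux, uy = utility_table[x], utility_table[y]
    return (ux > uy) - (ux < uy)

print(encoded_order("world_b", "world_a"))  # 1: world_b preferred
print(encoded_order("world_b", "world_c"))  # 0: indifferent
```

The same encoded order could equally be stored as a table of pairwise comparisons; the point is only that the abstract map and its physical encoding are different things.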

Comment author: lukeprog 21 June 2011 07:14:32PM 0 points

I've continued our dialogue here.

Comment author: Vladimir_Nesov 19 June 2011 11:31:30PM 0 points

I believe "this fact is true" if and only if there is an encoding in my brain of this fact.

What if it's encoded outside your brain, in a calculator for example, while your brain only knows that the calculator shows the indication "28" on its display iff the fact is true? Or, say, I know that my computer contains a copy of "Understand" by Ted Chiang, even though I don't remember its complete text. Finally, some parts of my brain don't know what other parts of my brain know. The brain doesn't hold a privileged position with respect to where the data must be encoded in order to be referred to; it can just as easily point elsewhere.

Comment author: Will_Sawin 20 June 2011 05:22:58AM 1 point

Well if I see the screen then there's an encoding of "28" in my brain. Not of the reason why 28 is true, but at least that the answer is "28".

You believe that "the computer contains a copy of Understand", not "the computer contains a book with the following text: [text of Understand]".

Obviously, on the level of detail in which the notion of "belief" starts breaking down, the notion of "belief" starts breaking down.

But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.

Comment author: Vladimir_Nesov 22 June 2011 10:21:04PM 0 points

but at least that the answer is "28".

Yet you might not know the question. "28" only certifies that the question makes a true statement.

You believe that "the computer contains a copy of Understand", not "the computer contains a book with the following text: [text of Understand]".

Exactly. You don't know [text of Understand], yet you can reason about it and use it in your designs. You can copy it elsewhere, and you'll know that it's the same thing somewhere else, all without having any explicit definition of the text, only diverse intuitions describing its various aspects and tools for performing operations on it. You can get an md5 sum of the text, for example, and make a decision depending on its value, and you can rely on the fact that this is an md5 sum of exactly the text of "Understand" and nothing else, even though you don't know what the text of "Understand" is.
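The md5 point can be made concrete with a short sketch (the byte string below is a hypothetical stand-in; the actual text of "Understand" is deliberately not reproduced): a digest lets you make decisions that depend on exactly one text without ever inspecting that text's contents.

```python
import hashlib

# A stand-in blob of bytes that we treat as opaque.
opaque_text = b"some long document whose contents we never inspect directly"

# A digest that refers to exactly these bytes and nothing else.
known_digest = hashlib.md5(opaque_text).hexdigest()

def is_same_text(candidate: bytes, digest: str) -> bool:
    """Decide whether candidate is the very same text, without "knowing" the text."""
    return hashlib.md5(candidate).hexdigest() == digest

print(is_same_text(opaque_text, known_digest))          # True
print(is_same_text(b"a different text", known_digest))  # False
```

The decision procedure never contains the text itself, only a handle that reliably picks it out, which is the sense of reference at issue here.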

But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.

This sort of deep wisdom needs to be the enemy (it strikes me often enough). It acts as a curiosity-stopper, covering up the difficulty of understanding things more accurately. (What's "just a statement"?)

Comment author: Will_Sawin 23 June 2011 01:42:51AM 1 point

This sort of deep wisdom needs to be the enemy (it strikes me often enough). It acts as a curiosity-stopper, covering up the difficulty of understanding things more accurately. (What's "just a statement"?)

In certain AI designs, this problem is trivial. In humans, this problem is not simple.

The complexities of the human version of this problem do not have relevance to anything in this overarching discussion (that I am aware of).

Comment author: Peterdjones 22 June 2011 11:04:29PM 0 points

But still, it remains: when we say that I know a fact, the statement of that fact is encoded in my brain. Not the referent, not an argument for that statement, just: a statement.

So you say. Many would say that you need the argument (proof, justification, evidence) for a true belief for it to qualify as knowledge.

Comment author: Will_Sawin 23 June 2011 01:33:37AM 1 point

Obviously, this doesn't prevent me from saying that I know something without an argument.

Comment author: lukeprog 16 June 2011 11:48:12PM 0 points

Been really busy, will respond to this in about a week. I want to read your earlier discussion post, first, too.

Comment author: Vladimir_Nesov 15 June 2011 12:10:16AM 0 points

I believe "this preference order is correct" if and only if there is an encoding in my brain of this preference order.

Encodings are relative to interpretations. Something has to decide that a particular fact encodes a particular other fact. And brains don't have a fundamental role here, even if they might contain most of the available moral information, if you know how to get it.

The way in which decisions are judged to be right or wrong based on moral facts and facts about the world (both partly inferred with use of empirical observations) doesn't fundamentally distinguish the moral facts from the facts about the world, so it's unclear how to draw a natural boundary that excludes non-moral facts without also excluding moral facts.

Comment author: Will_Sawin 15 June 2011 12:18:03AM 0 points

My ideas work unless it's impossible to draw the other kind of boundary, including only facts about the world and not moral facts.

Is it? If it's impossible, why?

Comment author: Vladimir_Nesov 19 June 2011 11:34:02PM 0 points

My ideas work unless it's impossible to draw the other kind of boundary, including only facts about the world and not moral facts.

It's the same boundary, just the other side. If you can learn of moral facts by observing things, if your knowledge refers to a joint description of moral and physical facts (with the state of your brain, say, as the physical counterpart), and so your understanding of moral facts benefits from better knowledge and further observation of physical facts, then you shouldn't draw this boundary.

Comment author: Will_Sawin 20 June 2011 05:20:21AM 0 points

There is an asymmetry. We can only make physical observations, not moral observations.

This means that every state of knowledge about moral and physical facts maps to a state of knowledge about just physical facts, and the evolution of the second is determined only by evidence, with no reference to moral facts.