Will_Sawin comments on A Defense of Naive Metaethics - Less Wrong

8 Post author: Will_Sawin 09 June 2011 05:46PM




Comment author: Will_Sawin 10 June 2011 08:11:54PM 1 point [-]

You are correct in your first paragraph, I oversimplified.

Comment author: p4wnc6 13 June 2011 04:46:55AM *  -2 points [-]

I think this addresses this topic very well. The first-person experience of belief is one and the same with fact-assertion. 'I ought to do X' refers to a 4-tuple of actions, outcomes, a utility function, and a conditional probability function.
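The 4-tuple reading above can be sketched as a tiny expected-utility calculation. This is a hedged illustration only: the action names, outcome names, utilities, and probabilities are all hypothetical, not anything from the original discussion.

```python
# Sketch of the 4-tuple (actions, outcomes, utility, P(outcome|action)):
# "I ought to do X" is read as "X maximizes expected utility under this tuple."
actions = ["keep_promise", "break_promise"]
outcomes = ["trust_kept", "trust_lost"]

def utility(outcome):
    # Illustrative utility function over outcomes.
    return {"trust_kept": 1.0, "trust_lost": -1.0}[outcome]

def prob(outcome, action):
    # Illustrative conditional probability P(outcome | action).
    table = {
        ("trust_kept", "keep_promise"): 0.9,
        ("trust_lost", "keep_promise"): 0.1,
        ("trust_kept", "break_promise"): 0.2,
        ("trust_lost", "break_promise"): 0.8,
    }
    return table[(outcome, action)]

def expected_utility(action):
    return sum(prob(o, action) * utility(o) for o in outcomes)

# The "ought" of the 4-tuple: the expected-utility-maximizing action.
best = max(actions, key=expected_utility)
```

On this reading, "I ought to keep promises" is just a compressed report of the tuple plus the maximization.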

Regarding your question about a murderer who, prior to and immediately after committing murder, attests to believing that murder is wrong: I would say it is a mistake to bring their sanity into it. You can't decide that question without debating what is meant by 'sane'. How a person's preference ordering and resulting actions look from the outside does not necessarily reveal whether the person failed to behave rationally, according to their own utility function, on the inside. If I choose to label them 'insane' for seeming to violate their own belief, this is just a verbal convention about how I label such third-person viewings of that occurrence. Really, though, their preference ordering might have been temporarily suspended by judgment clouded with rage or emotion. Or they might not be telling the full truth about their preference ordering, and may not even be aware of some aspects of it.

The point is that beliefs are always statements of physics. If I say, "murder is wrong", I am referring to some quantified subset of states of matter and their consequences. If I say, "I believe murder is wrong", I am telling you that I assert that "murder is wrong" is true, which is a statement about my brain's chemistry.

Comment author: Will_Sawin 13 June 2011 11:15:01AM 1 point [-]

The point is that beliefs are always statements of physics

Everyone keeps saying that, but they never give convincing arguments for it.

Comment author: lukeprog 24 June 2011 05:58:25PM 1 point [-]

The point is that beliefs are always statements of physics

I also disagree with this.

Comment author: p4wnc6 13 June 2011 05:17:27PM *  -1 points [-]

If I say, "murder is wrong", I am referring to some quantified subset of states of matter and their consequences. If I say, "I believe murder is wrong", I am telling you that I assert that "murder is wrong" is true, which is a statement about my brain's chemistry.

Pardon me, but I believe the burden of proof here is on you to supply something non-physical that's being specified, and then to produce evidence that this is the case. If the thing you're talking about is supposed to be outside of a magisterium of evidence, then I fail to see how your claim is any different from the claim that we are zombies.

At a coarse scale, we're both asking about the evidence we observe, which is the first-person experience of assertions about beliefs. Over models that can explain this phenomenon, I am attempting to select the one with minimum message length, since a computer program that produces the experience of beliefs out of physical material can have some non-zero probability attached to it through evidence. How are we to assign probability to the explanation that beliefs do not point to things that physically exist? Is that claim falsifiable? Are there experiments we can do that depend on the result? If not, then the burden of proof is squarely on you to present a convincing case for why the same-old, same-old punting to complicated physics is not good enough. If it's not good enough for you and you insist on going further, that's fine. But physics is good enough for me here, and that's not a cop-out or an unjustified conclusion in the slightest.

Comment author: Will_Sawin 13 June 2011 05:50:02PM 1 point [-]

Suppose I say "X is red".

That indicates something physical - it indicates that I believe X is red

but it means something different, and also physical - it means that X is red

Now suppose I say "X is wrong"

That indicates something physical - it indicates that I believe X is wrong

using the same-old, same-old principle, we conclude that it means something different.

but there is nothing else physical that we could plausibly say it means.

Comment author: p4wnc6 13 June 2011 06:26:12PM -2 points [-]

but there is nothing else physical that we could plausibly say it means.

Why do you say this? Flesh out the definition of 'wrong' and you're done. 'Wrong' refers to arrangements of matter and their consequences. It doesn't attempt to refer to intrinsic properties of objects that exist apart from their physicality. If (cognitive object X) is (attribute Y), this just means that (arrangements of matter that correspond to what I give the label X) have (physical properties that I group together under the heading Y). It doesn't matter if you're saying "freedom is good" or "murder is wrong" or "that sign is red". 'Freedom' refers to arrangements of matter and the physical laws governing them. 'Good' refers to local physical descriptions of the ways that things can yield fortunate outcomes, where 'fortunate outcomes' can be further chased down to their physical meaning, etc.

"X is wrong" unpacks to statements about the time evolution of physical systems. You can't simply say

there is nothing else physical that we could plausibly say it means.

Have you gone and checked every possible physical thing? Have you done experiments showing that making correspondences between cognitive objects and physical arrangements of matter somehow "fails" to capture its "meaning"?

This seems to me to be one of those times where you need to ask yourself: is it really the case that cognitive objects are not just linguistic devices for labeling arrangements of matter and the laws governing that matter... or do I just think that's the case?

Comment author: Will_Sawin 13 June 2011 06:49:55PM 1 point [-]

Have you gone and checked every possible physical thing?

Your whole argument rests on this, since you have not provided a counterexample to my claim. You've just repeated the fact that there is some physical referent, over and over.

This is not how burden of proof works! It would be simply impossible for me to check every possible physical thing. Is it, therefore, impossible for you to be convinced that I am right?

I expect better from lesswrong posters.

Comment author: p4wnc6 13 June 2011 07:36:19PM *  0 points [-]

Is it, therefore, impossible for you to be convinced that I am right?

This is what it means for a claim to fail falsifiability. It's easy to generate claims whose proof would only be constituted by fact-checking against every physical thing. This is a far cry from a decision-theoretic claim where, though we can't have perfect evidence, we can make useful quantifications of the evidence we do have and our uncertainty about it.

The empty set has many interesting properties.

It's impossible to quantify your claim without having all of the evidence up front.

You've just repeated the fact that there is some physical referent, over and over.

What I'm trying to say is that I can test the hypothesis of whether or not there is a physical referent. If someone says to me, "Is there or isn't there a physical referent?" and I have to respond, then I have to do so on the strength of evidence alone. I may not be able to provide a referent explicitly, but I know that non-zero probability can be assigned to a physical system in which cognitive objects are placeholders for complicated sets of matter and governing laws of physics. I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents, and therefore, whether or not I have explicit examples of referents, the hypothesis that there must be underlying physical referents wins hands down.

The criticism you're making of me, that I insist there are referents without supplying the actual referents, is physically backwards in this case. For example, someone might say "consciousness is a process that does not correspond to any physically existing thing." If I then reply,

"But consciousness is a property of material and varies directly with changes in that material (or some similar, more detailed argument about cognition), and therefore, I can assign non-zero probability to its being a physical computation, and since I do not have the capacity to assign probabilities to non-physical entities, the hypothesis that consciousness is physical wins."

this is a convincing argument, up to the quantification of the evidence. If you personally don't feel like it's convincing, that's fine, but then you're outside of decision theory and the claim you're making contains literally no semantic information.

The same can be said of the referent of a belief. I think you're failing to appreciate that you're making the very mistake you're claiming that I am making. You're just asserting that beliefs can't plausibly correspond to physically existing things. That's just an assertion. It might be a good assertion or might not even be a coherent assertion. In order to check, I am going to go draft up some sort of probability model that relates that claim to what I know about thoughts and beliefs. Oh, snap, when I do that, I run into the unfortunate wall that if beliefs don't have physical referents, then talking about their referents at all suddenly has no physical meaning. Therefore, I will stick with my hypothesis that they are physical, pending explicit evidence that they aren't.

The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot. The burden of proof, as I view it in this situation, is on making a non-zero probability connection between beliefs and some type of referent. I don't see anything in your argument that prevents this connection being made to physical things. I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.

Maybe it's better to think of it like an argument from non-cognitivism. You're trying to make up a solution space for the problem (non-physical referents) that is incompatible with the whole system in which the problem takes place (physics). Until you make an explicit physical definition of what a "non-physical referent" actually is, then I will not entertain it as a possible hypothesis.

Ultimately, even though your epistemology is more complicated, your argument might as well be: beliefs are pointers to magical unicorns outside of space and time, and these magical unicorns are what determine human values. 'Non-physical referents' simply are not. I can't assign a probability to the existence of something which is itself hypothesized to fail to exist, since existence and "being a physical part of reality" are one and the same thing.

Comment author: Will_Sawin 13 June 2011 08:01:22PM 0 points [-]

It's easy to generate claims whose proof would only be constituted by fact-checking against every physical thing

That's the good kind of claim, the falsifiable kind, like the Law of Universal Gravitation. That's the kind of claim I'm making.

It's impossible to quantify your claim without having all of the evidence up front.

Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.

This, obviously, is circular.

Do you acknowledge that your reasoning is circular and defend it, presumably with Eliezer's defense of circular reasoning? Or do you claim that it is not circular?

I cannot make the same claim about the hypothesis that cognitive objects do not have utterly physical referents

Sure you can. You take a world, find all the cognitive objects in it, then find all the corresponding physical referents, cross those objects off the list.

I am saying that there are beliefs (strings of symbols with meaning) endowed meaning by their place in a functional mind but for which the set of physical referents they correspond to is the empty set.

Surely you can admit the existence of strings of symbols without physical referents, like this one: "fj4892fjsoidfj390ds;j9d3". There's nothing non-physical about it.

The convincingness of the argument lies in the fact that one side of this can be made quantitative and experimentally relevant and the other cannot.

If "X" is quantitative and experimentally relevant, how could "not-X" be irrelevant? If X makes predictions, how could not-X not make the opposite predictions?

I do, however, fail to see any part of your argument that makes this probabilistic connection with non-physical referents.

Who said that all beliefs have referents?

Comment author: p4wnc6 13 June 2011 09:58:13PM *  0 points [-]

Sure you can. You take a world, find all the cognitive objects in it

My claim is that if one had really done this, then by the definition of "find", they have the physical referents for the cognitive objects. If a cognitive object has the empty set as its set of physical referents, then it is the null cognitive object. The string of symbols "fj4892fjsoidfj390ds;j9d3" might have no meaning to you when thinking in English, say, but that just means it is an instantiation of the null cognitive object: any string of symbols that fails to point to a physical referent.

I'm trying to say that if the cognitive object is to be considered as pointing to something, that is, it is in some sense not the null cognitive object, then the thing which is its referent is physical. It's incoherent to say that a string of symbols refers to something that's not physical. What do you mean by 'refer' in that setting? There is no existing thing to be referred to, hence the symbol does no action of referring. So when someone speaks about "X" being right or wrong, either they are speaking about physical events or else "X" fails to be a cognitive object.

I claim that my reasoning is not circular.

Your argument seems to depend on the idea that the only way to evaluate a claim is to list the physical universes in which it is true and the physical universes in which it is not true.

It depends on what you mean by 'evaluate'. The definition I'm working with right now is that if I want to assess whether proposition P is true, I can only do so in a setting of decision theory and degrees of evidence and uncertainty. This means I need a model for the proposition P and a way of assigning probabilities to the various hypotheses about P. In this case, P = "Some cognitive objects have referents that are not the null referent and are also not physical". I claim that all referents are either physical or else the null referent. The set of non-physical referents is empty.

Just because a string fails to have a physical referent does not mean that it succeeds in having a non-physical one. What evidence do I have that there exist non-physical referents? What model of cognitive objects exists with which it is possible to obtain experimental evidence of a non-physical referent?

I am saying that there are beliefs (strings of symbols with meaning) endowed meaning by their place in a functional mind but for which the set of physical referents they correspond to is the empty set.

What do you mean by 'endowed meaning'? If a cognitive object has no physical referent, to me, that is the definition of meaningless. It fails to correspond to reality.

Comment author: asr 13 June 2011 11:02:28PM 0 points [-]

The point is that beliefs are always statements of physics. If I say, "murder is wrong", I am referring to some quantified subset of states of matter and their consequences. If I say, "I believe murder is wrong", I am telling you that I assert that "murder is wrong" is true, which is a statement about my brain's chemistry.

Hm? It's easy to form beliefs about things that aren't physical. Suppose I tell you that the infinite cardinal aleph-1 is strictly larger than aleph-0. What's the physical referent of the claim?

I'm not making a claim about the messy physical neural structures in my head that correspond to those sets -- I'm making a claim about the nonphysical infinite sets.
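For precision, the cardinal claim being used as the example can be stated in one line; ℵ₁ is, by definition, the least cardinal strictly greater than ℵ₀ (this is true by definition alone, independent of the continuum hypothesis):

```latex
\aleph_0 = |\mathbb{N}|, \qquad
\aleph_1 = \min\{\, \kappa : \kappa > \aleph_0 \,\}, \qquad
\text{hence } \aleph_0 < \aleph_1 .
```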

Likewise, I can make all sorts of claims about fictional characters. Those aren't claims about the physical book, they're claims about its nonphysical implications.

Comment author: p4wnc6 13 June 2011 11:19:47PM 0 points [-]

Why do you think that nonphysical implications are ontologically existing things? I argue that what you're trying to get at by saying "nonphysical implications" is actually quantified subsets of matter. Ideas, however abstract, refer to arrangements of matter. The vision in your mind when you talk about aleph-1 is of a physically existing thing. When's the last time you imagined something that wasn't physical? A unicorn? You mean a horse with wings glued onto it? Mathematical objects represent states of knowledge, which are as physical as anything else. The color red refers to a particular band of light frequencies and the physical processes by which it is a common human experience. There is no idea of what red is apart from this. Red is something different to a blind man than it is to you, but by speaking about your physical referent, the blind man can construct his own useful physical referent.

Claims about fictional characters are no better. What do you mean by Bugs Bunny other than some arrangement of colors brought to your eyes by watching TV in the past? That's what Bugs Bunny is. There's no separately existing entity which is Bugs Bunny that can be spoken about as if it ontologically was. Every person who refers to Bugs Bunny refers to physical subsets of matter from their own experience, whether because they witnessed the cartoon and were told through supervised learning which cognitive object to attach it to, or because they heard about it later through second-hand experience. A blind person can have a physical referent when speaking about Bugs Bunny, albeit one that I have a very hard time mentally simulating.

In any case, merely asserting that something fails to have a physical referent is not a convincing reason to believe so. Ask yourself why you think there is no physical referent and whether one could construct a computational system that behaves that way.

Comment author: Alicorn 13 June 2011 11:52:20PM 3 points [-]

A unicorn? You mean a horse with wings glued onto it?

No.

Comment author: asr 14 June 2011 12:20:45AM 0 points [-]

I have no very firm ontological beliefs. I don't want to make any claim about whether fictional characters or mathematical abstractions "really exist".

I do claim that I can talk about abstractions without there being any set of physical referents for that abstraction. I think it's utterly routine to write software that manipulates things without physical referents. A type-checker, for instance, isn't making claims about the contents of memory; it's making higher-order claims about how those values will be used across all possible program executions -- including ones that can't physically happen.

I would cheerfully agree with you that the cognitive process (or program execution) is carried out by physical processes. Of course. But the subject of that process isn't the mechanism. There's nothing very strange about this, as far as I can tell. It's routine for programs and programmers to talk about "infinite lists"; obviously there is no such thing in the physical world, but it is a very useful abstraction.
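The "infinite list" point can be made concrete with a short sketch (in Python generators rather than a lazy typed language, purely for illustration): the program reasons about a conceptually infinite sequence, yet no infinite object ever exists in memory.

```python
from itertools import count, islice

# A conceptually infinite list of even numbers. Nothing infinite is
# materialized; the generator is a finite physical mechanism that
# stands in for an infinite abstraction.
evens = (2 * n for n in count())

# Only the five requested elements are ever realized.
first_five = list(islice(evens, 5))
```

The physical process (a few bytes of generator state) and the subject of the process (an infinite sequence) come apart exactly as described above.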

By the way, I think your Bugs Bunny example fails. When I talk to somebody about Bugs Bunny, I am able to make myself understood. The other person and I are able to talk, in every sense that matters, about the same thing. But we don't share the same mental states. Conversely, my mental picture isn't isomorphic to any particular set of photons; it's a composite. Somehow, that doesn't defeat practical communication.

The case might be clearer for purely literary characters. When I talk about the character King Lear, I certainly am not saying something about the physical copy I read! Consider the perfectly ordinary (and true) sentence "King Lear had three daughters." That's not a claim about ink, it's a claim about the mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing). Those models are physically embodied, but they are not physical things! There's no set of quarks you can point to and say "there's the mental model."

Comment author: p4wnc6 14 June 2011 01:54:25AM 0 points [-]

mental models created in competent speakers of English by the work (which itself is an abstraction, not a physical thing)

This is where we disagree. Those mental models are simply arrangements of matter. The fact that it feels like you're referring to something separate from an arrangement of matter-memory in your brain is another thing altogether. The reason practical communication works at all is that there is an extreme amount of mutual information between the set of features you use to categorize the physical memory of, say, Bugs Bunny, and the features used to categorize Bugs in someone else's mind. You can reference your brain's physical memory in such a way as to cause another's physical memory to reference something. If an algorithm sorts the mutual information of these concepts until it finds a maximum, and common experience then forms all sorts of additional memories about what wound up being referenced, it is not surprising at all that a purely physical model of concepts would allow communication. I don't see how anything you've said represents more than an assertion that it feels to you as if abstractions are not simply the brain matter they are made out of in your mind. That's not a convincing reason for me to think abstractions have ontological properties. I think the hypothesis that it just feels that way, since my brain is made of meat and I can't look at the wiring schematics, is more likely.
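The mutual-information picture above can be sketched numerically. The joint distribution below, over how two speakers would label the same stimuli, is entirely made up for illustration; the point is only that high mutual information between two physically separate labelings is what makes communication work.

```python
from math import log2

# Hypothetical joint distribution over (my_label, your_label) when we
# both observe the same stimuli. Strong diagonal mass = shared concept.
joint = {
    ("rabbit", "rabbit"): 0.45,
    ("rabbit", "duck"): 0.05,
    ("duck", "rabbit"): 0.05,
    ("duck", "duck"): 0.45,
}

def marginal(joint, axis):
    # Sum out the other coordinate to get a marginal distribution.
    m = {}
    for (a, b), p in joint.items():
        k = a if axis == 0 else b
        m[k] = m.get(k, 0.0) + p
    return m

pa, pb = marginal(joint, 0), marginal(joint, 1)

# Mutual information I(A;B) = sum p(a,b) * log2( p(a,b) / (p(a)p(b)) ).
mi = sum(p * log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items())
```

Here the two labelings share about half a bit of information per utterance; independent labelings would give zero, and communication would fail.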

Comment author: asr 14 June 2011 02:26:09AM 0 points [-]

This is starting to feel like a shallow game of definition-bending. I don't think we're disagreeing about any testable claim. So I'm not going to argue about why your definition is wrong, but I will describe why I think it's less useful in expressing the sorts of claims we make about the world.

When we talk about whether two mental models are similar, the similarity function we use is representation-independent. You and I might have very similar mental models, even if you are thinking with superconducting wires in liquid helium and our physical brains have nothing in common. Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are -- and that's a useful question to ask, since it helps predict speech-acts.

Conversely, saying that "everything is a physical property" deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.

In particular, a physical object, as most of the world uses the term, is an object with position and mass that evolve in predictable ways. It's sensible to ask what a toaster weighs. It's not sensible to ask what a mental model weighs.

I think your definitions here mean that you can't actually explain ordinary ostensive reference. There is a toaster over there, and a mental model over here, and there is some correspondence. And the way most of the world uses language, I can have the same referential relationship to a fictional person as to a real person, as to a toaster.

And I think I'm now done with the topic.

Comment author: p4wnc6 14 June 2011 02:43:06AM 0 points [-]

When we talk about whether two mental models are similar, the similarity function we use is representation-independent. ...

Not being willing to talk honestly about abstractions makes it hard to ask how closely aligned two mental models are -- and that's a useful question to ask, since it helps predict speech-acts.

Conversely, saying that "everything is a physical property" deprives us of what was previously a useful category. A toaster is physical in a way that an eight-dimensional vector space is not and in a way that a not-yet-produced toaster is not. I want a word to capture that difference.

First, I didn't say anything at all about the usefulness of treating abstractions the way we do. I don't believe in actual free will but I certainly believe that the way we walk around acting as if free will was a real attribute that we have is very useful. You can arrange a network of neurons in such a way that it will allow identification of a concept, and we use natural language to talk about this sort of arrangement of matter. Talking about it that way is just fine, and indeed very useful. But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.

I think I am quite willing to talk about abstractions and their usefulness ... just not willing to agree that they are fundamental parts of reality rather than merely hallucinations the same way that free will is.

In conversations about the ontology of physical categories, it's better to say that the category of toasters in my brain is just a pattern of matter that happens to score high correlations with image, auditory, and verbal feature vectors generated by toasters. In conversations about making toast, it's better to talk about the abstraction of the category of toasters as if it were itself something.

It's the same as talking about the wing of an airplane.

Comment author: asr 14 June 2011 05:16:16AM *  0 points [-]

But this thread was about a defense of metaethics and partially about the defense of beliefs as non-physical, but still really existing, entities. For purposes of debating that point, I think it starts to matter whether someone does or does not recognize that concepts are just arrangements of matter: information which can be extracted from brain states but does not in and of itself point to any actual, ontological entity.

Thank you, that explained where you were coming from.

But I don't see that any of this ontology gets you the meta-ethical result you want to show. I think all you've shown is that ethical claims aren't more true than, say, mathematical truth or physical law. But by any normal standard, "as true as the proof of Fermat's last theorem" is a very high degree of truth.

I think to get the ethical result you want, you should be showing that moral terms are strictly less meaningful than mathematical ones. Certainly you need to somehow separate mathematical truth from "ethical truth" -- and I don't see that ontology gets you there.

Comment author: p4wnc6 14 June 2011 06:37:42AM *  0 points [-]

Actually, I am opposed to the ontology-of-belief argument, which is why I was trying to argue that beliefs are encoded states of matter. If I assert that "X is wrong", it must mean I assert "I believe X is wrong" as well. If I assert "I believe X is wrong" but don't assert "X is wrong", something's clearly amiss. As pointed out here, beliefs are reflections of best available estimates about physically existing things. If I do assert that I believe X is wrong but don't assert that X is wrong, then either I am lying about the belief, or there's some muddling of definitions and maybe I mean some local version of X or some local version of "wrong", or I am unaware of my actual state of beliefs (possibly due to insanity, etc.). But my point is that in a sane person, from that person's first-person experience, the two statements "I believe X is wrong" and "X is wrong" contain exactly the same information about the state of my brain. They are the same statement.

My point in all this was that "I believe X is wrong" has the same first-person referent as "X is wrong". If X = murder, say, and I assert that "murder is wrong", then once you unpack whatever definitions in terms of physical matter and consequence that I mean by "murder" and "wrong", you're left with a pointer to a physical arrangement of matter in my brain that resonates when feature vectors of my sensory input correlate with the pattern that stores "murder" and "wrong" in my brain's memory. It's a physical thing. The wrongness of murder is that thing, it isn't an ontological concept that exists outside my brain as some non-physical attribute of reality. Even though other humans have remarkably similar brain-matter-patterns of wrongness and murder, enough so that the mutual information between the pattern allows effective communication, this doesn't suddenly cause the idea that murder is wrong to stop being just a local manifestation in my brain and start being a separate idea that many humans share pointers to.

If someone wanted to establish metaethical claims based on the idea that there exist non-physical referents being referred to by common human beliefs, and that this set of referents somehow reflects an inherent property of reality, I think this would be misguided and experimentally either not falsifiable or at the very least unsupported by evidence. I don't guess that this makes too much practical difference, other than being a sort of Pandora's box for religious-type reasoning (but what isn't?).

Comment author: p4wnc6 13 June 2011 11:25:34PM *  0 points [-]

I think more salient examples that make this question hard are not going to be borne out of trying to come up with something increasingly abstract. The more puzzling cognitive objects to explain are when you apply unphysical transformations to obvious objects... like taking a dog and imagining it stretched out to the length of a football field. Or a person with a torus-like hole in their abdomen. But these are simply images in the brain. That the semantic content of the image can be interpreted as strange unions of other cognitive objects is not a reason to think that the cognitive object itself isn't physical.