TheAncientGeek comments on Hedonium's semantic problem - Less Wrong

Post author: Stuart_Armstrong 09 April 2015 11:50AM

You are viewing a single comment's thread.

Comment author: TheAncientGeek 13 April 2015 06:08:45PM *  -1 points [-]

You should have them - as stories people talk about, at the very least. Enough to be able to say "no, Santa's colour is red, not orange", for instance.

Ontology isn't a vague synonym for vocabulary. An ontological catalogue is the stuff whose existence you are seriously committed to ... so if you have tags against certain symbols in your vocabulary saying "fictional", those definitely aren't the items you want to copy across to your ontological catalogue.
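A minimal Python sketch of that tagging idea, assuming a toy vocabulary (the entries and tag names are invented for illustration, not anyone's actual proposal):

```python
# A vocabulary maps each symbol to a tag recording our commitment to it.
vocabulary = {
    "horse": "real",
    "electron": "real",
    "Santa": "fictional",
    "unicorn": "fictional",
}

# The ontological catalogue copies across only the non-fictional symbols:
# the things whose existence we are seriously committed to.
ontology = {symbol for symbol, tag in vocabulary.items() if tag != "fictional"}

print(sorted(ontology))  # ['electron', 'horse'] -- Santa and unicorns stay out
```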

Enough to be able to say "no, Santa's colour is red, not orange", for instance.

Fictional narratives allow one to answer that kind of question by relating one symbol to another ... but the whole point of symbol grounding is to get out of such closed, mutually referential systems.

This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, because of a lack of grounding, even though it can correctly answer questions, in the way you and I can about Santa.
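A toy sketch of why such a system stays closed: following definitions only ever yields more symbols, never anything outside the dictionary (the three-entry dictionary below is invented for illustration):

```python
# A closed, mutually referential "dictionary": every definition is made of
# other words, so lookups never bottom out in anything outside the system.
dictionary = {
    "Santa": ["jolly", "man", "presents"],
    "jolly": ["cheerful", "like", "Santa"],
    "presents": ["things", "Santa", "delivers"],
}

def expand(word, steps=3):
    """Follow definitions; the result is only ever more words."""
    if steps == 0 or word not in dictionary:
        return [word]
    return [w for part in dictionary[word] for w in expand(part, steps - 1)]

# However far we expand, we get symbols defined by symbols -- which is why
# correctly answering "Santa is red, not orange" doesn't show grounding.
print(expand("Santa"))
```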

But if you want grounding to solve that problem, you need a robust enough version of grounding ... it won't do to water down the notion of grounding to include fictions.

Genuine human beings are also fiction from the point of view of quantum mechanics; they exist more strongly as models (that's what allows you to say that people stay the same even when they eat and excrete food). Or even as algorithms, which are also fictions from the point of view of physical reality.

Fiction isn't a synonym for lossy high-level abstraction, either. Going down that route means that "horse" and "unicorn" are both fictions. Almost all of our terms are high level abstractions.

Comment author: dxu 13 April 2015 08:09:27PM *  2 points [-]

What you've written here tells people what you think fiction is not. Could you define fiction positively instead of negatively?

Comment author: TheAncientGeek 14 April 2015 06:33:21PM -1 points [-]

For the purposes of the current discussion, it is a symbol which is not intended to correspond to reality.

Comment author: dxu 14 April 2015 10:21:53PM 2 points [-]

Really? In that case, Santa is not fiction, because the term "Santa" refers to a cultural and social concept in the public consciousness--which, as I'm sure you'll agree, is part of reality.

Comment author: TheAncientGeek 15 April 2015 06:07:45AM 0 points [-]

I don't have to concede that the intentional content of culture is part of reality, even if I have to concede that its implementations and media are. Ink and paper are real, but as soon as you stop treating books as marks on paper, and start reifying the content, the narrative, you cross from the territory to the map.

Comment author: dxu 15 April 2015 03:32:22PM 2 points [-]

Sure, but my point still stands: as long as "Santa" refers to something in reality, it isn't fiction; it doesn't have to mean a jolly old man who goes around giving people presents.

Comment author: TheAncientGeek 15 April 2015 04:34:15PM 0 points [-]

My point would be that a term's referent has to be picked out by its sense. No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.

Comment author: dxu 15 April 2015 10:54:49PM *  2 points [-]

No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.

This simply means that "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents" shouldn't be chosen as a referent for "Santa". However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents"; moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I'm perfectly fine with letting the word "Santa" refer to this pattern (or set of patterns). Is there a problem with that?

Comment author: TheOtherDave 16 April 2015 04:21:21AM 2 points [-]

My $0.02...

OK, so let's consider the set of neural patterns (and corresponding artificial signals/symbols) you refer to here... the patterns that the label "Santa" can be used to refer to. For convenience, I'm going to label that set of neural patterns N.

I mean here to distinguish N from the set of flesh-and-blood-living-at-the-North-Pole patterns that the label "Santa" can refer to. For convenience, I'm going to label that set of patterns S.

So, I agree that N exists, and I assume you agree that S does not exist.

You further say:

"I'm perfectly fine with letting the word "Santa" refer to this pattern (or set of patterns)."

...in other words, you're fine with letting "Santa" refer to N, and not to S. Yes?

Is there a problem with that?

Well, yes, in that I don't think it's possible.

I mean, I think it's possible to force "Santa" to refer to N, and not to S, and you're making a reasonable effort at doing so here. And once you've done that, you can say "Santa exists" and communicate exists(N) but not communicate exists(S).

But I also think that, without that effort being made, what "Santa exists" will communicate is exists(S).

And I also think that one of the most reliable natural ways of expressing exists(N) but not exists(S) is by saying "Santa doesn't exist."

Put another way: it's as though you said to me that you're perfectly fine with letting the word "fish" refer to cows. There's no problem with that, particularly; if "fish" ends up referring to cows when allowed to, I'm OK with that. But my sense of English is that, in fact, "fish" does not end up referring to cows when allowed to, and when you say "letting" you really mean forcing.
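To make the exists(N)/exists(S) distinction concrete, here is a minimal Python sketch (the NeuralPattern class and the exists helper are invented for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NeuralPattern:
    """N: the widely shared mental representation of Santa."""
    description: str

# N exists: the shared representation is a real pattern in real brains.
N = NeuralPattern("fat, jolly, lives at the North Pole, delivers presents")

# S would be the flesh-and-blood man himself; there is no such object,
# so the candidate referent is simply absent.
S: Optional[object] = None

def exists(referent) -> bool:
    """exists(X) in the sense used above: is there anything to refer to?"""
    return referent is not None

print(exists(N))  # True
print(exists(S))  # False
```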

Comment author: dxu 18 April 2015 02:56:55AM *  2 points [-]

That seems fair. What I was mostly trying to get at was a way to describe Santa without admitting his existence; for instance, I could say, "Santa wears a green coat!" and you'd be able to say, "That's wrong!" without either of us ever claiming that Santa actually exists. In other words, we would be communicating information about N, but not S.

More generally speaking, this problem usually arises whenever a word has more than one meaning, and information about which meaning is being used when is conveyed through context. As usual, discussion of the meaning of words leaves out a lot of details about how humans actually communicate (for instance, an absolutely enormous amount of communication occurs through nonverbal channels). Overloaded words occur all the time in human communication, and Santa just happens to be one of these overloaded words; it occasionally refers to S, occasionally to N. Most of the time, you can tell which meaning is being used, but in a discussion of language, I agree I was being imprecise. The concept of overloading a word just didn't occur to me at the time I was typing my original comment, for whatever reason.
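A sketch of that overloading in the same toy style: the one word resolves to different referents depending on context, with a deliberately crude keyword test standing in for real contextual disambiguation:

```python
# "Santa" is overloaded: sometimes it takes the flesh-and-blood referent S
# (which is absent), sometimes the shared representation N (which exists).
N = "shared mental representation of a fat, jolly present-deliverer"
S = None

def resolve_santa(context: str):
    """Crude stand-in for contextual disambiguation of an overloaded word."""
    if "exist" in context:
        return S  # existence claims are about the flesh-and-blood man
    return N      # descriptive claims ("Santa wears red") are about N

print(resolve_santa("does Santa exist?"))             # None
print(resolve_santa("what colour is Santa's coat?"))  # the representation N
```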

Comment author: Quill_McGee 16 April 2015 02:27:44PM 1 point [-]

A way to communicate Exists(N) and not Exists(S) in a way that doesn't depend on the context of the current conversation might be ""Santa" exists but Santa does not." Of course, the existence of "Santa" is granted when "Santa does not exist" is understood by the other person, so this is really just a slightly less ambiguous way of saying "Santa does not exist".

Comment author: TheAncientGeek 16 April 2015 10:19:59AM *  0 points [-]

But I also think that, without that effort being made, what "Santa exists" will communicate is exists(S).

Yes. The not-exists(S) is explicit in "there is no Santa"; the exists(N) is implicit in the fact that listener and speaker understood each other.

Comment author: TheAncientGeek 16 April 2015 10:05:03AM *  0 points [-]

This simply means that "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents" shouldn't be chosen as a referent for "Santa".

That is the exact opposite of what I was saying. An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of "Santa", and in the absence of any such, Santa has no referent. AFAICT you are tacitly assuming that every term must have a referent, however unrelated to its sense. I am not. Under the Fregean scheme, I can cash out fictional terms as terms with no referents.
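That Fregean scheme can be modelled directly: a sense is a description that either picks out an entity or picks out nothing. A minimal Python sketch, assuming a toy world of entities (all names invented for illustration):

```python
from typing import Callable, Optional

# A sense is a description, modelled here as a predicate over entities;
# the referent is whatever the sense picks out, or nothing at all.
Sense = Callable[[dict], bool]

def referent(sense: Sense, world: list) -> Optional[dict]:
    """Return the entity picked out by the sense, or None for fictions."""
    matches = [entity for entity in world if sense(entity)]
    return matches[0] if matches else None

world = [{"name": "Dobbin", "kind": "horse"}]

santa_sense: Sense = lambda e: bool(
    e.get("fat") and e.get("jolly")
    and e.get("home") == "North Pole"
    and e.get("delivers_presents")
)

print(referent(santa_sense, world))  # None -- a term with no referent
```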

However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents";

I'm not disputing that. What I am saying is that such neural patterns are the referent of "neural pattern representing fat jolly man...", not referents of "Santa".

moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I'm perfectly fine with letting the word "Santa" refer to this pattern (or set of patterns). Is there a problem with that?

Several.

  1. Breaks the rule that referents are picked out by senses.

  2. Entails map/territory confusions.

  3. Blurs fiction/fact boundary.

  4. Inconsistent...sometimes "X" has referent X, sometimes it has referent "representation of X"

Comment author: dxu 18 April 2015 02:41:52AM *  2 points [-]

Look, I think you've maybe forgotten that this conversation started when you took issue with this part of the article:

difference is that in all three of the first processes, the symbols in the brain correspond to objects in reality (or virtual reality).

To which Stuart replied:

If a human plays starcraft 2 and has a symbol for Protoss Carrier, does that mean the human's symbol is suddenly ungrounded?

And then you said:

If fictions can ground symbols, then what is wrong with having santa, the tooth fairy, and unicorns in your ontology?

And from here the conversation branched off. Several comments in, and you have now managed to divert this conversation into a discussion on philosophy of language, all the while entirely ignoring the fact that your stated concerns are irrelevant to your original contention. Let's take a look at each of your complaints:

An entity that is fat and jolly, etc, should, normatively be chosen as the referent of "Santa", and in the absence of any such, Santa has no referent.

You have now utterly divorced this conversation from the issue which first prompted it. The confusion here stems from the fact that the traditional tale of "Santa" tells of a physical man who physically exists at the physical North Pole. None of that applies to virtual reality, which was the part of the article you originally took umbrage at. Nor is it the case for Stuart's example of the Protoss Carrier in Starcraft 2. In these examples, objects in virtual reality/the computer model of the Protoss Carrier should "normatively be chosen as the referents" (as you phrased it) of the phrases "objects in virtual reality"/"the Protoss Carrier".

What I am saying is that such neural patterns are the referent of "neural pattern representing fat jolly man...", not referents of "Santa".

What is the referent of "Protoss Carrier", if not "computer-generated video game model of the Protoss Carrier"?

  1. Breaks the rule that referents are picked out by senses.

Again, irrelevant to the original example.

  2. Entails map/territory confusions.

Still irrelevant.

  3. Blurs fiction/fact boundary.

Still irrelevant.

  4. Inconsistent...sometimes "X" has referent X, sometimes it has referent "representation of X"

Still irrelevant, and you can easily tell from context besides.

Look, you've performed what is known as a conversational "bait-and-switch", wherein you present one idea for discussion, and when another person engages you on that idea, you back out and start talking about something that seems maybe-a-little-bit-possibly-slightly-tangentially-related-if-you-don't-squint-at-it-too-hard. Stick to the topic at hand, please.

EDIT: And in fact, this entire confusion stems from your original use of the word "fiction". You've implicitly been using the word with two meanings in mind, in an analogous fashion to how we've been using "Santa" to refer to different things:

  1. Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and makes no claim to exist in physical reality. This is the definition you used when first addressing Stuart.

  2. Something which does not exist in physical reality except by being instantiated in some virtual representation or map, and is also claimed to exist in physical reality. This is the definition you began using when you first brought up Santa and unicorns, and it's the definition you've been using ever since.

In retrospect, I should have seen that and called you out on it immediately, but I didn't look too closely despite there being a nagging feeling that something strange was going on when I first read your comment. Let's keep words from being overloaded, neh? That's what happened with Santa, after all.

Comment author: Nornagest 15 April 2015 10:56:47PM 0 points [-]

Isn't that just the contention of "Yes, Virginia..."?

Comment author: dxu 15 April 2015 11:05:15PM *  0 points [-]

I'm not quite sure what you mean by that. I looked up the phrase, and it returned an 1897 article in The New York Sun, but besides the obvious fact that both my comment and the article deal with the existence (or non-existence) of Santa Claus, I'm not seeing a huge connection here. Could you possibly expand?

Comment author: Stuart_Armstrong 14 April 2015 09:05:29AM 1 point [-]

I'm not entirely sure that we're still disagreeing. I'm not claiming that fiction is the same as non-fictional entities. I'm saying that something functioning in the human world has to have a category called "fiction", and has to correctly see the contours of that category.

This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, because of a lack of grounding, even though it can correctly answer questions, in the way you and I can about Santa.

Yes, just like the point I made on the weakness of the Turing test. The problem is that it uses verbal skills as a test, which means it's only testing verbal skills.

However, if the Chinese Room walked around in the world, interacted with objects, and basically demonstrated a human-level (or higher) level of prediction, manipulation, and such, AND it operated by manipulating symbols and models, then I'd conclude that those actions demonstrate the symbols and models were grounded. Would you disagree?

Comment author: TheAncientGeek 14 April 2015 06:31:20PM 0 points [-]

I'd say they could be taken to be as grounded as ours. There is still a problem with referential semantics: neither we nor the AI can tell it isn't in VR.

Which itself feeds through into problems with empiricism and physicalism.

Since semantics is inherently tricky, there aren't easy answers to the CR.

Comment author: Stuart_Armstrong 14 April 2015 06:38:09PM *  2 points [-]

If you're in VR and can never leave it or see evidence of it (e.g. a perfect Cartesian demon), I see no reason to see this as different from being in reality. The symbols are still grounded in the baseline reality as far as you could ever tell. Any being you could encounter could check that your symbols are as grounded as you can make them.

Note that this is not the case for an "encyclopaedia Chinese Room". We could give it legs and make it walk around; and then when it fails and falls over every time while talking about how easy it is to walk, we'd realise its symbols are not grounded in our reality (which may be VR, but that's not relevant).
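A sketch of that behavioural test in the same toy Python style: grounded symbols should yield predictions that match what actually happens when the agent acts. The EncyclopaediaRoom class and its methods are invented for illustration:

```python
class EncyclopaediaRoom:
    """Invented stand-in for the "encyclopaedia Chinese Room" above."""

    def predict(self, action: str) -> str:
        return "success"      # its book-learning says walking is easy

    def attempt(self, action: str) -> str:
        return "falls over"   # but it has never been coupled to a world

def symbols_grounded(agent, action: str = "walking", trials: int = 10) -> bool:
    """Grounded symbols should predict what actually happens in action."""
    return all(agent.predict(action) == agent.attempt(action)
               for _ in range(trials))

print(symbols_grounded(EncyclopaediaRoom()))  # False: talk without grounding
```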