Stuart_Armstrong comments on Hedonium's semantic problem - Less Wrong
Indeed, that was my argument (and why I'm annoyed that Searle misdirected a correct intuition).
You should have them - as stories people talk about, at the very least. Enough to be able to say "no, Santa's colour is red, not orange", for instance.
Genuine human beings are also fiction from the point of view of quantum mechanics; they exist more strongly as models (that's what allows you to say that people stay the same even when they eat and excrete food). Or even as algorithms, which are also fictions from the point of view of physical reality.
PS: I don't know why you keep on getting downvoted.
Are you saying that you and Harry Potter are equally fictional? Or rainbows and kobolds? If not, what are you saying, when you say that they are all fictional? What observations were made, whereby quantum mechanics discovered this fictional nature, and what counterfactual observations would have implied the opposite?
Certainly not. I'm saying it's useful for people to have symbols labelled "Stuart Armstrong" and "Harry Potter" (with very different properties - fictional being one), without needing either symbol defined in terms of quantum mechanics.
Something defined in terms of quantum mechanics can still fail to correspond ... you're still on the map, and therefore talking about issues orthogonal to grounding.
Ontology isn't a vague synonym for vocabulary. An ontological catalogue is the stuff whose existence you are seriously committed to ... so if you have tags against certain symbols in your vocabulary saying "fictional", those definitely aren't the items you want to copy across to your ontological catalogue.
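To make that concrete, here's a minimal sketch, assuming a toy Python vocabulary (the symbols and tags are my own illustrative examples, not anything from the thread):

```python
# Minimal sketch: a vocabulary can carry symbols tagged as fictional,
# but only untagged symbols get copied across to the ontological catalogue.
vocabulary = {
    "Stuart Armstrong": {"fictional": False},
    "Harry Potter": {"fictional": True},
    "Santa": {"fictional": True},
    "horse": {"fictional": False},
}

# We can still answer questions about the tagged symbols ("Santa's colour
# is red, not orange"), but they never become existence commitments.
ontology = {name for name, tags in vocabulary.items()
            if not tags["fictional"]}

print(sorted(ontology))  # ['Stuart Armstrong', 'horse']
```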
Fictional narratives allow one to answer that kind of question by relating one symbol to another ... but the whole point of symbol grounding is to get out of such closed, mutually referential systems.
This gets back to the themes of the Chinese room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, because of lack of grounding, even though it can correctly answer questions, in the way you and I can about Santa.
But if you want grounding to solve that problem, you need a robust enough version of grounding ... it won't do to water down the notion of grounding to include fictions.
Fiction isn't a synonym for lossy high-level abstraction, either. Going down that route means that "horse" and "unicorn" are both fictions. Almost all of our terms are high-level abstractions.
What you've written here tells people what you think fiction is not. Could you define fiction positively instead of negatively?
For the purposes of the current discussion, it's a symbol which is not intended to correspond to reality.
Really? In that case, Santa is not fiction, because the term "Santa" refers to a cultural and social concept in the public consciousness--which, as I'm sure you'll agree, is part of reality.
I don't have to concede that the intentional content of culture is part of reality, even if I have to concede that its implementations and media are. Ink and paper are real, but as soon as you stop treating books as marks on paper, and start reifying the content, the narrative, you cross from the territory to the map.
Sure, but my point still stands: as long as "Santa" refers to something in reality, it isn't fiction; it doesn't have to mean a jolly old man who goes around giving people presents.
My point would be that a term's referent has to be picked out by its sense. No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.
This simply means that "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents" shouldn't be chosen as a referent for "Santa". However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents"; moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I'm perfectly fine with letting the word "Santa" refer to this pattern (or set of patterns). Is there a problem with that?
My $0.02...
OK, so let's consider the set of neural patterns (and corresponding artificial signals/symbols) you refer to here... the patterns that the label "Santa" can be used to refer to. For convenience, I'm going to label that set of neural patterns N.
I mean here to distinguish N from the set of flesh-and-blood-living-at-the-North-Pole patterns that the label "Santa" can refer to. For convenience, I'm going to label that set of patterns S.
So, I agree that N exists, and I assume you agree that S does not exist.
You further say:
...in other words, you're fine with letting "Santa" refer to N, and not to S. Yes?
Well, no, in that I don't think it's possible.
I mean, I think it's possible to force "Santa" to refer to N, and not to S, and you're making a reasonable effort at doing so here. And once you've done that, you can say "Santa exists" and communicate exists(N) without communicating exists(S).
But I also think that without that effort being made, what "Santa exists" will communicate is exists(S).
And I also think that one of the most reliable natural ways of expressing exists(N) without communicating exists(S) is by saying "Santa doesn't exist."
Put another way: it's as though you said to me that you're perfectly fine with letting the word "fish" refer to cows. There's no problem with that, particularly; if "fish" ends up referring to cows when allowed to, I'm OK with that. But my sense of English is that, in fact, "fish" does not end up referring to cows when allowed to, and when you say "letting" you really mean forcing.
That is the exact opposite of what I was saying. An entity that is fat and jolly, etc., should, normatively, be chosen as the referent of "Santa", and in the absence of any such, "Santa" has no referent. AFAICT you are tacitly assuming that every term must have a referent, however unrelated to its sense. I am not. Under the Fregean scheme, I can cash out fictional terms as terms with no referents.
I'm not disputing that. What I am saying is that such neural patterns are the referent of "neural pattern representing a fat jolly man...", not referents of "Santa".
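Here's a minimal sketch of how that cashes out, assuming a toy world of property-bag entities (the entities, senses, and helper names are my own hypothetical illustrations):

```python
from typing import Callable, Optional

Entity = dict  # toy representation: an entity is a bag of properties

def referent(sense: Callable[[Entity], bool],
             world: list[Entity]) -> Optional[Entity]:
    """Fregean scheme, roughly: the referent is whatever entity the sense
    picks out; if nothing satisfies the sense, the term has no referent."""
    return next((e for e in world if sense(e)), None)

world = [
    {"name": "Dobbin", "kind": "horse"},
    # This pattern really exists, but it is not fat, jolly, or at the Pole:
    {"name": "Santa-pattern", "kind": "neural pattern",
     "represents": "fat jolly North Pole present-deliverer"},
]

# Sense of "Santa": fat AND jolly AND lives at the North Pole AND
# delivers presents. Nothing in the world satisfies it.
def santa_sense(e: Entity) -> bool:
    return bool(e.get("fat") and e.get("jolly")
                and e.get("home") == "North Pole"
                and e.get("delivers_presents"))

# Sense of "neural pattern representing Santa": satisfied by the pattern.
def santa_pattern_sense(e: Entity) -> bool:
    return e.get("kind") == "neural pattern" and "represents" in e

print(referent(santa_sense, world))          # None: "Santa" has no referent
print(referent(santa_pattern_sense, world))  # the pattern: a real referent
```

On this scheme "Santa exists" comes out false (the sense picks nothing out), while "a representation of Santa exists" comes out true, which keeps the two referents from being conflated.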
Several.
Breaks the rule that referents are picked out by senses.
Entails map/territory confusions.
Blurs fiction/fact boundary.
Inconsistent ... sometimes "X" has referent X, sometimes it has referent "representation of X".
Isn't that just the contention of "Yes, Virginia..."?
I'm not entirely sure that we're still disagreeing. I'm not claiming that fiction is the same as non-fictional entities. I'm saying that something functioning in the human world has to have a category called "fiction", and to correctly see the contours of that category.
Yes, just like the point I made on the weakness of the Turing test. The problem is that it uses verbal skills as a test, which means it's only testing verbal skills.
However, if the Chinese room walked around in the world, interacted with objects, and basically demonstrated a human-level (or higher) level of prediction, manipulation, and such, AND it operated by manipulating symbols and models, then I'd conclude that those actions demonstrate the symbols and models were grounded. Would you disagree?
I'd say they could be taken to be as grounded as ours. There is still a problem with referential semantics: neither we nor the AI can tell it isn't in VR.
Which itself feeds through into problems with empiricism and physicalism.
Since semantics is inherently tricky, there aren't easy answers to the Chinese Room.
If you're in VR and can never leave it or see evidence of it (e.g. a perfect Descartes's demon), I see no reason to see this as different from being in reality. The symbols are still grounded in the baseline reality as far as you could ever tell. Any being you could encounter could check that your symbols are as grounded as you can make them.
Note that this is not the case for an "encyclopaedia Chinese Room". We could give it legs and make it walk around; and then, when it fails and falls over every time while talking about how easy it is to walk, we'd realise its symbols are not grounded in our reality (which may be VR, but that's not relevant).