TheAncientGeek comments on Hedonium's semantic problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I agree, but it is possible to rescue valid intuitions from M,B&P.
If fictions can ground symbols, then what is wrong with having Santa, the tooth fairy, and unicorns in your ontology?
Indeed, that was my argument (and why I'm annoyed that Searle misdirected a correct intuition).
You should have them - as stories people talk about, at the very least. Enough to be able to say "no, Santa's colour is red, not orange", for instance.
Genuine human beings are also fiction from the point of view of quantum mechanics; they exist more strongly as models (that's what allows you to say that people stay the same even when they eat and excrete food). Or even as algorithms, which are also fictions from the point of view of physical reality.
PS: I don't know why you keep on getting downvoted.
Are you saying that you and Harry Potter are equally fictional? Or rainbows and kobolds? If not, what are you saying, when you say that they are all fictional? What observations were made, whereby quantum mechanics discovered this fictional nature, and what counterfactual observations would have implied the opposite?
Certainly not. I'm saying it's useful for people to have symbols labelled "Stuart Armstrong" and "Harry Potter" (with very different properties - fictional being one), without needing either symbol defined in terms of quantum mechanics.
Something defined in terms of quantum mechanics can still fail to correspond... you're still on the map, and therefore talking about issues orthogonal to grounding.
Ontology isn't a vague synonym for vocabulary. An ontological catalogue is the stuff whose existence you are seriously committed to ... so if you have tags against certain symbols in your vocabulary saying "fictional", those definitely aren't the items you want to copy across to your ontological catalogue.
Fictional narratives allow one to answer that kind of question by relating one symbol to another ... but the whole point of symbol grounding is to get out of such closed, mutually referential systems.
This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, because of lack of grounding, even though it can correctly answer questions, in the way you and I can about Santa.
But if you want grounding to solve that problem, you need a robust enough version of grounding ... it won't do to water down the notion of grounding to include fictions.
Fiction isn't a synonym for lossy high-level abstraction, either. Going down that route means that "horse" and "unicorn" are both fictions. Almost all of our terms are high-level abstractions.
What you've written here tells people what you think fiction is not. Could you define fiction positively instead of negatively?
For the purposes of the current discussion, it is a symbol which is not intended to correspond to reality.
Really? In that case, Santa is not fiction, because the term "Santa" refers to a cultural and social concept in the public consciousness--which, as I'm sure you'll agree, is part of reality.
I don't have to concede that the intentional content of culture is part of reality, even if I have to concede that its implementations and media are. Ink and paper are real, but as soon as you stop treating books as marks on paper, and start reifying the content, the narrative, you cross from the territory to the map.
Sure, but my point still stands: as long as "Santa" refers to something in reality, it isn't fiction; it doesn't have to mean a jolly old man who goes around giving people presents.
My point would be that a term's referent has to be picked out by its sense. No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.
This simply means that "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents" shouldn't be chosen as a referent for "Santa". However, there is a particular neural pattern (most likely a set of similar neural patterns, actually) that corresponds to a mental image of "an entity that is fat AND jolly AND lives at the North Pole AND delivers presents"; moreover, this neural pattern (or set of neural patterns) exists across a large fraction of the human population. I'm perfectly fine with letting the word "Santa" refer to this pattern (or set of patterns). Is there a problem with that?
I'm not entirely sure that we're still disagreeing. I'm not claiming that fiction is the same as non-fictional entities. I'm saying that something functioning in the human world has to have a category called "fiction", and to correctly see the contours of that category.
Yes, just like the point I made on the weakness of the Turing test. The problem is that it uses verbal skills as a test, which means it's only testing verbal skills.
However, if the Chinese Room walked around in the world, interacted with objects, and basically demonstrated a human-level (or higher) level of prediction, manipulation, and such, AND it operated by manipulating symbols and models, then I'd conclude that those actions demonstrate the symbols and models were grounded. Would you disagree?
I'd say they could be taken to be as grounded as ours. There is still a problem with referential semantics, in that neither we nor the AI can tell it isn't in VR.
Which itself feeds through into problems with empiricism and physicalism.
Since semantics is inherently tricky, there aren't easy answers to the Chinese Room.
If you're in VR and can never leave it or see evidence of it (e.g. a perfect Descartes's demon), I see no reason to see this as different from being in reality. The symbols are still grounded in the baseline reality as far as you could ever tell. Any being you could encounter could check that your symbols are as grounded as you can make them.
Note that this is not the case for an "encyclopaedia Chinese Room". We could give it legs and make it walk around; and then when it fails and falls over every time while talking about how easy it is to walk, we'd realise its symbols are not grounded in our reality (which may be VR, but that's not relevant).