Stuart_Armstrong comments on Hedonium's semantic problem - Less Wrong

Post author: Stuart_Armstrong 09 April 2015 11:50AM


Comment author: Stuart_Armstrong 13 April 2015 09:51:03AM 3 points

In particular, if you are going to appeal to robot bodies as giving a level of causal connection sufficient to ground symbols, then Searle still has a point about the limitations of abstract, unembodied software.

Except that isn't Searle's stated point. He really flubs the "problem of other minds" objection as badly as I parodied it.

How can a symbol correspond to what does not actually exist?

If a human plays StarCraft 2 and has a symbol for a Protoss Carrier, does that mean the human's symbol is suddenly ungrounded?

Comment author: TheAncientGeek 13 April 2015 10:51:54AM -1 points

Except that isn't Searle's stated point. 

I agree, but it is possible to rescue valid intuitions from M,B&P (Searle's "Minds, Brains, and Programs").

If a human plays StarCraft 2 and has a symbol for a Protoss Carrier, does that mean the human's symbol is suddenly ungrounded?

If fictions can ground symbols, then what is wrong with having Santa, the Tooth Fairy, and unicorns in your ontology?

Comment author: Stuart_Armstrong 13 April 2015 11:32:30AM 2 points

but it is possible to rescue valid intuitions from M,B&P.

Indeed, that was my argument (and why I'm annoyed that Searle misdirected a correct intuition).

If fictions can ground symbols, then what is wrong with having Santa, the Tooth Fairy, and unicorns in your ontology?

You should have them - as stories people talk about, at the very least. Enough to be able to say "no, Santa's colour is red, not orange", for instance.

Genuine human beings are also fiction from the point of view of quantum mechanics; they exist more strongly as models (that's what allows you to say that people stay the same even when they eat and excrete food). Or even as algorithms, which are also fictions from the point of view of physical reality.

PS: I don't know why you keep on getting downvoted.

Comment author: RichardKennaway 13 April 2015 01:12:32PM 1 point

Genuine human beings are also fiction from the point of view of quantum mechanics

Are you saying that you and Harry Potter are equally fictional? Or rainbows and kobolds? If not, what are you saying, when you say that they are all fictional? What observations were made, whereby quantum mechanics discovered this fictional nature, and what counterfactual observations would have implied the opposite?

Comment author: Stuart_Armstrong 13 April 2015 01:46:48PM 2 points

Are you saying that you and Harry Potter are equally fictional?

Certainly not. I'm saying it's useful for people to have symbols labelled "Stuart Armstrong" and "Harry Potter" (with very different properties - fictional being one), without needing either symbol defined in terms of quantum mechanics.

Comment author: TheAncientGeek 14 April 2015 06:25:02PM -1 points

Something defined in terms of quantum mechanics can still fail to correspond ... you're still on the map, and therefore talking about issues orthogonal to grounding.

Comment author: TheAncientGeek 13 April 2015 06:08:45PM -1 points

You should have them - as stories people talk about, at the very least. Enough to be able to say "no, Santa's colour is red, not orange", for instance.

Ontology isn't a vague synonym for vocabulary. An ontological catalogue is the stuff whose existence you are seriously committed to ... so if you have tags against certain symbols in your vocabulary saying "fictional", those definitely aren't the items you want to copy across to your ontological catalogue.
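
To make the distinction concrete, here is a toy sketch in Python (the `Symbol` class and its `fictional` tag are illustrative assumptions, not anything specified in this thread):

```python
from dataclasses import dataclass

@dataclass
class Symbol:
    name: str
    fictional: bool  # tag recording whether the symbol is meant to refer to something real

# A vocabulary can happily contain fictional entries...
vocabulary = [
    Symbol("horse", fictional=False),
    Symbol("unicorn", fictional=True),
    Symbol("Santa", fictional=True),
    Symbol("Stuart Armstrong", fictional=False),
]

# ...but the ontological catalogue keeps only the symbols whose referents
# you are seriously committed to existing.
ontology = [s for s in vocabulary if not s.fictional]

print([s.name for s in ontology])  # ['horse', 'Stuart Armstrong']
```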

Enough to be able to say "no, Santa's colour is red, not orange", for instance.

Fictional narratives allow one to answer that kind of question by relating one symbol to another ... but the whole point of symbol grounding is to get out of such closed, mutually referential systems.

This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, for lack of grounding, even though it can correctly answer questions in the way you and I can about Santa.

But if you want grounding to solve that problem, you need a robust enough version of grounding ... it won't do to water down the notion of grounding to include fictions.
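
As a minimal sketch of the worry above, assuming the "encyclopaedia" is nothing more than a lookup table (hypothetical data, not any real system): the program below answers Santa questions correctly by relating one stored symbol to another, with no grounding anywhere.

```python
# A purely symbol-to-symbol "encyclopaedia": every answer is produced by
# relating stored symbols to other stored symbols, never to the world.
encyclopedia = {
    ("Santa", "colour"): "red",
    ("Santa", "residence"): "the North Pole",
    ("unicorn", "horn count"): "one",
}

def answer(entity: str, attribute: str) -> str:
    # Correct answers, zero grounding: nothing here ever touches reality.
    return encyclopedia.get((entity, attribute), "unknown")

print(answer("Santa", "colour"))  # -> 'red', just as a human would answer
```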

Genuine human beings are also fiction from the point of view of quantum mechanics; they exist more strongly as models (that's what allows you to say that people stay the same even when they eat and excrete food). Or even as algorithms, which are also fictions from the point of view of physical reality.

Fiction isn't a synonym for lossy high-level abstraction, either. Going down that route means that "horse" and "unicorn" are both fictions. Almost all of our terms are high-level abstractions.

Comment author: dxu 13 April 2015 08:09:27PM 2 points

What you've written here tells people what you think fiction is not. Could you define fiction positively instead of negatively?

Comment author: TheAncientGeek 14 April 2015 06:33:21PM -1 points

For the purposes of the current discussion, it is a symbol which is not intended to correspond to reality.

Comment author: dxu 14 April 2015 10:21:53PM 2 points

Really? In that case, Santa is not fiction, because the term "Santa" refers to a cultural and social concept in the public consciousness--which, as I'm sure you'll agree, is part of reality.

Comment author: TheAncientGeek 15 April 2015 06:07:45AM 0 points

I don't have to concede that the intentional content of culture is part of reality, even if I have to concede that its implementations and media are. Ink and paper are real, but as soon as you stop treating books as marks on paper, and start reifying the content, the narrative, you cross from the territory to the map.

Comment author: dxu 15 April 2015 03:32:22PM 2 points

Sure, but my point still stands: as long as "Santa" refers to something in reality, it isn't fiction; it doesn't have to mean a jolly old man who goes around giving people presents.

Comment author: TheAncientGeek 15 April 2015 04:34:15PM 0 points

My point would be that a term's referent has to be picked out by its sense. No existing entity is fat AND jolly AND lives at the North Pole AND delivers presents, so no existing referent fulfils the sense.
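
Read this way, the sense of "Santa" is a conjunction of predicates, and the referent is whatever entity (if any) satisfies every conjunct. A toy formalisation (the predicates and the `world` inventory are illustrative assumptions):

```python
# The sense of "Santa" as a conjunction of predicates; the referent,
# if any, is an existing entity satisfying all of them.
sense_of_santa = [
    lambda e: e.get("fat", False),
    lambda e: e.get("jolly", False),
    lambda e: e.get("lives_at") == "North Pole",
    lambda e: e.get("delivers_presents", False),
]

# A stand-in inventory of existing entities.
world = [
    {"name": "a fat, jolly mall performer", "fat": True, "jolly": True,
     "lives_at": "Minneapolis", "delivers_presents": False},
    {"name": "a polar researcher", "lives_at": "North Pole"},
]

referents = [e["name"] for e in world
             if all(pred(e) for pred in sense_of_santa)]
print(referents)  # [] -- no existing entity fulfils the sense
```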

Comment author: Stuart_Armstrong 14 April 2015 09:05:29AM 1 point

I'm not entirely sure that we're still disagreeing. I'm not claiming that fictional entities are the same as non-fictional ones. I'm saying that something functioning in the human world has to have a category called "fiction", and to correctly see the contours of that category.

This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, for lack of grounding, even though it can correctly answer questions in the way you and I can about Santa.

Yes, just like the point I made on the weakness of the Turing test. The problem is that it uses verbal skills as a test, which means it's only testing verbal skills.

However, if the Chinese Room walked around in the world, interacted with objects, and basically demonstrated a human-level (or higher) level of prediction, manipulation, and such, AND it operated by manipulating symbols and models, then I'd conclude that those actions demonstrate the symbols and models were grounded. Would you disagree?

Comment author: TheAncientGeek 14 April 2015 06:31:20PM 0 points

I'd say they could be taken to be as grounded as ours. There is still a problem with referential semantics: neither we nor the AI can tell that we aren't in VR.

Which itself feeds through into problems with empiricism and physicalism.

Since semantics is inherently tricky, there aren't easy answers to the Chinese Room.

Comment author: Stuart_Armstrong 14 April 2015 06:38:09PM 2 points

If you're in VR and can never leave it or see evidence of it (e.g. a perfect Descartes's demon), I see no reason to treat this as different from being in reality. The symbols are still grounded in the baseline reality as far as you could ever tell. Any being you could encounter could check that your symbols are as grounded as you can make them.

Note that this is not the case for an "encyclopaedia Chinese Room". We could give it legs and make it walk around; and then, when it fails and falls over every time while talking about how easy it is to walk, we'd realise its symbols are not grounded in our reality (which may be VR, but that's not relevant).