Stuart_Armstrong comments on Hedonium's semantic problem - LessWrong

Post author: Stuart_Armstrong 09 April 2015 11:50AM


Comment author: Stuart_Armstrong 14 April 2015 09:05:29AM 1 point

I'm not entirely sure that we're still disagreeing. I'm not claiming that fiction is the same as non-fictional entities. I'm saying that something functioning in the human world has to have a category called "fiction", and to correctly see the contours of that category.

This gets back to the themes of the Chinese Room. The worry is that if you naively dump a dictionary or encyclopedia into an AI, it won't have real semantics, because of a lack of grounding, even though it can correctly answer questions, in the way you and I can about Santa.

Yes, just like the point I made on the weakness of the Turing test. The problem is that it uses verbal skills as a test, which means it's only testing verbal skills.

However, if the Chinese Room walked around in the world, interacted with objects, and basically demonstrated a human-level (or higher) level of prediction, manipulation, and such, AND it operated by manipulating symbols and models, then I'd conclude that those actions demonstrate the symbols and models were grounded. Would you disagree?

Comment author: TheAncientGeek 14 April 2015 06:31:20PM 0 points

I'd say they could be taken to be as grounded as ours. There is still a problem with referential semantics: neither we nor the AI can tell that it isn't in VR.

Which itself feeds into problems with empiricism and physicalism.

Since semantics is inherently tricky, there aren't easy answers to the CR.

Comment author: Stuart_Armstrong 14 April 2015 06:38:09PM * 2 points

If you're in VR and can never leave it or see evidence of it (e.g. a perfect Descartes's demon), I see no reason to treat this as different from being in reality. The symbols are still grounded in the baseline reality as far as you could ever tell. Any being you could encounter could check that your symbols are as grounded as you can make them.

Note that this is not the case for an "encyclopaedia Chinese Room". We could give it legs and make it walk around; and then, when it fails and falls over every time while talking about how easy it is to walk, we'd realise its symbols are not grounded in our reality (which may be VR, but that's not relevant).