Bugmaster comments on Welcome to Less Wrong! (6th thread, July 2013) - Less Wrong

Post author: KnaveOfAllTrades 26 July 2013 02:35AM


Comment author: telms 05 August 2013 01:22:15AM *  9 points [-]

Hi, everyone. My name is Teresa, and I came to Less Wrong by way of HPMOR.

I read the first dozen chapters of HPMOR without having read or seen the Harry Potter canon, but once I was hooked on the former, it became necessary to see all the movies and then read all the books in order to get the HPMOR jokes. JK Rowling actually earned royalties she would never have received otherwise thanks to HPMOR.

I don't actually identify as a pure rationalist, although I started out that way many, many years ago. What I am committed to today is SANITY. I learned the hard way that, in my case at least, it is the body that keeps the mind sane. Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.

That said, however, I find a lot of utility in thinking through the material on this site. I discovered Bayesian decision theory in high school, but the texts I read at the time either didn't explain the whole theory or else I didn't catch it all at age 14. Either way, it was just a cute trick for calculating compound utility scores based on guesses of likelihood for various contingencies. The greatest service the Less Wrong site has done for me is to connect the utility calculation method to EMPIRICAL prior probabilities! Like, duh! A hugely useful tool, that is.
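To spell out what I mean by the utility calculation method, here is a toy sketch (the probabilities and payoffs are made up for illustration, not taken from anything on this site):

```python
# Expected utility = sum over outcomes of (probability * utility).
# The lesson from Less Wrong: get the probabilities from empirical
# base rates, not from gut feeling.

def expected_utility(outcomes):
    """outcomes: iterable of (probability, utility) pairs, probabilities summing to 1."""
    return sum(p * u for p, u in outcomes)

# Gut-feeling guess: 50/50 odds of a +100 payoff versus a -20 loss.
guessed = [(0.5, 100), (0.5, -20)]

# The same decision using an empirical base rate of 10% success.
empirical = [(0.1, 100), (0.9, -20)]

print(expected_utility(guessed))    # 40.0
print(expected_utility(empirical))  # -8.0
```

Same arithmetic, opposite decision; the only thing that changed is where the prior came from.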

As a professional writer in my day job and student of applied linguistics research otherwise, I have some reservations about those of the Sequences that reference the philosophy of language. I completely agree that Searle believes in magic (aka "intentionality"), which is not useful. But this does not mean the Chinese Room problem isn't real.

When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection. Egocentric bias has a HUGE impact on what we imagine about our own utterances. I've come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.

As for HPMOR, I hereby predict that Harrymort is going to go back in time to the primal event in Godric's Hollow and change the entire universe to canon in his quest to, er, spoilers, can't say.

Cheers.

Comment author: Bugmaster 05 August 2013 02:22:20AM *  4 points [-]

I've come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.

I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it's very likely that I'm misunderstanding your point). I am currently reading your words on the screen. I can't hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I'm not too different from a software program that is receiving the text via some input stream, so I don't see an a priori reason why such a program could not understand the text as well as I do.

Comment author: SaidAchmiz 05 August 2013 02:36:42AM 3 points [-]

I assume telms is referring to embodied cognition, the idea that your ability to communicate with her, and achieve mutual understanding of any sort, is made possible by shared concepts and mental structures which can only arise in an "embodied" mind.

I am rather skeptical about this thesis as far as artificial minds go; somewhat less skeptical about it if applied only to "natural" (i.e., evolved) minds — although in that case it's almost trivial; but in any case don't know enough about it to have a fully informed opinion.

Comment author: Bugmaster 05 August 2013 03:43:02AM 2 points [-]

Oh, ok, that makes more sense. As far as I understand, the idea behind embodied cognition is that intelligent minds must have a physical body with a rich set of sensors and effectors in order to develop; but once they're done with their development, they can read text off of the screen instead of talking.

That definitely makes sense in the case of us biological humans, but, like you, I'm skeptical that the thesis applies to all possible minds at all times.

Comment author: telms 05 August 2013 05:09:25AM 2 points [-]
Comment author: Bugmaster 05 August 2013 07:16:52AM 5 points [-]

I skimmed both papers, and found them unconvincing. Granted, I am not a philosopher, so it's likely that I'm missing something, but still:

In the first paper, Harnad argues that rule-based expert systems cannot be used to build a Strong AI; I completely agree. He further argues that merely building a system out of neural networks does not guarantee that it will grow to be a Strong AI either; again, we're on the same page so far. He further points out that, currently, nothing even resembling Strong AI exists anywhere. No argument there.

Harnad totally loses me, however, when he begins talking about "meaning" as though that were some separate entity to which "symbols" are attached. He keeps contrasting mere "symbol manipulation" with true understanding of "meaning", but he never explains how we could tell one from the other.

In the second paper, Harnad basically falls into the same trap as Searle. He lampoons the "System Reply" by calling it things like "a predictable piece of hand-waving" -- but that's just name-calling, not an argument. Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese? Sure, the man inside doesn't understand Chinese, but that's like saying that a car cannot drive uphill at 70 mph because no human driver can run uphill that fast.

The rest of his paper amounts to a moving of the goalposts. Harnad is basically saying, "Ok, let's say we have an AI that can pass the TT via teletype. But that's not enough! It also needs to pass the TTT! And if it passes that, then the TTTT! And then maybe the TTTTT!" Meanwhile, Harnad himself is reading articles off his screen which were published by other philosophers, and somehow he never requires them to pass the TTTT before he takes their writings seriously.

Don't get me wrong, it is entirely possible that the only way to develop a Strong AI is to embody it in the physical world, and that no simulation, no matter how realistic, will suffice. I am open to being convinced, but the papers you linked are not convincing. I'm not interested in figuring out whether any given person who appears to speak English really, truly understands English; or whether this person is merely mimicking a perfect understanding of English. I'd rather listen to what such a person has to say.

Comment author: SaidAchmiz 07 August 2013 06:21:15AM 6 points [-]

Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese?

Haven't read the Harnad paper yet, but the reason Searle's convinced seems obvious to me: he just doesn't take his own scenario seriously — seriously enough to really imagine it, rather than just treating it as a piece of absurd fantasy. In other words, he does what Dennett calls "mistaking a failure of imagination for an insight into necessity".

In The Mind's I, Dennett and Hofstadter give the Chinese Room scenario a much more serious fictional treatment, and show in great detail what elements of it trigger Searle's intuitions on the matter, as well as how to tweak those intuitions in various ways. Sadly but predictably, Searle has never (to my knowledge) responded to their dissection of his views.

Comment author: wedrifid 07 August 2013 06:56:16AM 3 points [-]

In other words, he does what Dennett calls "mistaking a failure of imagination for an insight into necessity".

I like the expression, and can think of times when I have looked for a simple way to express this all-too-common practice.

Comment author: SaidAchmiz 07 August 2013 06:25:50PM 5 points [-]

Having now read the second linked Harnad paper, my evaluation is similar to yours. Some more specific comments follow.

Harnad talks a lot about whether a body "has a mind": whether a Turing Test could show if a body "has a mind", how we know a body "has a mind", etc.

What on earth does he mean by "mind"? Not... the same thing that most of us here at LessWrong mean by it, I should think.

He also refers to artificial intelligence as "computer models". Either he is using "model" quite strangely as well... or he has some... very confused ideas about AI. (Actually, very confused ideas about computers in general is, in my experience, endemic among the philosopher population. It's really rather distressing.)

Searle has shown that a mindless symbol-manipulator could pass the [Turing Test] undetected.

This has surely got to be one of the most ludicrous pronouncements I've ever seen a philosopher make.

people can do a lot more than just communicating verbally by teletype. They can recognize and identify and manipulate and describe real objects, events and states of affairs in the world. [italics added]

One of these things is not like the others...

Similar arguments can be made against behavioral "modularity": It is unlikely that our chess-playing capacity constitutes an autonomous functional module, independent of our capacity to see, move, manipulate, reason, and perhaps even to speak.

Well, maybe our chess-playing module is not autonomous, but as we have seen, we can certainly build a chess-playing module that has absolutely no capacity to see, move, manipulate, or speak.
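To make the point concrete: a complete game-playing module can be a closed function over abstract game states, with no sensors or effectors anywhere in sight. A minimal sketch, using perfect play at the toy game of Nim rather than chess for brevity:

```python
from functools import lru_cache

# Nim: players alternate taking 1-3 stones; whoever takes the last stone wins.
# best_move(stones) returns (can_the_player_to_move_force_a_win, stones_to_take).

@lru_cache(maxsize=None)
def best_move(stones):
    if stones == 0:
        return (False, 0)  # no move left: the previous player took the last stone
    for take in (1, 2, 3):
        # A move wins if it leaves the opponent in a losing position.
        if take <= stones and not best_move(stones - take)[0]:
            return (True, take)
    return (False, 1)  # every legal move loses against perfect play

print(best_move(7))  # (True, 3): leave the opponent a multiple of 4
print(best_move(8))  # (False, 1): multiples of 4 are lost positions
```

The function plays perfectly, and at no point does it see, move, manipulate, or speak.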

Most of the rest of the paper is nonsensical, groundless handwaving, in the vein of Searle but worse. I am unimpressed.

Comment author: Bugmaster 08 August 2013 08:57:04PM *  0 points [-]

What on earth does he mean by "mind"?

Yeah, I think that's the main problem with pretty much the entire Searle camp. As far as I can tell, if they do mean anything by the word "mind", then it's "you know, that thing that makes us different from machines". So, we are different from AIs because we are different from AIs. It's obvious when you put it that way!