All of telms's Comments + Replies

Sad to say, my only experience with wargaming was playing Risk in high school. I'm not sure that counts.

Let me refer you to Computation and Human Experience, by Philip E. Agre, and to Understanding Computers and Cognition, by Terry Winograd and Fernando Flores.

4Bugmaster
Can you summarize the salient parts ?

Guesstimates based on quick reading without serious analysis:

(1) Probability that Amanda Knox is guilty: 5%

(2) Probability that Raffaele Sollecito is guilty: 10%

(3) Probability that Rudy Guede is guilty: 60%

(4) Probability that my estimates are congruent with OP's: 50% (ie random, I can't tell what his opinion is)

Hi, Antiochus. What areas of history are you interested in? I'm similarly interested in history -- particularly paleontology and archaeology, the history of urban civilizations (rise and collapse and reemergence), and the history of technology. I kind of lose interest after World War II, though. You?

0Antiochus
Any and all! Though I have a lot of interest in military history in particular, which led me to wargaming, with some specialized interest in the Hellenistic period and the ancient world in general, medieval martial arts, and the black powder era of linear battles.

I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. ... The fact that we can do this looks to me like evidence against your main thesis.

Ah, but you're assuming that this particular interaction stands on its own. I'll bet you were able to visualize the described gestures just fine by invoking memories of past interactions with bodies in the world.

Two points. First, I don't contest the existence of verbal labels that merely refer -- or even just register as being invoked without referring a... (read more)

3Bugmaster
What do you mean by "fully" ? I believe I understand you well enough for all practical purposes. I don't agree with you, but agreement and understanding are two different things.

I'm not sure what you mean by "merely refer", but keep in mind that we humans are able to communicate concepts which have no physical analogues that would be immediately accessible to our senses. For example, we can talk about things like "O(N)", or "ribosome", or "a^n + b^n = c^n". We can also talk about entirely imaginary worlds, such as f.ex. the world where Mario, the turtle-crushing plumber, lives. And we can do this without having any "physical context" for the interaction, too.

All that is beside the point, however. In the rest of your post, you bring up a lot of evidence in support of your model of human development. That's great, but your original claim was that any type of intelligence at all will require a physical body in order to develop; and nothing you've said so far is relevant to this claim. True, human intelligence is the only kind we know of so far, but then, at one point birds and insects were the only self-propelled flyers in existence -- and that's not the case anymore.

Furthermore, you also claimed that no simulation, no matter how realistic, will serve to replace the physical world for the purposes of human development, and I'm still not convinced that this is true, either. As I'd said before, we humans do not have perfect senses; if physical coordinates of real objects were snapped to a 0.01mm grid, no human child would ever notice. And in fact, there are plenty of humans who grow up and develop language just fine without the ability to see colors, or to move some of their limbs in order to point at things.

Just to drive the point home: even if I granted all of your arguments regarding humans, you would still need to demonstrate that human intelligence is the only possible kind of intelligence; that growing up in a human body is the only possible way to develop

Are you really claiming that ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Yeah, I am advancing the hypothesis that, in humans, the comprehension of indexicality relies on embodied pointing at its core -- though not just with fingers, which are not universally used for pointing in all human cultures. Sotaro Kita has the most data on this subject for language, but the embodied basis ... (read more)

4Bugmaster
I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. In addition, I suspect that, while you were typing your paragraph, you weren't physically pointing at things. The fact that we can do this looks to me like evidence against your main thesis.
1Said Achmiz
But wait; whether all possible minds must rely on such a mechanism is the entire question at hand! Humans implement this feature in some particular way? Fine; but this thread started by discussing what AIs and robots must do to implement the same feature. If implementation-specific details in humans don't tell us anything interesting about implementation constraints in other minds, especially artificial minds which we are in theory free to place anywhere in mind design space, then the entire topic is almost completely irrelevant to an AI discussion (except possibly as an example of "well, here is one way you could do it").

Er, what? I thought I was a member of a European culture, but I don't think this is how I use the word "there". If I point to some direction while facing somewhere, and say "there", I mean... "in the direction I am pointing". The only situation when I'd use "there" in the way you describe is if I were describing some scenario involving myself located somewhere other than my current location, such that absolute directions in the story/scenario would not be the same as absolute directions in my current location.

If this is accurate, then why on earth would we map this word in this language to the English "there"? It clearly does not remotely resemble how we use the word "there", so this seems to be a case of poor translation rather than an example of cultural differences.

Yeah, actually, this research I was aware of. As I recall, the Native Americans in question had some difficulty understanding the Westerners' concepts of speaker-relative indexicals. But note: if we can have such different concepts of indexicality, despite sharing the same pointing digits and whatnot... it seems premature, at best, to suggest that said hardware plays such a key role in our concept formation, much less in the possibility of having such concepts at all.

Ultimately, the interesting aspect of this entire discussion (imo, of course) is what these human-specific imp

You make a very important point that I would like to emphasize: incommensurate bodies very likely will lead to misunderstanding. It's not just a matter of shared or disjunct body isomorphism. It's also a matter of embodied interaction in a real world.

Let's take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like "this", "that", "near", "far", "soon"... (read more)

1TheOtherDave
Sure, I agree that we make use of all kinds of contextual cues to interpret speech, and a system lacking awareness of that context will have trouble interpreting speech. For example, if I say "Do you like that?" to Sam, when Sam can't see the thing I'm gesturing to indicate or doesn't share the cultural context that lets them interpret that gesture, Sam won't be able to interpret or engage with me successfully. Absolutely agreed. And this applies to all kinds of things, including (as you say) but hardly limited to pointing.

And, sure, the system may not even be aware of that trouble... illusions of transparency abound. Sam might go along secure in the belief that they know what I'm asking about and be completely wrong. Absolutely agreed.

And sure, I agree that we rely heavily on physical metaphors when discussing abstract ideas, and that a system incapable of processing my metaphors will have difficulty engaging with me successfully. Absolutely agreed.

All of that said, what I have trouble with is your apparent insistence that only a humanoid system is capable of perceiving or interpreting human contextual cues, metaphors, etc. That doesn't seem likely to me at all, any more than it seems likely that a blind person (or one on the other end of a text-only link) is incapable of understanding human speech.
7Richard_Kennaway
If I am talking to you on the telephone, I have no mechanism for pointing and no sensory apparatus for detecting your pointing gestures, yet we can communicate just fine. The whole embodied cognition thing is a massive, elementary mistake as bad as all the ones that Eliezer has analysed in the Sequences. It's an instant fail.
1Said Achmiz
Are you really claiming that ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Also: "Detecting pointing gestures" would be the function of a perception algorithm, not a sensory apparatus (unless what you mean is "a robot with no ability to perceive positions/orientations/etc. of objects in its environment", which... wouldn't be very useful). So it's a matter of what we do with sense data, not what sorts of body we have; that is, software, not hardware.

More generally, a lot of what you're saying (and — this is my very tentative impression — a lot of the ideas of embodied cognition in general) seems to be based on an idea that we might create some general-intelligent AI or robot, but have it start at some "undeveloped" state and then proceed to "learn" or "evolve", gathering concepts about the world, growing in understanding, until it achieves some desired level of intellectual development. The concern then arises that without the kind of embodiment that we humans enjoy, this AI will not develop the concepts necessary for it to understand us and vice versa.

Ok. But is anyone working in AI these days actually suggesting that this is how we should go about doing things? Is everyone working in AI these days suggesting that? Isn't this entire line of reasoning inapplicable to whole broad swaths of possible approaches to AI design?

P.S. What does "there, relative to the river" mean?

You make some good points. Please forgive me if I am more pessimistic than you are about the likelihood of AGI in our lifetimes, though. These are hard problems, which decompose into hard problems, which decompose into hard problems -- it's hard problems all the way down, I think. The good news is, there's plenty of work to be done.

7Bugmaster
I skimmed both papers, and found them unconvincing. Granted, I am not a philosopher, so it's likely that I'm missing something, but still:

In the first paper, Harnad argues that rule-based expert systems cannot be used to build a Strong AI; I completely agree. He further argues that merely building a system out of neural networks does not guarantee that it will grow to be a Strong AI either; again, we're on the same page so far. He further points out that, currently, nothing even resembling Strong AI exists anywhere. No argument there. Harnad totally loses me, however, when he begins talking about "meaning" as though that were some separate entity to which "symbols" are attached. He keeps contrasting mere "symbol manipulation" with true understanding of "meaning", but he never explains how we could tell one from the other.

In the second paper, Harnad basically falls into the same trap as Searle. He lampoons the "System Reply" by calling it things like "a predictable piece of hand-waving" -- but that's just name-calling, not an argument. Why precisely is Harnad (or Searle) so convinced that the Chinese Room as a whole does not understand Chinese ? Sure, the man inside doesn't understand Chinese, but that's like saying that a car cannot drive uphill at 70 mph because no human driver can run uphill that fast.

The rest of his paper amounts to a moving of the goalposts. Harnad is basically saying, "Ok, let's say we have an AI that can pass the TT via teletype. But that's not enough ! It also needs to pass the TTT ! And if it passes that, then the TTTT ! And then maybe the TTTTT !" Meanwhile, Harnad himself is reading articles off his screen which were published by other philosophers, and somehow he never requires them to pass the TTTT before he takes their writings seriously.

Don't get me wrong, it is entirely possible that the only way to develop a Strong AI is to embody it in the physical world, and that no simulation, no matter how realistic, will suffice. I am o

Is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)?

An emulated body in an emulated environment is a disembodied algorithmic system in my terminology. The classic example is Terry Winograd's SHRDLU, which made significant advances in machine language understanding by adding an emulated body (arm) and an emulated world (a cartoon blocks world, but nevertheless a world that could be manipulated) ... (read more)

6TheOtherDave
OK, thanks for clarifying. I certainly agree that a physical robot body is subject to constraints that an emulated body may not be subject to; it is possible to design an emulated body that we are unable to build, or even a body that cannot be built even in principle, or a body that interacts with its environment in ways that can't happen in the real world. And I similarly agree that physical systems demonstrate relationships, like that between torque and effort, which provide data, and that an emulated body doesn't necessarily demonstrate the same relationships that a robot body does (or even that it can in principle). And those aren't unrelated, of course; it's precisely the constraints on the system that cause certain parts of that system to vary in correlated ways.

And I agree that a robot body is automatically subject to those constraints, whereas if I want to build an emulated software body that is subject to the same constraints that a particular robot body would be subject to, I need to know a lot more.

Of course, a robot body is not subject to the same constraints that a human body is subject to, any more than an emulated software body is; to the extent that a shared ability to understand language depends on a shared set of constraints, rather than on simply having some constraints, a robot can't understand human language until it is physically equivalent to a human. (Similar reasoning tells us that paraplegics don't understand language the same way as people with legs do.)

And if understanding one another's language doesn't depend on a shared set of constraints, such that a human with two legs, a human with no legs, and a not-perfectly-humanlike robot can all communicate with one another, it may turn out that an emulated software body can communicate with all three of them. The latter seems more likely to me, but ultimately it's an empirical question.
3Bugmaster
Ok, but is this the correct conclusion ? It's pretty obvious that a SHRDLU-style simulation is not sufficient to achieve natural language understanding, but can you generalize that to saying that no conceivable simulation is sufficient ? As far as I can tell, you would make such a generalization because,

While this is true, it is also true that our human senses cannot fully perceive the reality around us with infinite fidelity. A child who is still learning his native tongue can't tell a rock that is 5cm in diameter from a rock that's 5.000001cm in diameter. This would lead me to believe that your simulation does not need 7 significant figures of precision in order to produce a language-speaking mind. In fact, a colorblind child can't tell a red-colored ball from a green-colored ball, and yet colorblind adults can speak a variety of languages, so it's possible that your simulation could be monochrome and still achieve the desired result.
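To make this concrete, here is a minimal sketch (an illustration of mine, not anything proposed in the thread) of the kind of finite-precision simulation being described: every measurement is snapped to a fixed grid well below the threshold a human learner could notice. The 0.01mm grid size and the rock diameters are the figures from the comments above; everything else is invented for the example.

```python
# Hypothetical sketch of a finite-precision simulated world: measurements are
# quantized to a 0.01 mm grid (the resolution mentioned upthread).
GRID_MM = 0.01

def snap(value_mm: float) -> float:
    """Quantize a measurement (in mm) to the simulation's grid."""
    return round(value_mm / GRID_MM) * GRID_MM

# Two rocks whose true diameters differ by far less than the grid resolution
# become identical inside the simulation -- just as they are indistinguishable
# to a child's unaided senses.
print(snap(50.0))      # 5 cm rock        -> 50.0 mm
print(snap(50.00001))  # 5.000001 cm rock -> 50.0 mm (same)
```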

Jurgen Streeck's book Gesturecraft: The manu-facture of meaning is a good summary of Streeck's cross-linguistic research on the interaction of gesture and speech in meaning creation. The book is pre-theoretical, for the most part, but Streeck does make an important claim that the biological covariation in a speaker or hearer across the somatosensory modes of gesture, vision, audition, and speech does the work of abstraction -- which is an unsolved problem in my book.

Streeck's claim happens to converge with Eric Kandel's hypothesis that abstraction happens w... (read more)

1Swimmer963 (Miranda Dixon-Luinenburg)
Thanks! Neat.

Hi, everyone. My name is Teresa, and I came to Less Wrong by way of HPMOR.

I read the first dozen chapters of HPMOR without having read or seen the Harry Potter canon, but once I was hooked on the former, it became necessary to see all the movies and then read all the books in order to get the HPMOR jokes. JK Rowling actually earned royalties she would never have received otherwise thanks to HPMOR.

I don't actually identify as a pure rationalist, although I started out that way many, many years ago. What I am committed to today is SANITY. I learned the hard ... (read more)

5Mitchell_Porter
The chief deficiency of embodiment philosophy-of-mind, at least among AIers and cognitivists, is that they constantly say "embodiment" when they should say "experience of embodiment". And when you put it that way, most of the magic leaches away and you're left facing the same old hard problem of consciousness. Meaning, understanding, intentionality are all aspects of consciousness. And various studies can show that body awareness is surprisingly important in the genesis and constitution of those things. But just having a material object governed by a hierarchy of feedback loops does not explain why there should be anyone home in that object - why there should be any form of awareness in, or around, or otherwise associated with that object.
4TheOtherDave
Well, I certainly agree that there are important aspects of human languages that come out of our experience of being embodied in particular ways, and that without some sort of model that embeds the results of that kind of experience we're not going to get very far in automating the understanding of human language. But it sounds like you're suggesting that it's not possible to construct such a model within a "disembodied" algorithmic system, and I'm not sure why that should be true. Then again, I'm not really sure what precisely is meant here by "disembodied algorithmic system" or "ROBOT". For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?
1Said Achmiz
I agree that Searle believes in magic, but "intentionality" is not magic (see: almost anything Dennett has written). This sounds interesting. Could you expand on this?
5Bugmaster
I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it's very likely that I'm misunderstanding your point). I am currently reading your words on the screen. I can't hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I'm not too different from a software program that is receiving the text via some input stream, so I don't see an a priori reason why such a program could not understand the text as well as I do.
0Swimmer963 (Miranda Dixon-Luinenburg)
Welcome! Yeah. This, and the "existential angst" thing, seem to be common problems on LW, and I've never been sure why. I think that keeping yourself busy doing practical stuff prevents it from becoming an issue. That's fascinating! What research has been done on this? I would totally be interested in reading more about it.

I'd suggest adding separate columns for actual WORK TIME versus total ELAPSED TIME after email turnaround, task switching, sleep, etc.

Prepare kettle of chili from scratch: 40 min work time, 3 hr elapsed time

Read a 350-page novel: 6 hr (work & elapsed)

Read 690 pages of economic history excluding references: 52 hrs (work time), 3 months (elapsed)
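A minimal sketch of one way to keep the two columns side by side; the structure below is just an illustration (not any particular tool), using the figures from this comment and a rough 90-day conversion for the three-month entry.

```python
# Rough two-column time log: actual work time vs. total elapsed time, in hours.
time_log = {
    # task: (work_hours, elapsed_hours)
    "Prepare kettle of chili from scratch": (40 / 60, 3),
    "Read a 350-page novel": (6, 6),
    "Read 690 pages of economic history": (52, 90 * 24),
}

for task, (work, elapsed) in time_log.items():
    print(f"{task}: {work:.1f} h work, {elapsed:.0f} h elapsed "
          f"(x{elapsed / work:.0f} overhead)")
```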

1eurg
Read a 350-page novel in your 2nd language: 9 hrs (work & elapsed)

Let's see if I can take your college example and fit it to what Freakonomics is investigating.

Before you roll the dice, you are asked how confident you are that if the dice roll 6, you will in fact enroll and pay the first semester's tuition at school X and still be attending classes there two months from now. You can choose from:

(a) Very likely

(b) Somewhat likely

(c) Somewhat unlikely

(d) Very unlikely

Then you're asked to give a probability estimate that you will not show up, pay up, and stick it out for two months.

Let's say you're highly motivated to do scho... (read more)

It's my understanding that, in a repeated series of PD (Prisoner's Dilemma) games, the best strategy in the long run is "tit-for-tat": cooperate by default, but retaliate with defection whenever someone defects against you, and keep defecting until the original defector returns to cooperation mode. Perhaps the prisoners in this case were generalizing a cooperative default from multiple game-like encounters and treating this particular experiment as just one more of these more general interactions?

5ThisSpaceAvailable
Well, to be precise, researchers found tit-for-tat was the best, given the particular set-up. There's no strategy that is better than every other strategy in every set-up. If everyone has a set choice (either "always defect (AD)" or "always cooperate (AC)"), then the best strategy is AD. If there are enough TFT players, however, they will increase each others' scores, and TFT will be more successful than AD. The more iterations there are, the more advantage TFT will give. However, if all of the players are TFT or AC, then AC will be just as good as TFT.

If you have an evolutionary situation between AD, AC, and TFT where complexity is punished, "all TFT" isn't an equilibrium, because you'll have mutations to AC, which will out-compete TFT due to lower complexity, until there are enough AC that AD becomes viable, at which point TFT will start to have an advantage again. All AD will be an equilibrium, because once you reach that point, AC will be inferior, and an incremental increase in TFT due to mutation will not be able to take hold. If you have all AC, then AD will start to proliferate. If you have AC and AD, but no TFT, then eventually AD will take over.
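To make these dynamics concrete, here is a minimal iterated-PD sketch. The payoff values (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a lone defector), the population mix, and the round count are standard textbook assumptions chosen for illustration, not anything taken from the research being discussed.

```python
# Minimal iterated prisoner's dilemma with the three fixed strategies
# discussed above. Payoffs are the usual textbook values (an assumption,
# not from any particular study): mutual cooperation 3/3, mutual
# defection 1/1, lone defector 5 vs. 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(their_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_cooperate(their_history):
    return "C"

def always_defect(their_history):
    return "D"

def play(strat_a, strat_b, rounds=20):
    """Score one iterated match between two strategies."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Round-robin over an arbitrary mixed population.
population = [tit_for_tat] * 5 + [always_cooperate] * 3 + [always_defect] * 2
scores = [0] * len(population)
for i in range(len(population)):
    for j in range(i + 1, len(population)):
        si, sj = play(population[i], population[j])
        scores[i] += si
        scores[j] += sj

# Average score per player of each strategy: with this mix, each tit-for-tat
# player comes out ahead of each always-defect player; shrink the TFT share
# and always-defect pulls ahead.
for strategy in (tit_for_tat, always_cooperate, always_defect):
    per_player = [s for s, p in zip(scores, population) if p is strategy]
    print(strategy.__name__, sum(per_player) / len(per_player))
```

With the 5/3/2 mix shown, each tit-for-tat player averages about 458 points to an always-defect player's 440 under these assumed payoffs; cut the TFT share down and the ordering reverses, which is the instability described above.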

Mmm, that's not really where I'm coming from. There is an aggressively empirical research tradition in applied linguistics called "conversation analysis", which analyzes how language is actually used in real-world interaction. The raw data is actual recordings, usually with video so that the physical embodiment of the interaction and the gestures and facial expressions can be captured. The data is transcribed frame-by-frame at 1/30th of a second intervals, and includes gesture as well as vocal non-words (uh-huh, um, laugh, quavery voice, etc) to ... (read more)

Speaking for a moment as a discourse analyst rather than a philosopher, I would like to point out that much talk is social action rather than reasoning or argument, and what is said is rarely all, or even most, of what is meant. Does anyone here know of any empirical discourse research into the actual linguistic uses of semantic "stopsigns" in conversational practice?

0imaginaryphiend
Telms, it seems you are looking to tread in the path of the logical positivists, where they sought to sort this out within a context of early Wittgenstein. Taken to the logical extreme with regard to a logical epistemic foundationalism, they tend to be generally dismissed, but in the context of a semantics relevant to general, meaningful discourse, I thought they tended to make a lot of good sense. Ironically, I keep going back to positivism. The irony being in the potential paradoxes of me seeing myself as essentially an epistemic nihilist. lol... I see it all metaphorically as an epistemology modelled visually to appear as ever expanding circles of reasoning, looking like an outward moving psychedelic spiral. If we try to deconstruct that psychedelic, perpetually moving spiral, we reach further and further toward a propositional foundation, but find we can only approach it as an infinitesimal, proposed, but not actually realizable, absolute beginning.