Comment author: Antiochus 11 August 2013 06:40:11PM 0 points [-]

Any and all! Though I have a lot of interest in military history in particular, which led me to wargaming, with some specialized interest in the Hellenistic period and the ancient world in general, medieval martial arts, and the black powder era of linear battles.

Comment author: telms 12 August 2013 05:10:32AM *  0 points [-]

Sad to say, my only experience with wargaming was playing Risk in high school. I'm not sure that counts.

Comment author: Bugmaster 11 August 2013 05:20:13AM *  2 points [-]

In our own case here and now, we are actually failing to understand each other fully because I can't show you actual videotapes of what I'm talking about.

What do you mean by "fully" ? I believe I understand you well enough for all practical purposes. I don't agree with you, but agreement and understanding are two different things.

First, I don't contest the existence of verbal labels that merely refer -- or even just register as being invoked without referring at all.

I'm not sure what you mean by "merely refer", but keep in mind that we humans are able to communicate concepts which have no physical analogues that would be immediately accessible to our senses. For example, we can talk about things like "O(N)", or "ribosome", or "a^n + b^n = c^n". We can also talk about entirely imaginary worlds, such as the world where Mario, the turtle-crushing plumber, lives. And we can do this without having any "physical context" for the interaction, too.

All that is beside the point, however. In the rest of your post, you bring up a lot of evidence in support of your model of human development. That's great, but your original claim was that any type of intelligence at all will require a physical body in order to develop; and nothing you've said so far is relevant to this claim. True, human intelligence is the only kind we know of so far, but then, at one point birds and insects were the only self-propelled flyers in existence -- and that's not the case anymore.

Furthermore, you also claimed that no simulation, no matter how realistic, will serve to replace the physical world for the purposes of human development, and I'm still not convinced that this is true, either. As I'd said before, we humans do not have perfect senses; if the physical coordinates of real objects were snapped to a 0.01mm grid, no human child would ever notice. And in fact, there are plenty of humans who grow up and develop language just fine without the ability to see colors, or to move some of their limbs in order to point at things.

Just to drive the point home: even if I granted all of your arguments regarding humans, you would still need to demonstrate that human intelligence is the only possible kind of intelligence; that growing up in a human body is the only possible way to develop human intelligence; and that no simulation could in principle suffice, and the body must be physical. These are all very strong claims, and so far you have provided no evidence for any of them.

Comment author: telms 12 August 2013 05:05:15AM 2 points [-]

Let me refer you to Computation and Human Experience, by Philip E. Agre, and to Understanding Computers and Cognition, by Terry Winograd and Fernando Flores.

Comment author: telms 11 August 2013 05:36:49AM *  1 point [-]

Guesstimates based on quick reading without serious analysis:

(1) Probability that Amanda Knox is guilty: 5%

(2) Probability that Raffaele Sollecito is guilty: 10%

(3) Probability that Rudy Guede is guilty: 60%

(4) Probability that my estimates are congruent with OP's: 50% (i.e. random; I can't tell what his opinion is)

Comment author: Antiochus 26 July 2013 03:44:01AM 14 points [-]

Hi. I'm a software engineer and history enthusiast. Been reading for years, and just recently got around to making an account. Still building up the courage to dive in, but this place has done wonders for reducing sloppy thinking on my part.

Comment author: telms 11 August 2013 05:01:22AM 0 points [-]

Hi, Antiochus. What areas of history are you interested in? I'm similarly interested in history -- particularly paleontology and archaeology, the history of urban civilizations (rise and collapse and reemergence), and the history of technology. I kind of lose interest after World War II, though. You?

Comment author: Bugmaster 08 August 2013 09:13:34PM 3 points [-]

If the orientation of my body changes when I say "there", I might point over my shoulder rather than to my front and left.

I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. In addition, I suspect that, while you were typing your paragraph, you weren't physically pointing at things. The fact that we can do this looks to me like evidence against your main thesis.

Comment author: telms 11 August 2013 04:48:29AM *  -1 points [-]

I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. ... The fact that we can do this looks to me like evidence against your main thesis.

Ah, but you're assuming that this particular interaction stands on its own. I'll bet you were able to visualize the described gestures just fine by invoking memories of past interactions with bodies in the world.

Two points. First, I don't contest the existence of verbal labels that merely refer -- or even just register as being invoked without referring at all. As long as some labels are directly grounded to body/world, or refer to other labels that do get grounded in the body/world historically, we generally get by in routine situations. And all cultures have error detection and repair norms for conversation so that we can usually recover without social disaster.

However, the fact that verbal labels can be used without grounding them in the body/world is a problem. It is frequently the case that speakers and hearers alike don't bother to connect words to reality, and this is a major source of misunderstanding, error, and nonsense. In our own case here and now, we are actually failing to understand each other fully because I can't show you actual videotapes of what I'm talking about. You are rightly skeptical because words alone aren't good enough evidence. And that is itself evidence.

Second, humans have a developmental trajectory and history, and memories of that history. We're a time-binding animal in Korzybski's terminology. I would suggest that an enculturated adult native speaker of a language will have what amount to "muscle memory" tics that can be invoked as needed to create referents. Mere memory of a motion or a perception is probably sufficient.

"Oh, look, it's an invisible gesture!" is not at all convincing, I realize, so let me summarize several lines of evidence for it.

Developmentally, there's quite a lot of research on language acquisition in infants and young children that suggests shared attention management -- through indexical pointing, and shared gaze, and physical coercion of the body, and noises that trigger attention shift -- is a critical building block for constructing "aboutness" in human language. We also start out with some shared, built-in cries and facial expressions linked to emotional states. At this level of development, communication largely fails unless there is a lot of embodied scaffolding for the interaction, much of it provided by the caregiver but a large part of it provided by the physical context of the interaction. There is also some evidence from the gestural communication of apes that attests to the importance of embodied attention management in communication.

Also, co-speech gesture turns out to be a human universal. Congenitally blind children do it, having never seen gesture by anyone else. Congenitally deaf children who spend time in groups together will invent entire gestural languages complete with formal syntax, as recently happened in Nicaragua. And adults speaking on the telephone will gesture even knowing they cannot be seen. Granted, people gesture in private at a significantly lower rate than they do face-to-face, but the fact that they do it at all is a bit of a puzzle, since the gestures can't be serving a communicative function in these contexts. Does the gesturing help the speakers actually think, or at least make meaning more clear to themselves? Susan Goldin-Meadow and her colleagues think so.

We also know from video conversation data that adults spontaneously invent new gestures all the time in conversation, then reuse them. Interestingly, though, each reuse becomes more attenuated, simplified, and stylized with repetition. Similar effects are seen in the development of sign languages and in written scripts.

But just how embodied can a label be when gesture (and other embodied experience) is just a memory, and is so internalized that it is externally invisible? This has actually been tested experimentally. The Stroop effect has been known for decades, for example: when the word "red" is presented in blue text, it is read or acted on more slowly than when the word "red" is presented in red text -- or in socially neutral black text. That's on the embodied perception side of things. But more recent psychophysical experiments have demonstrated a similar psychomotor Stroop-like effect when spatial and motion stimulus sentences are semantically congruent with the direction of the required response action. This effect holds even for metaphorical words like "give", which tests as motor-congruent with motion away from oneself, and "take", which tests as motor-congruent with motion toward oneself.

I understand how counterintuitive this stuff can be when you first encounter it -- especially to intelligent folks who work with codes or words or models a great deal. I expect the two of us will never reach a consensus on this without looking at a lot of original data -- and who has the time to analyze all the data that exists on all the interesting problems in the world? I'd be pleased if you could just note for future reference that a body of empirical evidence exists for the claim. That's all.

Comment author: SaidAchmiz 07 August 2013 06:05:51AM 1 point [-]

Let's take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like "this", "that", "near", "far", "soon", "late", the positional prepositions, pronominals like "me" and "you" -- the meaning of these terms is grounded dynamically by the speakers and hearers in the time and place of utterance, the placement and salience of surrounding objects and structures, and the particular speaker and hearers and overhearers of the utterance. Human pointing -- with the fingers, hands, eyes, chin, head tilt, elbow, whatever -- has been shown to perform much the same functions as deictic speech in utterance. (See the work of Sotaro Kita if you're interested in the data). A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

Are you really claiming that ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Also:

A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

"Detecting pointing gestures" would be the function of a perception algorithm, not a sensory apparatus (unless what you mean is "a robot with no ability to perceive positions/orientations/etc. of objects in its environment", which... wouldn't be very useful). So it's a matter of what we do with sense data, not what sorts of body we have; that is, software, not hardware.

More generally, a lot of what you're saying (and — this is my very tentative impression — a lot of the ideas of embodied cognition in general) seems to be based on an idea that we might create some general-intelligent AI or robot, but have it start at some "undeveloped" state and then proceed to "learn" or "evolve", gathering concepts about the world, growing in understanding, until it achieves some desired level of intellectual development. The concern then arises that without the kind of embodiment that we humans enjoy, this AI will not develop the concepts necessary for it to understand us and vice versa.

Ok. But is anyone working in AI these days actually suggesting that this is how we should go about doing things? Is everyone working in AI these days suggesting that? Isn't this entire line of reasoning inapplicable to whole broad swaths of possible approaches to AI design?

P.S. What does "there, relative to the river" mean?

Comment author: telms 07 August 2013 06:56:23AM *  -1 points [-]

Are you really claiming that ability to understand the very concept of indexicality, and concepts like "soon", "late", "far", etc., relies on humanlike fingers? That seems like an extraordinary claim, to put it lightly.

Yeah, I am advancing the hypothesis that, in humans, the comprehension of indexicality relies on embodied pointing at its core -- though not just with fingers, which are not universally used for pointing in all human cultures. Sotaro Kita has the most data on this subject for language, but the embodied basis of mathematics is discussed in Where Mathematics Comes From, by George Lakoff and Rafael Nunez. Whether all possible minds must rely on such a mechanism, I couldn't possibly guess. But I am persuaded humans do (a lot of) it with their bodies.

What does "there, relative to the river" mean?

In most European cultures, we use speaker-relative deictics. If I point to the southeast while facing south and say "there", I mean "generally to my front and left". But if I turn around and face north, I will point to the northwest and say "there" to mean the same thing, ie, "generally to my front and left." The fact that the physical direction of my pointing gesture is different is irrelevant in English; it's my body position that's used as a landmark for finding the target of "there". (Unless I'm pointing at something in particular here and now, of course; in which case the target of the pointing action becomes its own landmark.)

In a number of Native American languages, the pointing is always to a cardinal direction. If the orientation of my body changes when I say "there", I might point over my shoulder rather than to my front and left. The landmark for finding the target of "there" is a direction relative to the trajectory of the sun.

But many cultures use a dominant feature of the landscape, like the Amazon or the Mississippi or the Nile rivers, or a major mountain range like the Rockies, or a sacred city like Mecca, as the orientation landmark, and in some cultures this gets encoded in the deictics of the language and the conventions for pointing. "Up" might not mean up vertically, but rather "upriver", while "down" would be "downriver". In a steep river valley in New Guinea, "down" could mean "toward the river" and "up" could mean "away from the river". And "here" could mean "at the river" while "there" could mean "not at the river".

The cultural variability and place-specificity of language were not widely known to Western linguists until about ten years ago. For a long time, it was assumed that person-relative orientation was a biological constraint on meaning. This turns out to be not quite accurate. So I guess I should be more nuanced in the way I present the notion of embodied cognition. How's this: "Embodied action in the world with a cultural twist on top" is the grounding point at the bottom of the symbol expansion for human meanings, linguistic and otherwise.

Comment author: TheOtherDave 05 August 2013 03:21:19PM *  5 points [-]

OK, thanks for clarifying.

I certainly agree that a physical robot body is subject to constraints that an emulated body may not be subject to; it is possible to design an emulated body that we are unable to build, or even a body that cannot be built even in principle, or a body that interacts with its environment in ways that can't happen in the real world.

And I similarly agree that physical systems demonstrate relationships, like that between torque and effort, which provide data, and that an emulated body doesn't necessarily demonstrate the same relationships that a robot body does (or even that it can in principle). And those aren't unrelated, of course; it's precisely the constraints on the system that cause certain parts of that system to vary in correlated ways.

And I agree that a robot body is automatically subject to those constraints, whereas if I want to build an emulated software body that is subject to the same constraints that a particular robot body would be subject to, I need to know a lot more.

Of course, a robot body is not subject to the same constraints that a human body is subject to, any more than an emulated software body is; to the extent that a shared ability to understand language depends on a shared set of constraints, rather than on simply having some constraints, a robot can't understand human language until it is physically equivalent to a human. (Similar reasoning tells us that paraplegics don't understand language the same way as people with legs do.)

And if understanding one another's language doesn't depend on a shared set of constraints, such that a human with two legs, a human with no legs, and a not-perfectly-humanlike robot can all communicate with one another, it may turn out that an emulated software body can communicate with all three of them.

The latter seems more likely to me, but ultimately it's an empirical question.

Comment author: telms 07 August 2013 05:45:29AM *  -1 points [-]

You make a very important point that I would like to emphasize: incommensurate bodies very likely will lead to misunderstanding. It's not just a matter of shared or disjunct body isomorphism. It's also a matter of embodied interaction in a real world.

Let's take the very fundamental function of pointing. Every human language is rife with words called deictics that anchor the flow of utterance to specific pieces of the immediate environment. English examples are words like "this", "that", "near", "far", "soon", "late", the positional prepositions, pronominals like "me" and "you" -- the meaning of these terms is grounded dynamically by the speakers and hearers in the time and place of utterance, the placement and salience of surrounding objects and structures, and the particular speaker and hearers and overhearers of the utterance. Human pointing -- with the fingers, hands, eyes, chin, head tilt, elbow, whatever -- has been shown to perform much the same functions as deictic speech in utterance. (See the work of Sotaro Kita if you're interested in the data). A robot with no mechanism for pointing and no sensory apparatus for detecting the pointing gestures of human agents in its environment will misunderstand a great deal and will not be able to communicate fluently.

Then there are the cultural conventions that regulate pointing words and gestures alike. For example, spatial meanings tend to be either speaker-relative or landmark-relative or absolute (that is, embedded in a spatial frame of cardinal directions) in a given culture, and whichever of these options the culture chooses is used in both physical pointing and linguistic pointing through deictics. A robot with no cultural reference won't be able to disambiguate "there" (relative to me here now) versus "there" (relative to the river/mountain/rising sun), even if physical pointing is integrated into the attempt to figure out what "there" is. And the problem may not be detected due to the illusion of double transparency.

This gets even more complicated when the world of discourse shifts from the immediate environment to other places, other times, or abstract ideas. People don't stop inhabiting the real world when they talk about abstract ideas. And what you see in conversation videos is people mapping the world of discourse metaphorically to physical locations or objects in their immediate environment. The space behind me becomes yesterday's events and the space beyond my reach in front of me becomes tomorrow's plan. Or I always point to the left when I'm talking about George and to the right when I'm talking about Fred.

This is all very much an empirical question, as you say. I guess my point is that the data has been accumulating for several decades now that embodiment matters a great deal. Where and how it matters is just beginning to be sorted out.

Comment author: seymour_results 06 August 2013 03:55:15AM -4 points [-]

Great comment, telms!

I think that there's a huge computational / energy strain on systems that attempt to map simulations of reality to reality. I don't mean this in some sort of "deep" way, just that it seems like those teaching a simulation the rules of "second life" will eventually get a simulation that's very good at prospering within "second life" rules but fairly useless at prospering in reality. Why? Because I suspect that the thing will be tuned into second life, and that its sense of comfort (if it becomes comfortable) will begin with an absence of information about reality, and will adapt to its environment. If transplanted into the real world, it would drop things, and they would shatter, people wouldn't be happy and interested in goofing off, they'd be gruff and impatient (because they're not sitting at home interacting with second life, they're at work, driving to the mall, getting groceries, etc.) Everything is changed in reality.

I'm not saying it couldn't get fairly smart in simulation, but I think there are far more failure modes in simulation. For example: there can be vore fetish porn, slug-like replicators, alien parasites, and vampires in "Second Life"; people can fly and be insect-sized or giant-sized in simulators. Any of these non-real-world "norms" would mean that a creature that normalizes to Second Life might have a lot of resentment if it couldn't easily transition to the real world. And, perhaps it would transition easily, but perhaps it will have associated with a very destructive norm in "Second Life." Maybe it can be reasoned with. But maybe it becomes a sociopath.

The "changing event" is likely to be the shift to the real world, and there would not have been as much time studying the behaviors "in real life" or "away from keyboard." Is the robot body moving slowly because it isn't accustomed to the rigidity and feedback of gravity and the real world, or the actual surge of electricity into synthetic muscle fiber actuators? Does it resent the fact that it was normalized to live in an "outdated" or "false" environment? Does it then try to adapt the real world to its shiny "Second Life" virtual world? (Such as by making everything inside of its apartment shiny and cartoony, or by training kids to fall into a big venus flytrap and get eaten, etc.) Does it try to walk around with a chainsaw arm? When people overreact to its chainsaw arm, does it overreact to their overreaction?

If there's a "ramping up" in terms of bodies, and the bodies are similar to ours, we can have some conception of what the robot is feeling, and can treat it with respect and kindness. If, on the other hand, we have no real idea what it's feeling, we run the risk that it gradually (worse, covertly) stops identifying with us to some extent (50%? 100%?). That might be bad.

I think it's very dangerous to be creating robots that the military can use, which would give the military a decisive advantage, because there's no love on a battlefield, and all AGIs (including those with human-wetware infant brains) begin without adequate information about the world. I think that the best means of interacting with humans in reality is to begin interacting with them from the crib, and keep interacting with them as one "gets more sophisticated."

However, there is a huge incentive to raise AGIs in simulations (cheap and useful; possibly income-earning) or in military reality (survival-conveying, useful). Both of these situations seem uncaring (sociopathic) toward the AGI itself. Both of these situations seem like they'd end up resulting in an AGI subject to either a "fake, uninteresting" world without actual control over its reality or ability to communicate with its friends when they are not "online", or worse, a brutal upbringing, born on a sociopathic battlefield. In either case, the two most likely pathways for the birth of strong AGI from a market-driven standpoint (in the state-corrupted "marketplace of ideas," anyway) amount to inadequate-capital-financing scenarios for perhaps the majority of first-to-fund AGI experiments. So, there will likely be some unfriendly AGI very soon.

Hopefully, they are like the "kid from the wrong side of the tracks" who turns out good in spite of his surroundings, instead of the kid from the wrong side of the tracks who rapes your daughter and leaves her in a dumpster before stabbing you in the eye with a switchblade, just to see what it looks like. (Or far worse: bombs over 100,000 innocent people as our drones currently do, overseas, albeit with human sociopath and morally-compromised and unethically-pressured conformist intermediaries.)

Still more hopefully, Eliezer Yudkowsky or some other caring humanist gets both the funding and the competitive pressure he needs to succeed at creating a child AGI. Then, hopefully, that child AGI is surrounded by caring, competent, thoughtful people who pour their hearts and souls into raising it well. Preferably, it won't start out superstrong, or superhuman, but will be given some time to ramp up into those possible futures. (ie: "the asimo model")

Comment author: telms 07 August 2013 04:58:29AM 1 point [-]

You make some good points. Please forgive me if I am more pessimistic than you are about the likelihood of AGI in our lifetimes, though. These are hard problems, which decompose into hard problems, which decompose into hard problems -- it's hard problems all the way down, I think. The good news is, there's plenty of work to be done.

Comment author: ThisSpaceAvailable 07 August 2013 01:48:33AM 3 points [-]

Well, to be precise, researchers found tit-for-tat was the best, given the particular set-up. There's no strategy that is better than every other strategy in every set-up. If everyone has a set choice (either "always defect (AD)" or "always cooperate (AC)"), then the best strategy is AD. If there are enough TFT players, however, they will increase each others' scores, and the TFT will be more successful than AD. The more iterations there are, the more advantage TFT will give. However, if all of the players are TFT or AC, then AC will be just as good as TFT. If you have an evolutionary situation between AD, AC, and TFT where complexity is punished, "all TFT" isn't an equilibrium, because you'll have mutations to AC, which will out-compete TFT due to lower complexity, until there are enough AC that AD becomes viable, at which point TFT will start to have an advantage again. All AD will be an equilibrium, because once you reach that point, AC will be inferior, and an incremental increase in TFT due to mutation will not be able to take hold. If you have all AC, then AD will start to proliferate. If you have AC and AD, but no TFT, then eventually AD will take over.
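
The tournament dynamics described above are easy to check directly. Here is a minimal sketch (not from the original discussion; population sizes and the standard payoff values T=5, R=3, P=1, S=0 are my assumptions) of a round-robin iterated Prisoner's Dilemma among Always Defect (AD), Always Cooperate (AC), and Tit-for-Tat (TFT):

```python
# Payoff to "me" given (my move, their move); 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def always_cooperate(my_history, their_history):
    return 'C'

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return their_history[-1] if their_history else 'C'

def play(strat_a, strat_b, rounds):
    """Play one iterated match; return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(population, rounds=200):
    """Round-robin totals for a population of (name, strategy) pairs."""
    totals = {name: 0 for name, _ in population}
    for i, (name_a, strat_a) in enumerate(population):
        for name_b, strat_b in population[i + 1:]:
            sa, sb = play(strat_a, strat_b, rounds)
            totals[name_a] += sa
            totals[name_b] += sb
    return totals

# With enough TFT players in the mix, TFT has the highest total, even
# though AD "wins" every individual match it plays against a cooperator.
pop = ([('TFT%d' % i, tit_for_tat) for i in range(4)]
       + [('AC0', always_cooperate)]
       + [('AD%d' % i, always_defect) for i in range(2)])
scores = tournament(pop)
```

With this particular population (four TFT, one AC, two AD), each TFT player totals 2798, AC totals 2400, and each AD totals 2016 over 200-round matches, illustrating the point above: AD exploits cooperators pairwise, but the mutually cooperating TFT cluster outscores it overall. Shrink the TFT share (or drop the round count toward 1) and AD comes out on top instead.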

Comment author: telms 07 August 2013 04:41:02AM 0 points [-]

Thanks for that explanation. The complexity factor hadn't occurred to me.

Comment author: Bugmaster 05 August 2013 02:22:20AM *  4 points [-]

I've come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.

I am not familiar with Stevan Harnad, but this sounds counterintuitive to me (though it's very likely that I'm misunderstanding your point). I am currently reading your words on the screen. I can't hear you or see your body language. And yet, I can still understand what you wrote (not fully, perhaps, but enough to ask you questions about it). In our current situation, I'm not too different from a software program that is receiving the text via some input stream, so I don't see an a priori reason why such a program could not understand the text as well as I do.

Comment author: telms 05 August 2013 05:09:25AM 2 points [-]
