The Freakonomics web site is currently conducting online research that appears, to this properly hypothesis-blinded participant, to be investigating decision-making and estimated prior probability of success. You can participate yourself at http://www.freakonomics.com/experiments/.
The study asks the participant to choose a yes/no decision that they would be willing to commit to making on the basis of a random coin toss. (Well, actually, the random decay of an atomic nucleus, but they use coin flip graphics.) In my case, the only decision I was willing to make on such a random basis is something with very low risks: namely, the decision whether or not to quit twisting my hair. I accepted the obligation to change my behavior based on a coin toss, and the coin toss says I gotta change.
Breaking a habit of such long standing will be difficult. Past behavior is the best predictor of future behavior, and all that, so when they asked how LIKELY I thought it would be that my hair-twisting habit would stick despite my best efforts to get rid of it, I estimated 90%. Yet I also claimed that I WILL PROBABLY (not certainly, but probably) conquer the habit.
Yes, I recognize the dissonance between these two statements. It intrigues me. Is it perhaps the intent of the experiment to create explicit, conscious, cognitive dissonance like this in some participants, and see what difference it makes to outcomes?
They could easily have phrased the odds question in the inverse form. They COULD have asked how likely I thought it was that I would SUCCEED in achieving my goal. That would align neatly with my statement of commitment and yield no dissonance. I could make the usual biased assumptions that strength of willpower is the same as odds of success, and over-estimate those success odds accordingly.
I don't actually know that the study cares about this, but this is what I would care about if I were the researchers.
The Freakonomics people will be following up over time by email. They're also checking on me through a friend, so there is every possibility that they expect to see an interaction between social involvement in the decision's outcome and the presence of cognitive dissonance -- dissonance being believed to drive behavior more strongly when the commitment is SOCIAL than when a decision is kept to oneself.
I'm posting this to increase my social commitment, of course. I also posted on Facebook. It's terrible to have a psychologically trained participant make assumptions about your research project and leverage those assumptions to the max for imaginary ends. But that's life in social science. :)
Well, I certainly agree that there are important aspects of human languages that come out of our experience of being embodied in particular ways, and that without some sort of model that embeds the results of that kind of experience we're not going to get very far in automating the understanding of human language.
But it sounds like you're suggesting that it's not possible to construct such a model within a "disembodied" algorithmic system, and I'm not sure why that should be true.
Then again, I'm not really sure what precisely is meant here by "disembodied algorithmic system" or "ROBOT".
For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?
An emulated body in an emulated environment is a disembodied algorithmic system in my terminology. The classic example is Terry Winograd's SHRDLU, which made significant advances in machine language understanding by adding an emulated body (arm) and an emulated world (a cartoon blocks world, but nevertheless a world that could be manipulated) to text-oriented language processing algorithms. However, Winograd himself concluded that language understanding algorithms plus emulated bodies plus emulated worlds aren't sufficient to achieve natural language understanding.
Every emulation necessarily makes simplifying assumptions about both the world and the body, and those assumptions are subject to errors, bugs, and munchkin effects (unintended exploits of the simplified rules). A physical robot body, on the other hand, is constrained by real-world physics to that which can be built. And the interaction of a physical body with a physical environment necessarily complies with that which can actually happen in the real world. You don't have to know everything about the world in advance, as you would for a realistic world emulation. With a robot body in a physical environment, the world acts as its own model and constrains the universe of computation to a tractable size.
The other thing you get from a physical robot body is the implicit analog computation tools that come with it. A robot arm can be used as a ruler, for example. The torque on a motor can be used as an analog for effort. On these analog systems, world-grounded metaphors can be created using symbolic labels that point to (among other things) the arm-ruler or torque-effort systems. These metaphors can serve as the terminal point of a recursive meaning builder -- and the physics of the world ensures that the results are good enough models of reality for communication to succeed or for thinking to be assessed for truth-with-a-small-t.
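To make the grounding idea concrete -- symbolic labels that bottom out in analog, body-based measurements, serving as the terminal points of recursive meaning construction -- here is a minimal sketch. Every name, number, and metaphor in it is invented for illustration; real sensor readings would replace the stand-in functions:

```python
# Hypothetical sketch: symbols that "ground out" in analog readings
# from a robot body, terminating a recursive meaning builder.

def read_arm_extension_cm():
    # Stand-in for a real sensor: the arm used as a ruler.
    return 42.0

def read_motor_torque_nm():
    # Stand-in for a real sensor: motor torque as an analog of effort.
    return 1.8

# Grounded symbols point at body-based measurements, not at definitions.
GROUNDED = {
    "length": read_arm_extension_cm,
    "effort": read_motor_torque_nm,
}

# Composite meanings are built recursively out of other symbols.
COMPOSITE = {
    "hard-work": ("effort",),            # invented metaphor
    "long-haul": ("length", "effort"),   # invented metaphor mixing groundings
}

def evaluate(symbol):
    """Recursively expand a symbol until it terminates in the body."""
    if symbol in GROUNDED:
        return GROUNDED[symbol]()        # terminal point: a physical reading
    return [evaluate(s) for s in COMPOSITE[symbol]]

print(evaluate("long-haul"))  # -> [42.0, 1.8]
```

The point of the sketch is the shape of the recursion, not the particular numbers: meaning construction can recurse indefinitely through composite symbols, but it always halts at a measurement the physical world itself supplies.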