Comment author: TheOtherDave 05 August 2013 03:19:09AM 3 points [-]

Well, I certainly agree that there are important aspects of human languages that come out of our experience of being embodied in particular ways, and that without some sort of model that embeds the results of that kind of experience we're not going to get very far in automating the understanding of human language.

But it sounds like you're suggesting that it's not possible to construct such a model within a "disembodied" algorithmic system, and I'm not sure why that should be true.

Then again, I'm not really sure what precisely is meant here by "disembodied algorithmic system" or "ROBOT".

For example, is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)? How would I tell, for a given computer, which kind of thing it was (if either)?

Comment author: telms 05 August 2013 04:23:16AM *  0 points [-]

Is a computer executing a software emulation of a humanoid body interacting with an emulated physical environment a disembodied algorithmic system, or an AI ROBOT (or neither, or both, or it depends on something)?

An emulated body in an emulated environment is a disembodied algorithmic system in my terminology. The classic example is Terry Winograd's SHRDLU, which made significant advances in machine language understanding by adding an emulated body (arm) and an emulated world (a cartoon blocks world, but nevertheless a world that could be manipulated) to text-oriented language processing algorithms. However, Winograd himself concluded that language understanding algorithms plus emulated bodies plus emulated worlds aren't sufficient to achieve natural language understanding.

Every emulation necessarily makes simplifying assumptions about both the world and the body that are subject to errors, bugs, and munchkin effects. A physical robot body, on the other hand, is constrained by real-world physics to that which can be built. And the interaction of a physical body with a physical environment necessarily complies with that which can actually happen in the real world. You don't have to know everything about the world in advance, as you would for a realistic world emulation. With a robot body in a physical environment, the world acts as its own model and constrains the universe of computation to a tractable size.

The other thing you get from a physical robot body is the implicit analog computation tools that come with it. A robot arm can be used as a ruler, for example. The torque on a motor can be used as an analog for effort. On these analog systems, world-grounded metaphors can be created using symbolic labels that point to (among other things) the arm-ruler or torque-effort systems. These metaphors can serve as the terminal point of a recursive meaning builder -- and the physics of the world ensures that the results are good enough models of reality for communication to succeed or for thinking to be assessed for truth-with-a-small-t.

Comment author: Swimmer963 05 August 2013 02:10:06AM 0 points [-]

Welcome!

Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.

Yeah. This, and the "existential angst" thing, seem to be common problems on LW, and I've never been sure why. I think that keeping yourself busy doing practical stuff prevents it from becoming an issue.

When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection.

That's fascinating! What research has been done on this? I would totally be interested in reading more about it.

Comment author: telms 05 August 2013 02:38:02AM *  5 points [-]

Jurgen Streeck's book Gesturecraft: The manu-facture of meaning is a good summary of Streeck's cross-linguistic research on the interaction of gesture and speech in meaning creation. The book is pre-theoretical, for the most part, but Streeck does make an important claim that the biological covariation in a speaker or hearer across the somatosensory modes of gesture, vision, audition, and speech does the work of abstraction -- which is an unsolved problem in my book.

Streeck's claim happens to converge with Eric Kandel's hypothesis that abstraction happens when neurological activity covaries across different somatosensory modes. After all, the only things that CAN covary across, say, musical tone changes in the ear and dance moves in the arms, legs, trunk, and head, are abstract relations. Temporal synchronicity and sequence, say.

Another interesting book is Cognition in the Wild by Edwin Hutchins. Hutchins goes rather too far in the direction of externalizing cognition from the participants in the act of knowing, but he does make it clear that cultures build tools into the environment that offload thinking function and effort, to the general benefit of all concerned. Those tools get included by their users in the manufacture of online meaning, to the point that the online meaning can't be reconstructed from the words alone.

The whole field of conversation analysis goes into the micro-organization of interactive utterances from a linguistic point of view rather than a cognitive perspective. The focus is on the social and communicative functions of empirically attested language structures as demonstrated by the speakers themselves to one another. Anything written by John Heritage in that vein is worth reading, IMO.

EDIT: Revised, consolidated, and expanded bibliography on interactive construction of meaning:

LINGUISTICS

  • Philosophy in the Flesh, by George Lakoff and Mark Johnson

  • Women, Fire and Dangerous Things, by George Lakoff

  • The Singing Neanderthals, by Steven Mithen

CONVERSATION ANALYSIS & GESTURE RESEARCH

  • Handbook of Conversation Analysis, by Jack Sidnell & Tanya Stivers

  • Gesturecraft: The Manu-facture of Meaning, by Jurgen Streeck

  • Pointing: Where Language, Culture, and Cognition Meet, by Sotaro Kita

  • Gesture: Visible Action as Utterance, by Adam Kendon

  • Hearing Gesture: How Our Hands Help Us Think, by Susan Goldin-Meadow

  • Hand and Mind: What Gestures Reveal about Thought, by David McNeill

COGNITIVE PSYCHOLOGY

  • Symbols and Embodiment, edited by Manuel de Vega, Arthur M Glenberg, & Arthur C Graesser

  • Cognition in the Wild, Edwin Hutchins

Comment author: telms 05 August 2013 01:22:15AM *  9 points [-]

Hi, everyone. My name is Teresa, and I came to Less Wrong by way of HPMOR.

I read the first dozen chapters of HPMOR without having read or seen the Harry Potter canon, but once I was hooked on the former, it became necessary to see all the movies and then read all the books in order to get the HPMOR jokes. JK Rowling actually earned royalties she would never have received otherwise thanks to HPMOR.

I don't actually identify as a pure rationalist, although I started out that way many, many years ago. What I am committed to today is SANITY. I learned the hard way that, in my case at least, it is the body that keeps the mind sane. Without embodiment to ground meaning, you get into problems of unsearchable infinite regress, and you can easily hypothesize internally consistent worlds that are nevertheless not the real world the body lives in. This can lead to religions and other serious delusions.

That said, however, I find a lot of utility in thinking through the material on this site. I discovered Bayesian decision theory in high school, but the texts I read at the time either didn't explain the whole theory or else I didn't catch it all at age 14. Either way, it was just a cute trick for calculating compound utility scores based on guesses of likelihood for various contingencies. The greatest service the Less Wrong site has done for me is to connect the utility calculation method to EMPIRICAL prior probabilities! Like, duh! A hugely useful tool, that is.
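That connection -- compound utility scores weighted by empirical prior probabilities rather than gut feeling -- can be sketched in a few lines. (A toy illustration with made-up numbers, not anything from the site itself.)

```python
def expected_utility(outcomes):
    """Sum of probability * utility over mutually exclusive outcomes.
    The probabilities should come from empirical base rates, not introspection."""
    return sum(p * u for p, u in outcomes)

# Hypothetical decision: attempt a risky project or keep the status quo.
# The 0.3 success rate stands in for an empirical base rate on similar projects.
attempt = expected_utility([(0.3, 100), (0.7, -20)])  # succeed vs. fail
skip = expected_utility([(1.0, 0)])                   # status quo
best = "attempt" if attempt > skip else "skip"
```

The point of the exercise is that swapping in a different empirical base rate (say, 0.1 instead of 0.3) can flip the decision even though the utilities never change.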

As a professional writer in my day job and student of applied linguistics research otherwise, I have some reservations about those of the Sequences that reference the philosophy of language. I completely agree that Searle believes in magic (aka "intentionality"), which is not useful. But this does not mean the Chinese Room problem isn't real.

When you study human language use empirically in natural contexts (through frame-by-frame analysis of video recordings), it turns out that what we think we do with language and what we actually do are rather divergent. The body and places in the world and other agents in the interaction all play a much bigger role in the real-time construction of meaning than you would expect from introspection. Egocentric bias has a HUGE impact on what we imagine about our own utterances. I've come to the conclusion that Stevan Harnad is absolutely correct, and that machine language understanding will require an AI ROBOT, not a disembodied algorithmic system.

As for HPMOR, I hereby predict that Harrymort is going to go back in time to the primal event in Godric's Hollow and change the entire universe to canon in his quest to, er, spoilers, can't say.

Cheers.

Comment author: telms 01 August 2013 03:31:53AM 8 points [-]

I'd suggest adding separate columns for actual WORK TIME versus total ELAPSED TIME after email turnaround, task switching, sleep, etc.

Prepare kettle of chili from scratch: 40 min work time, 3 hr elapsed time

Read a 350-page novel: 6 hr (work & elapsed)

Read 690 pages of economic history excluding references: 52 hrs (work time), 3 months (elapsed)

Comment author: telms 31 July 2013 03:15:37AM *  1 point [-]

Let's see if I can take your college example and fit it to what Freakonomics is investigating.

Before you roll the dice, you are asked how confident you are that if the dice roll 6, you will in fact enroll and pay the first semester's tuition at school X and still be attending classes there two months from now. You can choose from:

(a) Very likely

(b) Somewhat likely

(c) Somewhat unlikely

(d) Very unlikely

Then you're asked to give a numeric probability estimate that you will fail to show up, pay up, and stick it out for two months.

Let's say you're highly motivated to do school and all three school choices are equally wonderful to you. But you don't have the tuition money and all three schools have turned you down for a scholarship. You are determined to work your way through school, but you know that the odds are against you being able to work full time and go to school full time at the same time.

So you estimate the odds against paying the first chunk of tuition and carrying a full load of classes and performing well enough to keep your job at 75%. You know it's going to be pretty damned hard.

All the same, you are confident that you are more likely than not to succeed anyway. You pick "somewhat likely" as your confidence of success.

These two estimates are logically incongruent. What interested me about the Freakonomics study is that the software challenged me on the mismatch. It popped up a dialog that said, in effect, you said you're more likely to succeed than not, but you estimated a 75% chance of failure. It sounds like you've already decided to quit. Are you sure you want to roll the dice?

You can end it there or go on with the roll.

Now doesn't that challenge change the whole feel of the decision for you? It sure does for me.
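The mismatch check that dialog performs can be sketched like this (hypothetical function and labels of my own devising, not the study's actual code):

```python
def confidence_mismatch(confidence_label, p_failure):
    """True when the qualitative label and the numeric estimate disagree
    about which outcome (success or failure) is more likely."""
    label_says_success = confidence_label in ("Very likely", "Somewhat likely")
    number_says_success = p_failure < 0.5
    return label_says_success != number_says_success

# The scenario above: "somewhat likely" to succeed, yet a 75% failure estimate.
flag = confidence_mismatch("Somewhat likely", 0.75)  # mismatch, prompt the user
```

A consistent pair of answers ("Somewhat likely" with, say, a 25% failure estimate) would pass silently; only the incongruent pair triggers the challenge.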

Comment author: telms 30 July 2013 04:18:34AM 0 points [-]

It's my understanding that, in a repeated series of PD games, the best strategy in the long run is "tit-for-tat": cooperate by default, but retaliate with defection whenever someone defects against you, and keep defecting until the original defector returns to cooperation mode. Perhaps the prisoners in this case were generalizing a cooperative default from multiple game-like encounters and treating this particular experiment as just one more of these more general interactions?
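The strategy described above can be sketched as a toy simulation (illustrative code, not anything from the experiment):

```python
def tit_for_tat(opponent_history):
    # Cooperate by default; otherwise mirror the opponent's previous move,
    # which retaliates against defection and forgives a return to cooperation.
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game, returning both players' move histories."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

# An opponent who defects on rounds 2-3, then returns to cooperation:
def sometime_defector(opponent_history):
    return "D" if len(opponent_history) in (1, 2) else "C"

moves, _ = play(tit_for_tat, sometime_defector, 6)
```

Tracing the run shows the pattern the comment describes: tit-for-tat cooperates until hit, defects exactly as long as the opponent does, then resumes cooperating one round after the opponent does.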

Freakonomics Study Investigates Decision-Making and Estimated Prior Probabilities

3 telms 30 July 2013 04:08AM

The Freakonomics web site is currently conducting online research that appears, to this properly hypothesis-blinded participant, to be investigating decision-making and estimated prior probability of success. You can participate yourself at http://www.freakonomics.com/experiments/.

The study asks the participant to choose a yes/no decision that they would be willing to commit to making on the basis of a random coin toss. (Well, actually, the random decay of an atomic nucleus, but they use coin flip graphics.) In my case, the only decision I was willing to make on such a random basis is something with very low risks: namely, the decision whether or not to quit twisting my hair. I accepted the obligation to change my behavior based on a coin toss, and the coin toss says I gotta change.

Breaking a habit of such long standing will be difficult. Past behavior is the best predictor of future behavior, and all that, so when they asked how LIKELY I thought it would be that my hair-twisting habit would stick despite my best efforts to get rid of it, I estimated 90%. Yet I also claimed that I WILL PROBABLY (not certainly, but probably) conquer the habit.

Yes, I recognize the dissonance between these two statements. It intrigues me. Is it perhaps the intent of the experiment to create explicit, conscious, cognitive dissonance like this in some participants, and see what difference it makes to outcomes?

They could easily have phrased the odds question in the inverse form. They COULD have asked how likely I thought it was that I would SUCCEED in achieving my goal. That would align neatly with my statement of commitment and yield no dissonance. I could make the usual biased assumptions that strength of willpower is the same as odds of success, and over-estimate those success odds accordingly.

I don't actually know that the study cares about this, but this is what I would care about if I were the researchers.

The Freakonomics people will be following up over time by email. They're also checking on me through a friend, so there is every possibility that they expect to see an interaction between social involvement in the decision's outcome and the presence of cognitive dissonance, which is believed to drive SOCIAL behavior more strongly than it drives personal decisions kept to oneself.

I'm posting this to increase my social commitment, of course. I also posted on Facebook. It's terrible to have a psychologically trained participant make assumptions about your research project and leverage those assumptions to the max for imaginary ends. But that's life in social science. :)

In response to comment by telms on Semantic Stopsigns
Comment author: imaginaryphiend 27 February 2013 05:50:39PM 0 points [-]

Telms, it seems you are looking to tread in the path of the logical positivists, who sought to sort this out within the context of early Wittgenstein. Taken to the logical extreme with regard to a logical epistemic foundationalism, they tend to be generally dismissed, but in the context of a semantics relevant to general, meaningful discourse, I thought they tended to make a lot of good sense. Ironically, I keep going back to positivism. The irony being in the potential paradoxes of my seeing myself as essentially an epistemic nihilist. lol... I see it all metaphorically as an epistemology modelled visually to appear as ever-expanding circles of reasoning, looking like an outward-moving psychedelic spiral. If we try to deconstruct that psychedelic, perpetually moving spiral, we reach further and further toward a propositional foundation, but find we can only approach it as an infinitesimal, proposed, but not actually realizable, absolute beginning.

Comment author: telms 27 July 2013 10:12:31PM 4 points [-]

Mmm, that's not really where I'm coming from. There is an aggressively empirical research tradition in applied linguistics called "conversation analysis", which analyzes how language is actually used in real-world interaction. The raw data is actual recordings, usually with video so that the physical embodiment of the interaction and the gestures and facial expressions can be captured. The data is transcribed frame-by-frame at 1/30th of a second intervals, and includes gesture as well as vocal non-words (uh-huh, um, laugh, quavery voice, etc.) to get a more complete picture of the actual construction of meaning in real time.

So my question was actually an empirical one. It's one thing to guess at an analytical level that "God" might be a stop-signal in religious debates or in question chains involving children. But is the term really used that way? Has anyone got any unedited video recordings of such conversations that we could analyze?

After making very many errors of my own based on expectation rather than actual data, I tend to be skeptical of any statement that says "language IS used in manner X", when that manner is not demonstrated in data. Language CAN be used in manner X, yes, but is that the normative use in actual practice? We don't know until we do the hard empirical work needed to find out.

In response to Semantic Stopsigns
Comment author: telms 25 February 2013 05:58:37AM 0 points [-]

Speaking for a moment as a discourse analyst rather than a philosopher, I would like to point out that much talk is social action rather than reasoning or argument, and what is said is rarely all, or even most, of what is meant. Does anyone here know of any empirical discourse research into the actual linguistic uses of semantic "stopsigns" in conversational practice?
