A couple of days ago, prompted by several recent posts by Owen_Richardson, I checked out the book "Theory of Instruction" (Engelmann and Carnine, 1982) from my university library and promised to read it this weekend and write a post about Direct Instruction. This is that post.
Learning through examples
Direct Instruction is based on a theory of learning that assumes the learner capable of extracting a concept inductively through examples of that concept. I may not know what a blegg is, but after you show me several examples of bleggs and rubes, I will be able to figure it out. The principle of DI is to use the same basic procedure of giving examples to teach every concept imaginable. Naturally, in some cases, the process might be sped up by giving an explanation first; furthermore, there are some things in every subject you just have to memorize, and DI doesn't magically change that. However, it is assumed that the examples are where the real learning occurs.
The meat of the theory is using experimental data and cognitive science to establish rules for how examples ought to be given. Here are a few of the more basic ones:
- It is impossible to demonstrate a concept using positive examples alone. Here I am reminded of the 2-4-6 game, in which subjects fail to test triplets that disconfirm their hypothesis. A teacher has control over the examples presented, so it is important to disconfirm the hypotheses that the learners (consciously or unconsciously) generate.
- To successfully teach a quality, it is important that all positive examples only share that one quality. Imagine that you are being taught what a blegg is by a sequence of examples that include blue eggs and red cubes. By the end, you will not be certain whether the defining feature of a blegg is that it's blue, or that it's an egg, or both at once, or if the critical factor is the vanadium ore content of an object.
- The way the example is presented is also a quality that must be controlled in this fashion. This is because inductive learning is not entirely a deliberate process on the part of the learner. For instance, if positive and negative examples alternate, the learner may extract the rule that "every other object is a blegg". There are multiple ways this can become a real problem: I've encountered calculus students who were confused by a problem that asked them to integrate with respect to a variable called "t", rather than "x".
- The examples must be followed by tests, which fall within the range of the given examples but are not identical to them. These tests are how the learning process is diagnosed, and they are the reason you hear ideas such as "DI is about asking the students 10 questions a minute." That is not a defining feature of DI, but you can see how it easily happens when the concept being taught is a simple one. (A toy sketch of this example-and-test loop follows below.)
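To make the points in the list above concrete, here is a minimal sketch in Python of a learner narrowing in on "blegg" from labelled examples. Everything in it (the two features, the candidate rules, the particular teaching sequence) is my own illustrative invention rather than anything from the book; it is only meant to show how positive examples alone leave several rules standing, and how negatives that differ in a single quality eliminate them.

```python
from itertools import product

COLORS = ["blue", "red"]
SHAPES = ["egg", "cube"]

def make_rules():
    """Every rule the learner might plausibly induce for 'blegg'."""
    rules = {}
    for c in COLORS:
        rules[f"color == {c}"] = lambda obj, c=c: obj[0] == c
    for s in SHAPES:
        rules[f"shape == {s}"] = lambda obj, s=s: obj[1] == s
    for c, s in product(COLORS, SHAPES):
        rules[f"color == {c} and shape == {s}"] = lambda obj, c=c, s=s: obj == (c, s)
    return rules

def consistent(rules, examples):
    """Keep only the rules that agree with every labelled example so far."""
    return {name: rule for name, rule in rules.items()
            if all(rule(obj) == label for obj, label in examples)}

# Positive and negative teaching examples: (object, is_it_a_blegg).
teaching = [(("blue", "egg"), True), (("red", "cube"), False)]
print(sorted(consistent(make_rules(), teaching)))
# Three rules survive: 'color == blue', 'shape == egg', and
# 'color == blue and shape == egg' -- the communication is still ambiguous.

# A negative that differs from the positive in only one quality knocks out
# 'color == blue'; a second positive then settles what remains.
teaching += [(("blue", "cube"), False), (("red", "egg"), True)]
print(sorted(consistent(make_rules(), teaching)))
# Only 'shape == egg' survives; test items drawn from the same range
# would now check whether the learner arrived at this rule as well.
```

This is of course a caricature (a real learner's hypothesis space cannot be enumerated like this), but it captures the structural claim: the teacher controls the example sequence, and therefore controls which rules it leaves standing.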
I don't mean to imply that DI is restricted to dealing with yes-or-no identification questions. The examples and concepts can get more complicated, and there is a classification of concepts as comparative, multi-dimensional, joining, etc. This determines how the examples should be presented, but I won't get into the classification here. In practice, a lot of concepts are taught through several sequences of examples. For instance, teaching integration by substitution might first involve a simple sequence of examples about identifying when the method is appropriate, then a sequence about choosing the correct substitution, before actually teaching students to solve an integration problem using the method.
Faultless communication
"Faultless communication" isn't a misnomer exactly, but I think it lends itself to some easy misconceptions. The basic idea is that a sequence of examples is a faultless communication when there is only one possible rule describing all the examples; there is then the often-repeated statement that if a faultless communication fails, the problem is with the learner, not with the method.
When the book gets into details, however, the actual theory is much less dismissive. In fact, it is emphasized that in general, when a method fails, there's something wrong with the method. A well-designed sequence of examples is not (usually) a faultless communication. Rather, it is a sequence of examples calibrated in such a way that, if the learner arrives at an incorrect rule, the test examples will identify the incorrect rule, which can then be traced back to an ambiguity in the examples given. Alternatively, it can make it clear that the learner lacks sufficient background to identify the correct rule.
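Continuing the same toy setup (and again purely as my own illustration, not the book's), a sequence is "faultless" in this sense when exactly one rule survives it. When more than one rule survives, the useful test items are precisely those on which the survivors disagree: the learner's answers on them reveal which rule was actually induced and point back at the ambiguity in the examples.

```python
from itertools import product

COLORS = ["blue", "red"]
SHAPES = ["egg", "cube"]
UNIVERSE = list(product(COLORS, SHAPES))

# Suppose an ambiguous teaching sequence left these two rules standing.
candidates = {
    "shape == egg": lambda obj: obj[1] == "egg",
    "color == blue and shape == egg": lambda obj: obj == ("blue", "egg"),
}

# If only one candidate survived, the sequence was a faultless communication.
# Otherwise, the diagnostic test items are the ones the survivors disagree on.
diagnostic = [obj for obj in UNIVERSE
              if len({rule(obj) for rule in candidates.values()}) > 1]
print(diagnostic)  # [('red', 'egg')]

# A learner who calls the red egg a non-blegg has induced the narrower rule,
# which traces the failure back to the examples: no red positive was shown.
```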
The actual issue that the concept of faultless communication is meant to address is the following. When you don't have a clear way to diagnose failure while teaching a concept, it leads to blind experimentation: you ask "Did everyone understand that?" and, upon a negative answer, say "Okay, let me try explaining it in some different way..." You might never stumble upon the reason that you are misunderstood, except by chance.
My own thoughts
A disclaimer: I have very little experience with teaching in general, and this is my first encounter with a complete theory of teaching. Parts of Direct Instruction feel overly restrictive to me; it doesn't seem to have much of a place for things like lecturing, for instance. Then again, a theory must be somewhat restrictive to be effective: unless the intuitive way I would teach something is already magically optimal, the theory does me no good if it never stops me from doing something I would otherwise do.
An interesting aspect of Direct Instruction that I don't think has been pointed out yet (well, the book, written in 1982, might not be a likely place to find such a thought): this method of teaching seems ideally suited for teaching an Artificial Intelligence. Part of the gimmick of Direct Instruction is that it tries, as much as possible, not to make assumptions about what sort of things will be obvious to the learner. Granted, a lot of the internal structure still relies on experimental data gathered from human learners, but if we're creating an AI, it's a lot easier to program in a set of fundamental responses describing the way it should learn inductively, than to program in the concept of "red" or "faster than" by hand.
I still have the book and plan to hold on to it for a week or so; if there are any questions about what Direct Instruction is or is not, ask them in the comments and I will do my best to figure out what the theory says one way or the other.
Yes, Project Follow-Through had some problems, but I don't think it's likely that those problems introduced a systematic bias towards DI large enough to explain away the huge differences, especially since similar results were replicated in many smaller studies where better random assignment and other controls were possible.
"Research on Direct Instruction" (Adams and Engelmann, 1996) goes into much better detail on Follow-Through and those other experiments.
Actually, it basically covers three different types of studies:
- Those dealing with the relative effectiveness of DI compared to other models (in a meta-analysis).
- Those pinning down the internal details of DI theory, validating unique predictions it makes (about the effects that specific variations in sequencing, juxtaposition, wording, pacing, etc. should have on student performance). Only one prediction ever came out differently than expected: that a sequence of examples starting with negatives would be more efficient at narrowing in on a concept for the learner. It was found that while this did hold with more sophisticated older learners, more naive younger students simply interpreted 'This is not [whatever]' to mean 'This is not important, so don't attend to this'.
- Those demonstrating 'non-normative' outcomes, for instance calling Piagetian developmental theory into question.
You should be able to find the book at a local university library. Could you get your hands on it? I'd love to hear what you think after reading it!