Musings on Learning Events


As the curmudgeon on the PSLC Executive Committee who has never quite joined the bandwagon in our theoretical statement, I offer this wiki entry as a prod to refining our theoretical statements. In particular, in this brief document I explore the conceptual bases of "event spaces" and "robust learning". (Disclaimer: Until this moment, I was a wiki virgin: I've never made a wiki entry, and rarely read wikis. Thus this entry has some of the flavor of a blog, rather than a proper wiki entry, because of my use of the first person. I hope it still serves a useful purpose. David Klahr)

I. Event spaces

At present, here is what we say about Learning Events in the PSLC wiki (4/18/2007; I have added the numbering):

1. Learning events: A mental event involving the construction or application of a purported knowledge component. The event may be directly driven by instruction as in reading a definition of the knowledge component or applying it in a practice problem. While the instruction has a particular correct knowledge component as a target, the student may construct or apply a different correct or incorrect knowledge component.

2. Learning event space: The set of paths that students have available for a particular learning event.

3. Learning event scheduling: It has been known since at least Ebbinghaus (1885) that the schedule of learning events influences long-term retention. Learning event scheduling is therefore an independent variable that can be manipulated. However, because of interactions with task domain (declarative or procedural), task type (study or test), and repetition spacing, learning event scheduling is a complex topic.

I do not fully understand these definitions. For one thing, it is not clear how a "purported" knowledge component differs from a real one, or for that matter, how one would determine the existence of a knowledge component. For another, the ambiguity (or equivalence) between "construction" and "application" is very confusing. More generally, these three definitions appear to conflate “instruction”, “assessment”, and “learning”. Thus, in the following musings, I attempt to clarify what I see as important differences between instructional events, assessment events, and learning events. Here is how I see it: Most of our studies aim to provide some instruction (which is usually clearly and unambiguously described), and to measure the effect of that instruction on learning (which is never directly observed) via some assessment procedure (which can be clearly defined … although in some cases it isn’t) that is designed to demonstrate that the intended learning actually occurred.

I.1 Events

An event is an occurrence at a particular time, with a particular duration from T to T+d (d = duration of event). In instructional research, d can vary from seconds to semesters, depending on the grain size of the analysis. There are three distinct classes of events (instruction, learning, and assessment), each with its own space. (Comment by Koedinger: These distinctions are great. However, the range of duration stated here is too long. The original notion of a learning event (going back to a pre-PSLC VanLehn paper) was invoked to try to track and explain how an instructional treatment taking place over many hours (or more) might be explained in terms of the sequence of 'learning events' occurring over a few seconds or minutes. The intention within PSLC is the same. Thus, a d of, say, 10 minutes would be unusually long and might be a reasonable loose upper bound. The breakdown of student work in LearnLab courses into instructional, assessment, and/or learning events is critical, for instance, to doing learning curve analysis and tracking whether a student seems to be getting better at a particular knowledge component (as only assessment events can determine) as a consequence of one or more successive instructional events and, it is hoped, consequential learning events.)
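
To make Koedinger's point about learning curve analysis concrete, here is a minimal sketch of the bookkeeping it implies, written in Python purely for illustration. The event fields, the sample log, and the knowledge component name are hypothetical, not actual LearnLab or DataShop structures. The point of the sketch is simply that only assessment events are used to estimate the knowledge state, and that the error rate on a knowledge component is tracked across successive assessment opportunities:

    from collections import defaultdict

    # Each logged event: (student, knowledge_component, event_type, correct)
    # correct is None for purely instructional events. (Hypothetical data.)
    log = [
        ("s1", "slope-intercept", "instructional", None),
        ("s1", "slope-intercept", "assessment", False),
        ("s1", "slope-intercept", "instructional", None),
        ("s1", "slope-intercept", "assessment", True),
        ("s2", "slope-intercept", "assessment", False),
        ("s2", "slope-intercept", "assessment", True),
    ]

    errors = defaultdict(list)      # (kc, opportunity index) -> list of 0/1 errors
    opportunity = defaultdict(int)  # (student, kc) -> assessment opportunities so far

    for student, kc, etype, correct in log:
        if etype != "assessment":
            continue                # only assessment events reveal the knowledge state
        i = opportunity[(student, kc)]
        errors[(kc, i)].append(0 if correct else 1)
        opportunity[(student, kc)] += 1

    for (kc, i), errs in sorted(errors.items()):
        print(f"{kc}: opportunity {i}, error rate {sum(errs) / len(errs):.2f}")

A declining error rate across successive assessment opportunities is the (indirect) evidence that learning events have intervened between the instructional events and the assessments.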

Each space can be characterized in terms of increasing aggregates. The smallest unit is the event. A series of events is a path. The set of all possible paths is the space. Different levels of aggregation, instrumentation precision, temporal duration, and theoretical language may compress what is a space from one perspective into an event in another. Conversely, an event may be expanded into a space as one drills down to finer grain sizes. Successful cross-mapping, comparison, and contrast among PSLC projects depend on explicit recognition of these different grain sizes.

  • I.1.a Learning Event Space: Learning takes place in people's minds. A learning event is a change in the learner's internal cognitive and/or motivational states. It is a process with a temporal location and duration, but it is not directly observable. A learning event path consists of a sequence of learning events leading toward increased procedural and conceptual knowledge, as well as changes in motivational states. Thus, learning event spaces are hypothetical entities: induced by instructional events and measured by assessment events.
  • I.1.b Instructional Event Space: Instruction is activity in the learner's external environment that is intended to cause a learning event. In most of our work, instruction is intentional, goal-directed, and highly specific with respect to the learning events it is designed to cause. The intentions and goals in instructional event paths are typically, but not exclusively, created by agents other than the learner.
    • An Instructional Event, like a Learning Event, has a time and a duration. Unlike learning events, instructional events are directly observable.
    • The activity comprising an instructional event can be generated by the learner or by other human or non-human agents. Some of these agents can also be learners having their own set of event spaces.
    • Usually, instructional events and instructional paths are planned and intentional, but they can be unplanned, inadvertent, and unanticipated by the learner or the instructor.
    • In addition, instructional events can be classified as "other-generated" (i.e., instruction controlled and presented by an agent external to the learner) or "self-generated" (instruction controlled by the learner, such as self-paced problem solving, self-explanation, rehearsal, etc.).
    • Steps, Lessons, Courses, Curricula, etc. are types of instructional event paths: sequences of Instructional events of various grain sizes and complexity, perhaps with contingencies based on interspersed Assessment Events.
  • I.1.c Assessment Event Space: Assessments are actions designed to yield information about the learner's knowledge state. Assessment events can be initiated, and observed, by either the learner or an external agent or both. Some assessments, in addition to producing information about learners' internal states, may serve as further instructional events. (Comment by Pavlik: I think this clears up some of my confusion when I discussed test learning events and study learning events in a recent paper. I think now that I should have been talking about test (assessment) instructional events and study instructional events. My previous distinction between test and study learning events did not deal as well with the notion of observability; when this is considered as Klahr suggests, it seems clear to me that I was talking about two canonical types of instructional events (rather than two canonical types of learning events), which correspond to Klahr's instructional and assessment events, either of which may cause a learning event.)
    • At present, the wiki is surprisingly silent on this issue: there are no entries for “assessment”, “measurement”, or “testing”. Test items are alluded to in some definitions, but testing and assessment are not treated at the top level. This is a serious weakness: no science can advance without clear operational definitions of its measurement procedures. Moreover, several of our projects already have some of the best knowledge assessment procedures ever devised (e.g., in the cognitive tutors). But this needs to be made explicit in our theory.

Ideally, but rarely, instructional event paths are perfectly correlated with learning event paths. That is, for every instructional event there is a corresponding, and desired, learning event. But an Instructional Event is neither necessary nor sufficient for a Learning Event: i.e., learning may occur in the absence of instructional events, and it may not always take place in the presence of instructional events. Correspondingly, assessment events may vary widely in the extent to which they correspond to instructional events and learning events.

II. Robust learning

The wiki definition is as follows:

Robust learning: (paraphrased slightly from wiki) Learning is robust if the acquired knowledge or skill:

(a) is retained for long periods of time,

OR

(b) can be used in situations that differ significantly from the situations present during instruction.

OR

(c) allows students to learn more quickly and/or more effectively.

This definition seems indistinguishable from the many senses in which the term “far transfer” has been used for over 100 years. Moreover, “robust learning” has been used in PSLC-speak in two different senses: as both an event and an assessment.

• Robust Learning Events: A learning event that causes knowledge changes in the learner that are sufficiently important, broad, and stable that their occurrence can be revealed by Robust Learning Assessments.

• Robust Learning Assessments: Assessment events that occur at a substantial temporal distance from, and in substantially different contexts than, the immediate context in which the Learning Event occurred. (Note that, by definition, it is impossible to know whether a robust learning event occurred until long after it did (or didn’t) occur, because a robust learning event can only be revealed by a robust learning assessment.)

At present, I do not see a clear conceptual distinction between the PSLC's preferred term "robust learning" and the venerable term "far transfer", because the distance metaphor in "far transfer" is itself very ill defined. The dimensions along which the assessment context differs from the learning context are many and varied. They include such things as: length of temporal interval, overlap in knowledge contexts, depth of underlying knowledge structure, social context, and modality (written, spoken, visual, etc.) (cf. Barnett & Ceci's (2002) steps toward remedying this conceptual problem). Given that we make robust learning one of the central aims of both our instruction and our theory, I believe that we should build upon what efforts have already been made to understand far transfer, rather than ignore, or at best grudgingly acknowledge, the existence of that literature.

PSLC-speak makes a distinction between far transfer and what is termed "accelerated future learning" (AFL). The fluency and refinement cluster wiki defines AFL as: Learning that proceeds more effectively and more rapidly because of prior learning. It differs from transfer in its putative generality, not dependent on encounters with similar materials that require similar procedures (transfer). It may include what are called “learning to learn” skills. That same section of the wiki says "by hypothesis the robust learning produces accelerated learning through component competencies or through gains in efficiency that arise from procedures (e.g. chunking) that can apply to new learning." But elsewhere robust learning is defined as producing accelerated learning. It can't be both a hypothesized process AND a definition! How can the hypothesis be tested if the construct is defined this way? I see no need to isolate AFL from the broad class of types of transfer. Here is a simple example: If I master one web browser (Netscape) and that knowledge enables me to master another browser (Safari) much more rapidly than (a) I learned Netscape or (b) a novice learns Safari, then that would seem to imply that my learning of Netscape was "robust" because it accelerated my "future learning" of Safari. But isn't that just the same as saying that there was a lot of transfer from Netscape to Safari, including not just the specifics of each system, but also knowledge about what kinds of questions to ask about a browser? What is the new language buying us? And what is it costing us in terms of clarity and credibility?

(Pavlik Comment: OK, as far as the AFL issue, the clear distinction I see with normal performance transfer is that AFL implies the future learning will be improved on new material, but perhaps there will be no improvement in a simpler assessment of performance on the same new material. In other words, maybe if I master Netscape it will not help my initial performance with Safari, but my learning curve will have a steeper slope from the same origin. Certainly, however, this is a breed of transfer, and so if we do make AFL a centerpiece of robust learning, we must more clearly refer to it as distinct from simple performance transfer (transfer in the origin of the learning curve with the new material). So we could say that robust learning is learning that transfers to improve performance (the intercept of the learning curve for new information), transfers to improve learning (the slope of the new learning function for new information), and transfers after long intervals. Having said this, it seems like the term "far transfer" might reflect this well. However, the big caveat is whether far transfer can handle describing a situation where the transfer after a long interval is transfer to material that is not new. In this case, can we say that we have a component of "far transfer" when the material is identical to what is learned? I'm not sure that makes sense, while it does seem to me that it makes sense to say we have a component of robust learning.... I guess the point I am not clear about is whether far transfer adequately captures those situations where we don't really have "transfer". David, how common are studies that refer to long-term retention as far transfer when there is no change in the materials between learning and assessment? Since learning can often be useful without transfer (i.e., 5x7=35 is a useful fact to have in long-term memory even if a student does not transfer that knowledge to division or algebra), it seems we want to have a theory that does not exclude the utility of end-point learning (where the learning is focused only on later performance with the same items). I fear far transfer excludes an acknowledgement of the utility of learning for the simple sake of long-term performance with the same stimuli. This dialogue begs the question of whether short-term AFL, short-term performance transfer, or long-term transfer to identical material even qualify as robust learning. Does true robust learning require all three factors to agree, or does a single aspect of robust learning count?)
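
Pavlik's intercept/slope distinction can be put in similarly concrete terms. The sketch below is only an illustration: the power-law form of the learning curve, the fitting routine, and both error sequences are assumptions for the sake of the example, not PSLC data. It fits error = a * t^(-b) to two hypothetical groups learning Safari, one with and one without prior Netscape mastery. A markedly lower intercept a for the experienced group would indicate performance transfer; similar intercepts but a larger learning rate b would be the signature of accelerated future learning.

    import math

    def fit_power_law(errors):
        # Fit error = a * t**(-b) by least squares in log-log space.
        xs = [math.log(t) for t in range(1, len(errors) + 1)]
        ys = [math.log(e) for e in errors]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        intercept = my - slope * mx
        return math.exp(intercept), -slope   # a = error at t=1, b = learning rate

    novice      = [0.60, 0.48, 0.41, 0.36, 0.33, 0.30]   # hypothetical error rates
    experienced = [0.55, 0.38, 0.29, 0.23, 0.20, 0.17]   # hypothetical error rates

    for label, data in [("novice", novice), ("experienced", experienced)]:
        a, b = fit_power_law(data)
        print(f"{label:12s} intercept a = {a:.2f}, learning rate b = {b:.2f}")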

III. Conclusion

It might be an interesting exercise to take several of our projects and see if they can be usefully described and compared using this terminology. My hope is that such an endeavor would have less of the feel of a Procrustean bed than our efforts to date.