Micro level

The micro level of the PSLC theoretical framework focuses mainly on identifying the mechanisms that underlie robust learning. It rests on inferring unobservable conditions, activities and results from observable behavior.

[From the spring 2007 version of the strategic plan:] The micro level of our robust learning framework is important not only for achieving greater scientific precision, but for two other reasons. First, understanding the knowledge-rich nature of learning in academic courses depends on having a strong theory of the course's content domain. Second, instruction and learning are complex dynamic processes that depend on the details of multiple learning events over time. Thus, our key approach is to decompose domain knowledge and instruction into small pieces, understand how a small piece of instruction can affect a small piece of knowledge, and then relate this micro-level account to macro-level observations.


We assume that learning results from the acquisition of knowledge, broadly construed to include facts, skills, concepts, integrating schemas, etc. (VanLehn, 2006, in press), and that knowledge is decomposable into a very large number of small knowledge components.
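As a concrete illustration of this assumption, a knowledge component can be treated as a small, discrete, labeled unit of domain knowledge. The following Python sketch shows one minimal encoding; the class, field names, and example identifiers are assumptions made for illustration, not definitions from the PSLC framework.

  from dataclasses import dataclass

  # A minimal sketch: a knowledge component as a discrete, labeled unit.
  # The fields below are illustrative assumptions, not PSLC definitions.
  @dataclass(frozen=True)
  class KnowledgeComponent:
      kc_id: str         # e.g., "geometry.supplementary-angles"
      description: str   # human-readable statement of the fact, skill, or concept

  # A course's domain knowledge is then modeled as a (very large) set of such units.
  geometry_kcs = {
      KnowledgeComponent("geometry.supplementary-angles",
                         "Two angles are supplementary if their measures sum to 180 degrees."),
      KnowledgeComponent("geometry.complementary-angles",
                         "Two angles are complementary if their measures sum to 90 degrees."),
  }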


A learning event is a time interval in the life of the student, usually lasting from a few seconds to a minute, wherein the student applies or constructs a knowledge component. For instance, the following are all learning events involving the same geometric knowledge component:

  • reading a definition of supplementary angles
  • studying a worked example involving supplementary angles
  • using supplementary angles to solve a problem or construct a proof
  • explaining supplementary angles to another student
  • incorrectly selecting a supplementary angle pair from a set of angle pairs, getting feedback from a tutor, and then correctly selecting the supplementary pair
  • starting to write a definition of supplementary angles, realizing that one is uncertain of the distinction between supplementary and complementary angles, looking up supplementary angles in the textbook, and writing a correct definition

A single learning event often mixes application, construction and/or refinement of the knowledge component.
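For concreteness, a learning event can be recorded as a timestamped episode in which a particular student applies or constructs a particular knowledge component. The sketch below is a minimal illustration; the field names and event kinds are assumptions chosen for this example. Recoding a data stream under an alternative knowledge component theory, as discussed next, then amounts to relabeling the kc_id field of such records.

  from dataclasses import dataclass

  @dataclass
  class LearningEvent:
      student_id: str
      kc_id: str        # the knowledge component applied or constructed
      timestamp: float  # e.g., seconds since the start of the course
      kind: str         # e.g., "read-definition", "study-example", "solve", "explain"
      correct: bool     # whether the student's performance in the event was correct

  # Two learning events involving the same geometric knowledge component:
  events = [
      LearningEvent("s01", "geometry.supplementary-angles", 12.0, "read-definition", True),
      LearningEvent("s01", "geometry.supplementary-angles", 95.0, "solve", False),
  ]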


A micro-level analysis of an experiment is based on having completed two complex, effortful analyses: (1) decomposition of the domain knowledge into knowledge components and (2) decomposition of each student’s instruction into learning events. As we discuss in the facilitation strategy section, technological features of the LearnLab support researchers in such analyses. Because many of the student activities in the seven LearnLab courses are instrumented for recording (e.g., an online course or tutor logs each student action and system response), researchers have access to detailed data streams of learning events over time. Often these learning events are pre-coded in terms of the knowledge component theories built into the educational technology, but in all cases the data are available in the DataShop for recoding and testing of alternative knowledge component theories. With such a knowledge component and learning event analysis in hand, some predictions follow easily. One fundamental hypothesis is:

  • Knowledge Component Hypothesis: If a student’s instructional history contains many learning events for each of the knowledge components needed to perform well on a particular assessment task, then we predict that the student will succeed on that task; otherwise, we predict failure.
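Read operationally, this hypothesis can be sketched as a predicate over a student's instructional history. Because the hypothesis says only "many" learning events, the cutoff min_events below is an illustrative assumption, not a value the framework specifies.

  def predict_success(history, required_kcs, min_events=3):
      """Predict success on a task if every required knowledge component
      appears in at least min_events prior learning events.

      history: list of kc_id strings, one per prior learning event
      required_kcs: set of kc_ids the assessment task requires
      min_events: illustrative cutoff for "many" (an assumption)
      """
      counts = {}
      for kc in history:
          counts[kc] = counts.get(kc, 0) + 1
      return all(counts.get(kc, 0) >= min_events for kc in required_kcs)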

Metric versions of this hypothesis exist as well. For instance, each knowledge component typically has a strength, which increases each time the component is used, and contextual retrieval features, which are generalized each time it is used. These two properties, strength and retrieval features, allow more precise predictions about the probability of student success over time and across contexts of use.
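One way to sketch such a metric version is to let a knowledge component's strength grow with each use and to map strength to the probability of correct application with a logistic function, in the spirit of the learning-curve models commonly fit to tutor log data. The functional form and parameter values below are assumptions chosen purely for illustration.

  import math

  def success_probability(prior_uses, initial_strength=-1.0, gain_per_use=0.5):
      """Logistic sketch: each use of a knowledge component adds to its
      strength, and higher strength yields a higher probability of correct
      application. All parameter values are illustrative assumptions."""
      strength = initial_strength + gain_per_use * prior_uses
      return 1.0 / (1.0 + math.exp(-strength))

  # Predicted success on a knowledge component's 1st through 5th opportunity:
  print([round(success_probability(n), 2) for n in range(5)])
  # [0.27, 0.38, 0.5, 0.62, 0.73]

A fuller sketch would also generalize the retrieval features across contexts of use; the strength term alone already produces the rising learning curve that such metric predictions describe.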

References

  • VanLehn, K. (2006). The behavior of tutoring systems. International Journal of Artificial Intelligence in Education, 16(3), 227-265.
  • VanLehn, K. (in press). Intelligent tutoring systems for continuous, embedded assessment. In C. A. Dwyer (Ed.), The future of assessment: Shaping teaching and learning. Mahwah, NJ: Erlbaum.