Accurate knowledge estimates principle

Brief statement of principle

Student learning of a set of knowledge components is more likely to be complete (all KCs fully acquired) and efficient (minimum necessary time is used) when estimates of student knowledge of each component are as accurate as possible at any given time.

Description of principle

In the 1970s, mastery learning became a popular instructional technique in schools (Bloom, 1978). In mastery learning, a student continues a learning activity until he or she has acquired all of the relevant knowledge components, and terminates the activity as soon as all of those components are acquired.

In the first incarnations of mastery learning, students alternated between learning and assessment activities. However, intelligent tutoring systems such as those used in LearnLabs made it possible to conduct mastery learning for individual KCs and to track KC acquisition during learning (cf. Corbett & Anderson, 1995).
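
Knowledge tracing of this kind maintains, for each KC, a running probability that the student knows it, and revises that probability after every practice opportunity. The sketch below is a minimal illustration of such a per-KC update in the spirit of Corbett & Anderson (1995); the function name and the parameter values (guess, slip, and learning probabilities) are illustrative assumptions, not values from any published model.

    # A minimal sketch of per-KC knowledge tracing in the spirit of
    # Corbett & Anderson (1995). Parameter values are illustrative.

    def update_knowledge(p_known, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
        """Update the probability that the student knows a KC after one
        observed practice opportunity."""
        if correct:
            evidence = p_known * (1 - p_slip)
            posterior = evidence / (evidence + (1 - p_known) * p_guess)
        else:
            evidence = p_known * p_slip
            posterior = evidence / (evidence + (1 - p_known) * (1 - p_guess))
        # Allow for the chance that the student learned the KC on this opportunity.
        return posterior + (1 - posterior) * p_learn

    # Example: start from a prior estimate of 0.3 and observe three correct responses.
    p = 0.3
    for correct in [True, True, True]:
        p = update_knowledge(p, correct)
        print(round(p, 3))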

Mastery learning in intelligent tutoring systems is highly dependent on the quality of the underlying KC model. If the model infers that a student has learned a KC at a given time and terminates practice, but the student has not actually acquired the KC, the student may be advanced to material for which that KC is a prerequisite without fully knowing it. Conversely, if the model is slow to infer that the KC has been acquired, the student may waste time "overpracticing" that skill; such over-practice has been shown to have little benefit for robust learning (cf. Cen et al., 2007).
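
In practice, the decision to terminate practice is usually made by comparing the knowledge estimate to a mastery threshold. The sketch below assumes the update_knowledge() function from the previous sketch and an illustrative 0.95 threshold; it shows how an inflated estimate ends practice prematurely, while a deflated estimate produces exactly the over-practice described above.

    # A minimal sketch of threshold-based mastery decisions for one KC.
    # Assumes the update_knowledge() sketch above; the 0.95 threshold is
    # illustrative (Cognitive Tutors commonly use a value around 0.95).

    MASTERY_THRESHOLD = 0.95

    def practice_until_mastery(responses, p_known=0.3):
        """Step through practice opportunities until the estimate crosses
        the mastery threshold, returning (opportunities used, final estimate)."""
        for opportunity, correct in enumerate(responses, start=1):
            p_known = update_knowledge(p_known, correct)
            if p_known >= MASTERY_THRESHOLD:
                # If the estimate is inflated, this exit comes too early
                # (premature mastery).
                return opportunity, p_known
        # If the estimate is deflated, the student stays here despite already
        # knowing the KC (over-practice).
        return len(responses), p_known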

Hence, this principle states that students will acquire all KCs fully (i.e., knowledge is complete) and will learn in the minimum necessary time, leaving time for other learning activities (i.e., learning is efficient), if the knowledge estimates are as accurate as possible.

Operational definition

As a student learns, algorithmic estimates of his or her knowledge can be used to predict his or her future performance with as little error as possible.
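
One way to make this definition concrete is to score each prediction of the next response against the response actually observed, using an error metric such as root mean squared error. The sketch below does this; the metric choice, data format, and values are illustrative assumptions, not part of the principle.

    import math

    # A minimal sketch of scoring knowledge-based predictions against observed
    # performance. Each pair is (predicted probability of a correct response,
    # observed correctness); the data and the choice of RMSE are illustrative.

    def rmse(predictions_and_outcomes):
        """Root mean squared error between predicted probabilities of success
        and observed correctness (1 = correct, 0 = incorrect)."""
        errors = [(p - actual) ** 2 for p, actual in predictions_and_outcomes]
        return math.sqrt(sum(errors) / len(errors))

    # Lower error means the knowledge estimates track actual performance better.
    observed = [(0.80, 1), (0.65, 1), (0.90, 0), (0.40, 0)]
    print(round(rmse(observed), 3))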

Examples

Experimental support

Laboratory experiment support

In vivo experiment support

Cen et al.'s (2007) work shows that giving additional practice on specific KCs, after the student has already been assessed as knowing the KC fully, has no benefit. Such over-practice is a common outcome when knowledge models are inaccurate (Cen et al., 2007).

Data mining analysis support

Corbett & Bhatnagar (1997) show that enriching student knowledge models through assessment of conceptual understanding as well as procedural learning significantly improves prediction of post-test performance.

Baker, Aleven, & Corbett's (2008a, 2008b) work shows that knowledge estimates can be made significantly more accurate at predicting future performance by assessing the probability that the student guessed or slipped.
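
The role of guessing and slipping in prediction can be seen from the standard relationship between a knowledge estimate and the probability of a correct response. The sketch below shows that relationship with illustrative parameter values; it is not the contextual guess/slip estimation method of Baker, Aleven, & Corbett.

    # A minimal sketch of how guess and slip probabilities enter the
    # prediction of the next response. Parameter values are illustrative;
    # this is not the contextual estimation method of Baker, Aleven, & Corbett.

    def predict_correct(p_known, p_guess=0.2, p_slip=0.1):
        """Probability of a correct response: the student either knows the KC
        and does not slip, or does not know it and guesses correctly."""
        return p_known * (1 - p_slip) + (1 - p_known) * p_guess

    # The same knowledge estimate yields different predictions depending on
    # how likely guessing and slipping are in context.
    print(round(predict_correct(0.6, p_guess=0.30, p_slip=0.05), 3))
    print(round(predict_correct(0.6, p_guess=0.05, p_slip=0.20), 3))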


Level of support

Theoretical rationale

(These entries should link to one or more learning processes.)

Conditions of application

Caveats, limitations, open issues, or dissenting views

Variations (descendants)

Generalizations (ascendants)

References

Cen, H., Koedinger, K. R., & Junker, B. (2007). Is over practice necessary? Improving learning efficiency with the Cognitive Tutor through educational data mining. Proceedings of AIED 2007, 511-518.