Brief statement of principle
Student learning of a set of knowledge components is more likely to be complete (all KCs fully acquired) and efficient (minimum necessary time is used) when estimates of student knowledge of each component are as accurate as possible at any given time.
Description of principle
In the 1970s, mastery learning became a popular instructional technique within schooling (Bloom, 1976). In mastery learning, a student continues a learning activity until he or she has acquired all of the relevant knowledge components, and terminates the activity as soon as all knowledge components are acquired.
In the first incarnations of mastery learning, students alternated between learning and assessment activities. However, intelligent tutoring systems such as those used in LearnLabs made it possible to conduct mastery learning for individual KCs and to track KC acquisition during learning (cf. Corbett & Anderson, 1995).
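In knowledge tracing (Corbett & Anderson, 1995), the system maintains, for each KC, a running probability that the student knows it, and updates that probability after every observed response. Below is a minimal sketch of the standard update; the function name and the parameter values are illustrative defaults, not values fitted to data.

```python
def bkt_update(p_known, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step for a single KC.

    p_known: current estimate of P(student knows the KC).
    correct: True if the observed response was correct.
    p_guess: P(correct | KC not known); p_slip: P(incorrect | KC known);
    p_learn: P(KC becomes known at this opportunity). Values illustrative.
    """
    if correct:
        # Condition the knowledge estimate on a correct response.
        numer = p_known * (1.0 - p_slip)
        denom = numer + (1.0 - p_known) * p_guess
    else:
        # Condition the knowledge estimate on an incorrect response.
        numer = p_known * p_slip
        denom = numer + (1.0 - p_known) * (1.0 - p_guess)
    conditioned = numer / denom
    # The student may also learn the KC from the practice opportunity itself.
    return conditioned + (1.0 - conditioned) * p_learn
```

Each response nudges the estimate up or down, so the system's belief about each KC is current at every step of practice.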
Mastery learning in intelligent tutoring systems is highly dependent on the quality of the underlying KC model. If the model infers that a student has learned a KC, and terminates practice, when the student has not in fact acquired it, the student may be advanced to material for which that KC is a prerequisite without fully knowing it. Conversely, if the model is slow to infer that the KC has been acquired, the student may waste time "over-practicing" that skill. Over-practice has been shown to have little benefit for robust learning (cf. Cen et al., 2007).
Hence, this principle states that if knowledge estimates are as accurate as possible, students will acquire all KCs fully (i.e., knowledge is complete) and will learn in the minimum necessary time, freeing time for other learning activities (i.e., learning is efficient).
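To make the dependence on estimate accuracy concrete, the sketch below shows how per-KC estimates typically drive mastery learning: practice targets the weakest unmastered KC and stops once every estimate crosses a mastery threshold. The 0.95 threshold follows the usual knowledge-tracing convention; the data structures and function names are hypothetical.

```python
MASTERY_THRESHOLD = 0.95  # conventional knowledge-tracing mastery criterion

def select_next_problem(kc_estimates, problems_by_kc):
    """Choose practice for the weakest unmastered KC, or stop.

    kc_estimates: dict of KC name -> current P(known) estimate.
    problems_by_kc: dict of KC name -> list of available problems.
    Returns None once every KC is mastered, so that practice ends
    neither too early (incomplete knowledge) nor too late (over-practice).
    """
    unmastered = {kc: p for kc, p in kc_estimates.items()
                  if p < MASTERY_THRESHOLD}
    if not unmastered:
        return None
    weakest = min(unmastered, key=unmastered.get)
    return problems_by_kc[weakest][0]
```

If the estimates feeding this loop are biased high, practice stops too soon; if biased low, the loop keeps assigning problems the student no longer needs.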
Operational definition
As a student learns, algorithmic estimates of the student's knowledge should predict his or her future performance with as little error as possible.
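One way to apply this definition: for each practice opportunity, compute the probability of a correct response implied by the current knowledge estimate, then score those predictions against the observed responses. The sketch below uses the standard BKT observation model and root mean squared error; the choice of metric is an assumption (AUC and log-likelihood are also common in this literature), and the guess/slip values are illustrative.

```python
import math

def predicted_correct(p_known, p_guess=0.2, p_slip=0.1):
    # P(correct) implied by the knowledge estimate under the BKT
    # observation model (guess/slip values illustrative).
    return p_known * (1.0 - p_slip) + (1.0 - p_known) * p_guess

def prediction_rmse(knowledge_estimates, outcomes):
    """Root mean squared error of knowledge-based predictions.

    knowledge_estimates: P(known) just before each response.
    outcomes: observed responses coded 1 (correct) / 0 (incorrect).
    Lower values indicate more accurate knowledge estimates.
    """
    errors = [(predicted_correct(p) - y) ** 2
              for p, y in zip(knowledge_estimates, outcomes)]
    return math.sqrt(sum(errors) / len(errors))
```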
Examples
Experimental support
Laboratory experiment support
In vivo experiment support
Cen et al.'s (2007) work shows that giving additional practice on specific KCs, after the student has already been assessed as knowing them fully, has no benefit. Such over-practice is a common result when knowledge models are inaccurate (Cen et al., 2007).
Data mining analysis support
Corbett & Bhatnagar (1997) show that enriching student knowledge models with assessments of conceptual understanding, in addition to procedural learning, significantly improves prediction of post-test performance.
Baker, Corbett, and Aleven's (2008a, 2008b) work shows that knowledge estimates can be made significantly more accurate at predicting future performance by assessing the probability that the student guessed or slipped on each response.
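As a sketch of the idea (not Baker et al.'s actual detectors, which are machine-learned from contextual features such as response time and help use), per-response guess and slip estimates simply take the place of fixed per-KC parameters when conditioning the knowledge estimate:

```python
def contextual_bkt_condition(p_known, correct, p_guess_now, p_slip_now):
    """Condition P(known) on one response using guess/slip probabilities
    estimated for this specific response from its context, rather than
    fixed per-KC parameters (in the spirit of Baker, Corbett, & Aleven,
    2008a). p_guess_now / p_slip_now would come from hypothetical
    contextual detectors; here they are simply passed in.
    """
    if correct:
        numer = p_known * (1.0 - p_slip_now)
        denom = numer + (1.0 - p_known) * p_guess_now
    else:
        numer = p_known * p_slip_now
        denom = numer + (1.0 - p_known) * (1.0 - p_guess_now)
    return numer / denom

# E.g., a quick correct answer right after a hint might be assigned a
# higher guess probability, so it raises P(known) less:
p = contextual_bkt_condition(p_known=0.6, correct=True,
                             p_guess_now=0.45, p_slip_now=0.05)
```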
Level of support
Theoretical rationale
Conditions of application
This principle should apply in any case where automated knowledge estimates influence the behavior of learning software in a principled fashion, including not only mastery learning but also interventions based on the zone of proximal development (Murray & Arroyo, 2002).
Caveats, limitations, open issues, or dissenting views
This rule is only relevant if the knowledge-component model is reasonably well-specified. If the mapping between items and knowledge components is poorly specified, it will be very difficult to assess knowledge effectively.
Variations (descendants)
Generalizations (ascendants)
References
Baker, R.S.J.d., Corbett, A.T., & Aleven, V. (2008a). More Accurate Student Modeling Through Contextual Estimation of Slip and Guess Probabilities in Bayesian Knowledge Tracing. Proceedings of the 9th International Conference on Intelligent Tutoring Systems, 406-415.
Baker, R.S.J.d., Corbett, A.T., & Aleven, V. (2008b). Improving Contextual Models of Guessing and Slipping with a Truncated Training Set. Proceedings of the 1st International Conference on Educational Data Mining, 67-76.
Bloom, B. S. (1976). Human characteristics and school learning. New York: McGraw-Hill.
Cen, H., Koedinger, K.R., & Junker, B. (2007). Is Over Practice Necessary? Improving Learning Efficiency with the Cognitive Tutor through Educational Data Mining. Proceedings of AIED 2007, 511-518.
Corbett, A. T., & Anderson, J. R. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4, 253-278.
Corbett, A. T., & Bhatnagar, A. (1997). Student modeling in the ACT Programming Tutor: Adjusting a procedural learning model with declarative knowledge. Proceedings of the Sixth International Conference on User Modeling.
Murray, T., & Arroyo, I. (2002). Towards Measuring and Maintaining the Zone of Proximal Development in Adaptive Instructional Systems. Proceedings of the Sixth International Conference on Intelligent Tutoring Systems.