Optimized scheduling
Revision as of 18:07, 8 January 2008
==Brief statement of principle==
Optimized scheduling yields better long-term retention than a practice schedule based on fixed intervals (whether massed or spaced) or intervals self-determined by students (e.g., in flash card use).
==Description of principle==
This principle involves applying an instructional schedule ordered to maximize robust learning. Optimized scheduling maximizes instructional efficiency (i.e., robust learning gains per unit of instructional time) by mathematically deriving when a student should repeat practice of a knowledge component. The interval between practices is optimal (neither too short nor too long) when it best balances the benefit of enhanced memory strength from retrieval at a long interval (spaced practice) against the cost of retraining time after retrieval failure at a long interval.
Mathematical models may be used to produce optimized schedules by computing, at each step, which knowledge component will be most efficiently learned if practiced next.
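As a concrete illustration of such a model, here is a minimal sketch in the spirit of an ACT-R activation account (not Pavlik's fitted model): each past practice decays at a rate set by the activation at the moment it occurred, so massed practices fade faster, and the scheduler picks the knowledge component with the best predicted retention gain per second of expected drill time. All constants and the time-cost numbers are illustrative assumptions.

```python
import math

# Illustrative constants (assumed values, not fitted parameters).
C, A = 0.217, 0.177       # decay grows with activation at practice time
TAU, S = -0.7, 0.25       # recall threshold and logistic noise
T_OK, T_FAIL = 4.0, 12.0  # seconds: passed drill vs failed drill plus review

def _activation(times, decays, now):
    # Base-level activation: log of summed, individually decayed traces.
    return math.log(sum(max(now - t, 1e-3) ** -d for t, d in zip(times, decays)))

def activation(times, now):
    """Activation at `now`; each practice decays at a rate set by the
    activation when it occurred, so massed practices fade faster."""
    decays = []
    for i, t in enumerate(times):
        if i == 0:
            decays.append(A)
        else:
            decays.append(C * math.exp(_activation(times[:i], decays, t)) + A)
    return _activation(times, decays, now)

def p_recall(m):
    """Logistic map from activation to recall probability."""
    return 1.0 / (1.0 + math.exp((TAU - m) / S))

def best_next(history, now, horizon=7 * 86400):
    """Pick the KC whose practice right now buys the most predicted recall
    at a retention test `horizon` seconds away, per second of drill time."""
    best, best_eff = None, float("-inf")
    for kc, times in history.items():
        p_now = p_recall(activation(times, now))
        cost = p_now * T_OK + (1 - p_now) * T_FAIL  # failures need review
        gain = (p_recall(activation(times + [now], now + horizon))
                - p_recall(activation(times, now + horizon)))
        if gain / cost > best_eff:
            best, best_eff = kc, gain / cost
    return best
```

A scheduler would call `best_next` before every trial, append the chosen KC's practice time to its history, and repeat; the function names and history format here are inventions for the sketch.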
The spacing recommendation in the recent IES practice guide "Organizing Instruction and Study to Improve Student Learning" describes a generalization of this principle, basically that spaced practice leads to better long-term retention than massed practice [this should probably be added as a principle in the hierarchy]. Recent work by Pavlik (2007) qualifies these conclusions because most (if not all) of the research referenced in the guide does not control for time on task.
In short, optimized scheduling means scheduling practice to maximize some future measure of learning given a fixed amount of current practice time.
Examples of optimized scheduling include learning event scheduling (see Pavlik's research program), the knowledge tracing algorithm used in Cognitive Tutors (see Cen's study), and adaptive fading of scaffolding or assistance (see Renkl's study).
==Experimental support==
===Laboratory experiment support===
Optimal scheduling (an expanding schedule computed by the ACT-R based algorithm) results in easier (and faster) practice and better one-week retention in the lab (Pavlik & Anderson, accepted).
===In vivo experiment support===
Fall 2007 Chinese I results produced by Pavlik show increased time on task (motivational effect) for optimized vocabulary practice. In another experiment in Fall 2007, students using an optimized Chinese radical trainer experienced a gain in their future learning of Hanzi characters.
==Theoretical rationale==
Optimized scheduling is often a method for providing optimal repetition. The optimized scheduling of Pavlik (2005, 2007) balances the speed advantage (reduced time cost of practice) of recency with the long-term learning advantage of spaced practice. This speed advantage typically occurs for drill practice because more recent drill practice has fewer failures and therefore less need for costly review practice.
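A toy calculation (numbers assumed, not taken from Pavlik's data) makes the tradeoff concrete: if a failed drill costs extra review time, recent practice is cheap per trial, and the optimizer must decide when the durability benefit of a wider gap stops paying for the extra failures.

```python
# Assumed per-trial times: a passed drill vs a failed drill plus review.
T_OK, T_FAIL = 4.0, 12.0

def expected_trial_time(p_recall):
    """Expected seconds per drill trial at a given recall probability."""
    return p_recall * T_OK + (1.0 - p_recall) * T_FAIL

short_gap = expected_trial_time(0.95)  # recent practice: few failures
wide_gap = expected_trial_time(0.60)   # wide spacing: frequent, costly review
```

With these assumed numbers the wide gap costs 7.2 s per trial against 4.4 s for the short one; spacing is worth choosing only while its retention benefit outweighs that extra time.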
==Conditions of application==
One main condition of application is that the task requires some form of repetition of related knowledge components. In cases with single trials of unrelated items the schedule is trivial and cannot be optimized.
==Caveats, limitations, open issues, or dissenting views==
A limiting condition of using the optimized scheduling of Pavlik is that it relies on a recency advantage for practice. Without this recency advantage, it is often true that maximal spacing is optimal, as suggested in the "Organizing Instruction and Study to Improve Student Learning" practice guide.
For example, if each practice trial has a fixed duration, there is no recency advantage, and maximal (or very wide) spacing will be optimal. However, many procedures use test trials, since the testing effect shows that tests produce stronger learning than passive study. Tests often have a strong advantage when they occur with greater recency, since recency reduces the need for review after failure.
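The contrast can be sketched numerically (times assumed for illustration): when success and failure cost the same, expected trial time no longer depends on recall probability, so nothing offsets the benefit of wider spacing.

```python
def expected_trial_time(p_recall, t_ok, t_fail):
    """Expected seconds per trial when failures cost t_fail instead of t_ok."""
    return p_recall * t_ok + (1.0 - p_recall) * t_fail

# Test trials (assumed times): failures trigger review, so recency pays.
recent = expected_trial_time(0.95, 4.0, 12.0)
spaced = expected_trial_time(0.60, 4.0, 12.0)

# Fixed-duration trials: cost is flat regardless of recall probability,
# so the long-term benefit of wide spacing wins outright.
flat_recent = expected_trial_time(0.95, 6.0, 6.0)
flat_spaced = expected_trial_time(0.60, 6.0, 6.0)
```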
==Variations (descendants)==
While Pavlik (2005, 2007) has pioneered optimization of spacing for independent items, schedules can also be optimized by controlling practice quantity (Cen) or by controlling the order (and/or spacing) of scaffolding exercises for dependent items (Renkl).
==References==
- Pavlik Jr., P. I., & Anderson, J. R. (accepted). Using a model to compute the optimal schedule of practice. Journal of Experimental Psychology: Applied.
- Pavlik Jr., P. I. (2005). The microeconomics of learning: Optimizing paired-associate memory. Dissertation Abstracts International: Section B: The Sciences and Engineering, 66(10-B), 5704.
- Pavlik Jr., P. I. (2007). Timing is an order: Modeling order effects in the learning of information. In F. E., Ritter, J. Nerb, E. Lehtinen & T. O'Shea (Eds.), In order to learn: How order effects in machine learning illuminate human learning (pp. 137-150). New York: Oxford University Press.
- Pavlik Jr., P. I., Presson, N., Dozzi, G., Wu, S.-m., MacWhinney, B., & Koedinger, K. R. (2007). The FaCT (Fact and Concept Training) System: A new tool linking cognitive science with educators. In D. McNamara & G. Trafton (Eds.), Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society (pp. 397-402). Mahwah, NJ: Lawrence Erlbaum.
- Pavlik Jr., P. I., Presson, N., & Koedinger, K. R. (2007). Optimizing knowledge component learning using a dynamic structural model of practice. In R. Lewis & T. Polk (Eds.), Proceedings of the Eighth International Conference of Cognitive Modeling. Ann Arbor: University of Michigan.
- Pashler, H., Zarow, G., & Triplett, B. (2003). Is temporal spacing of tests helpful even when it inflates error rates? Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(6), 1051-1057.