The PSLC Coordinative Learning cluster
The studies in the Coordinative Learning cluster tend to focus on varying a) the types of information available to learners or b) the instructional methods employed. In particular, the studies focus on the impact of having learners coordinate two or more such sources or methods. Given that the student has multiple sources/methods available, two factors that might impact learning are:
- What is the relationship between the content in the two sources or the content generated by the two methods? Our hypothesis is that the two sources or methods facilitate robust learning when a knowledge component is difficult to understand or absent in one and is present or easier to understand in the other.
- When and how does the student coordinate between the two sources or methods? Our hypothesis is that students should be encouraged to compare the two, perhaps by putting them close together in space or time.
At the micro-level, the overall hypothesis is that robust learning occurs when the learning event space has target paths whose sense making difficulties complement each other (as expressed in the first bullet above) and the students make path choices that take advantage of these complementary paths (as in the second bullet, above). This hypothesis is just a specialization of the general PSLC hypothesis to this cluster.
Coordinative Learning glossary.
- Conceptual tasks
- Ecological Control Group
- External representations
- Input sources
- Instructional method
- Multimedia sources
- Procedural tasks
- Self-supervised learning
- Unlabeled examples
When and how does coordinating multiple sources of information or lines of reasoning increase robust learning?
Two sub-groups of coordinative learning studies are exploring these more specific questions:
1) Visualizations and Multi-modal sources
When does adding visualizations or other multi-modal input enhance robust learning and how do we best support students in coordinating these sources?
2) Examples and Explanations
When and how should example study be combined and coordinated with problem solving to increase robust learning? When and how should explicit explanations be added, or requested of students, before, during, or after example study and problem-solving practice?
- Content of the sources (e.g., pictures, diagrams, written text, audio, animation) or the encouraged lines of reasoning (e.g., example study, self-explanation, conceptual task, procedural task) and combinations
- Instructional activities designed to engage students in coordination (e.g., conceptual vs. procedural exercises, contiguous presentation of sources, self-explanation)
When students are given sources/methods whose sense making difficulties are complementary and they are engaged in coordinating the sources/methods, then their learning will be more robust than it would otherwise be.
There are both sense making and foundational skill building explanations. From the sense making perspective, if the sources/methods yield complementary content and the student is engaged in coordinating them, then the student is more likely to understand the instruction successfully: if the student fails to understand one of the sources/methods, they can use the second to make sense of the first. From a foundational skill building perspective, attending to both sources/methods simultaneously associates features from both with the learned knowledge components, thus potentially increasing feature validity and hence robust learning.
- Visualizations and Multi-modal sources
- Contiguous Representations for Robust Learning (Aleven & Butcher)
- Mapping Visual and Verbal Information: Integrated Hints in Geometry (Aleven & Butcher)
- Visual Representations in Science Learning (Davenport, Klahr & Koedinger)
- Co-training of Chinese characters (Liu, Perfetti, Dunlap, Zi, Mitchell)
- Learning Chinese pronunciation from a “talking head” (Liu, Massaro, Dunlap, Wu, Chen, Chan, Perfetti) [Was in Refinement and Fluency]
- Examples and Explanations
- Knowledge component construction vs. recall (Booth, Siegler, Koedinger & Rittle-Johnson)
- Studying the Learning Effect of Personalization and Worked Examples in the Solving of Stoichiometry Problems (McLaren, Koedinger & Yaron)
- Note-taking Project Page (Bauer & Koedinger)
- The REAP Project: Implicit and explicit instruction on word meanings (Juffs & Eskenazi)
- Hints during tutored problem solving – the effect of fewer hint levels with greater conceptual content (Aleven & Roll)
- Handwriting Algebra Tutor (Anthony, Yang & Koedinger)
- Lab study proof-of-concept for handwriting vs typing input for learning algebra equation-solving (completed)
- Effect of adding simple worked examples to problem-solving in algebra learning (completed, analysis in progress)
- In vivo comparison of Cognitive Tutor Algebra using handwriting vs typing input (planned)
- Bridging Principles and Examples through Analogy and Explanation (Nokes & VanLehn)
- Does learning from worked-out examples improve tutored problem solving? (Renkl, Aleven & Salden) [Also in Interactive Communication]
- Scaffolding Problem Solving with Embedded Example to Promote Deep Learning (Ringenberg & VanLehn) [In Interactive Communication but also relevant here]
Much research in human and machine learning has advocated various kinds of “multiples” to assist learning:
- multiple representations (e.g., machine learning: Liere & Tadepalli, 1997; human learning: Ainsworth & Van Labeke, in press),
- multiple strategies (e.g., machine learning: Michalski & Tecuci, 1997; Saitta, Botta, & Neri, 1993; human learning: Klahr & Siegler, 1978);
- multiple learning tasks (e.g., machine learning: Caruana, 1997; Case, Jain, Ott, Sharma, & Stephan, 1998; human learning: Holland, Holyoak, Nisbett, & Thagard, 1986);
- multiple data sources (e.g., machine learning: Blum & Mitchell, 1998; Collins & Singer, 1999).
Experiments in human learning have demonstrated, for instance, that instruction combining rules or principles with examples yields better results than either alone (Holland, Holyoak, Nisbett, & Thagard, 1986), and that iterative instruction of both procedures and concepts yields better learning (Rittle-Johnson & Koedinger, 2002; Rittle-Johnson, Siegler, & Alibali, 2001).
Experiments in machine learning have demonstrated how more robust, generalizable learning can be achieved by training a single learner on multiple related tasks (Caruana 1997) or by training multiple learning systems on the same task (Blum & Mitchell 1998; Collins & Singer 1999; Muslea, Minton, & Knoblock, 2002). Blum and Mitchell (1998) provide both empirical results and a proof of the circumstances under which strategy combinations enhance learning. In particular, the co-training approach for combining multiple learning strategies yields better learning to the extent that the learning strategies produce “uncorrelated errors” – when one is wrong the other is often right. As an example of PSLC work, Donmez et al. (2005) demonstrate, using a multi-dimensional collaborative process analysis, that regularities across multiple codings of the same data can be exploited for the purpose of improving text classification accuracy for difficult codings.
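The co-training idea described above can be illustrated with a small sketch: two simple learners, each seeing only one “view” of the data, take turns labeling unlabeled examples for each other, so that one learner's confident (and often correct) predictions compensate for the other's errors. The synthetic data generator, nearest-centroid learners, and round counts below are illustrative assumptions for the sketch, not Blum and Mitchell's original formulation.

```python
import random

random.seed(0)

def make_example(label):
    # Two conditionally independent "views" of each example, as co-training
    # assumes: each view is a noisy 1-D feature centered on the class label.
    return (label + random.gauss(0, 0.6), label + random.gauss(0, 0.6), label)

data = [make_example(random.choice([0, 1])) for _ in range(200)]
labeled = data[:10]        # small labeled seed set
pool = list(data[10:150])  # unlabeled pool (labels hidden from the learners)
test = data[150:]

def centroid_classifier(train, view):
    # Nearest-centroid learner on a single view: predict the class whose
    # training mean is closer; confidence is the margin between distances.
    means = {}
    for c in (0, 1):
        vals = [ex[view] for ex in train if ex[2] == c]
        means[c] = sum(vals) / len(vals) if vals else float(c)
    def predict(ex):
        return min((abs(ex[view] - m), c) for c, m in means.items())[1]
    def confidence(ex):
        return abs(abs(ex[view] - means[0]) - abs(ex[view] - means[1]))
    return predict, confidence

train1, train2 = list(labeled), list(labeled)
for _ in range(10):  # co-training rounds
    p1, c1 = centroid_classifier(train1, 0)
    p2, c2 = centroid_classifier(train2, 1)
    # Each learner pseudo-labels its most confident pool example
    # and hands it to the *other* learner's training set.
    best_for_2 = max(pool, key=c1)
    train2.append((best_for_2[0], best_for_2[1], p1(best_for_2)))
    pool.remove(best_for_2)
    best_for_1 = max(pool, key=c2)
    train1.append((best_for_1[0], best_for_1[1], p2(best_for_1)))
    pool.remove(best_for_1)

p1, _ = centroid_classifier(train1, 0)
acc = sum(p1(ex) == ex[2] for ex in test) / len(test)
print(round(acc, 2))
```

The key design point is that pseudo-labels flow across views: because each view's errors are largely uncorrelated with the other's, an example that one learner labels confidently is usually informative, rather than misleading, for the other.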
An ambitious goal of PSLC is to provide a rigorous causal theory of human learning results at the level of precision of machine learning research.
- Ainsworth, S. E., & Van Labeke, N. (in press). Multiple forms of dynamic representation. Learning and Instruction.
- Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 92–100). New York: ACM Press. Available: citeseer.nj.nec.com/blum98combining.html
- Caruana, R. (1997). Multitask learning. Machine Learning 28(1), 41-75. Available: citeseer.nj.nec.com/caruana97multitask.html.
- Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (1998). Robust learning aided by context. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 44-55). New York: ACM Press.
- Collins, M., & Singer, Y. (1999). Unsupervised models for named entity classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (pp. 189–196).
- Donmez, P., Rose, C. P., Stegmann, K., Weinberger, A., & Fischer, F. (2005). Supporting CSCL with automatic corpus analysis technology. In Proceedings of Computer Supported Collaborative Learning (CSCL 2005).
- Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA: MIT Press.
- Klahr D., and Siegler R.S. (1978). The Representation of Children's Knowledge. In H.W. Reese and L.P. Lipsitt (Eds.), Advances in Child Development and Behavior, Academic Press, New York, NY, pp. 61-116.
- Liere, R., & Tadepalli, P. (1997). Active learning with committees for text categorization. In Proceedings of AAAI-97, 14th Conference of the American Association for Artificial Intelligence (pp. 591–596). Menlo Park, CA: AAAI Press.
- Michalski, R., & Tecuci, G. (Eds.) (1997). Machine learning: A multi-strategy approach. Morgan Kaufmann.
- Muslea, I., Minton, S., & Knoblock, C. (2002). Active + semi-supervised learning = robust multi-view learning. In Proceedings of ICML-2002. Sydney, Australia.
- Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346–362.
- Rittle-Johnson, B., & Koedinger, K. R. (2002). Comparing instructional strategies for integrating conceptual and procedural knowledge. Paper presented at the Psychology of Mathematics Education, National, Athens, GA.
- Saitta, L., Botta, M., & Neri, F. (1993). Multi-strategy learning and theory revision. Machine Learning, 11(2/3), 153–172.