Does learning from worked-out examples improve tutored problem solving?

From LearnLab
Revision as of 14:18, 17 October 2006 by Renkl (talk | contribs) (Abstract)


Alexander Renkl, Vincent Aleven, & Ron Salden

Abstract

Although problem solving supported by Cognitive Tutors has been shown to be successful in fostering initial acquisition of cognitive skill, this approach does not seem to be optimal with respect to focusing the learner on the domain principles to be learned. In order to foster a deep understanding of domain principles and how they are applied in problem solving, we combine the theoretical rationales of Cognitive Tutors and example-based learning. In particular, we address the following main hypotheses: (1) Enriching a Cognitive Tutor unit with examples whose worked-out steps are gradually faded leads to better learning; (2) individualizing the fading procedure based on the quality of self-explanations that the learners provide further improves learning; (3) using free-form self-explanations is more useful in this context than the usual menu-based formats; (4) learning can be enhanced further by providing previously self-explained examples – including the learner’s own self-explanations – as support at problem-solving impasses. We address these research questions by preparatory lab experiments and subsequent field experiments in the Geometry LearnLab.
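The gradual fading of worked-out steps mentioned in hypothesis (1) can be sketched in code. The sketch below assumes a backward-fading schedule (the last worked step is omitted first), which is one common variant; the step names and data layout are illustrative, not the actual tutor implementation.

```python
def fading_sequence(steps, backward=True):
    """Generate a sequence of examples in which worked-out steps are
    gradually replaced by to-be-solved steps. With backward fading, the
    final worked step is the first one handed over to the learner."""
    n = len(steps)
    sequence = []
    for faded in range(n + 1):
        # 'shown' are the steps still worked out for the learner.
        shown = steps[: n - faded] if backward else steps[faded:]
        sequence.append({
            "worked": shown,
            "to_solve": faded,  # number of steps the learner must supply
        })
    return sequence

# Hypothetical geometry-example steps (illustrative names only)
demo = fading_sequence(["identify angle", "apply theorem", "compute value"])
```

The sequence moves from a fully worked example (nothing to solve) to a conventional problem (everything to solve), which is the structural idea behind example fading.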

We have already performed two laboratory experiments on research question 1. The detailed analyses of the process data are still in progress. Up to now, we have found the following results with respect to learning outcomes and time-on-task (i.e., learning time). In a first experiment, we compared a Cognitive Tutor module with worked-out examples and one without; both versions comprised self-explanation prompts. We found no differences in the learning outcome variables of conceptual understanding and procedural skills. However, the example-enriched tutor led to significantly shorter learning time. We also found a significant advantage with respect to an efficiency measure relating learning time to learning outcomes. Informal observations showed that the participants (German students) were in part confused by the fact that in the example condition the solution was already given. As a consequence, in a second experiment we informed the students more fully about the respective Cognitive Tutor environments to be studied. In addition, we collected thinking-aloud data (yet to be analyzed). We found significant advantages of the example condition with respect to conceptual knowledge, learning time (less time), and efficiency of learning. With respect to procedural skills, no differences were observed.
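An efficiency measure relating learning time to learning outcomes can be computed in several ways; the sketch below assumes the common standardized-difference approach (z-scored outcome minus z-scored time, divided by √2, in the style of Paas and van Merriënboer). The text does not specify the exact formula used, so this is only an illustration.

```python
import math

def z_scores(values):
    """Standardize values to mean 0, SD 1 (population SD)."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / sd for v in values]

def efficiency(outcomes, times):
    """Per-learner efficiency: high outcomes reached in little learning
    time yield positive scores, low outcomes in much time negative ones."""
    zo = z_scores(outcomes)
    zt = z_scores(times)
    return [(o - t) / math.sqrt(2) for o, t in zip(zo, zt)]

# Hypothetical data: proportion-correct outcomes and minutes of learning time
eff = efficiency([0.6, 0.9, 0.7, 0.9], [40, 25, 35, 30])
```

Under this definition, two learners with the same outcome score differ in efficiency only through their learning time, which matches the pattern reported above (equal outcomes, shorter time, higher efficiency).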

Background and Significance

Glossary

  • Physics example line: lines from an example are organized around sub-goals, and they include a demonstration of an application of a single knowledge component.
  • Complete vs. incomplete example: an incomplete example omits the justification for applying a knowledge component, whereas a complete example includes both the application of the knowledge component and its justification.
  • Instructional explanation: an instructional explanation is generated for a student by an authoritative figure (the textbook, teacher, or researcher). In this research context, the instructional explanation was generated by the experimenters and was delivered to the student in a voice-over narration of an expert solving a problem in Andes.

Research question

How is robust learning affected by self-explanation vs. instructional explanation?

Independent variables

Two variables were crossed:

  • Did the example present an explanation with each line or present just the line?
  • After each line (and its explanation, if any) was presented, students were prompted to either explain or paraphrase the line in their own words.

The condition where explanations were presented in the example and students were asked to paraphrase them is considered the “instructional explanation” condition. The two conditions where students were asked to self-explain the example lines are considered the “self-explanation” conditions. The remaining condition, where students were asked to paraphrase examples that did not contain explanations, was considered the “no explanation” condition.

Hypothesis

For these well-prepared students, self-explanation should not be too difficult. That is, the instruction should be below the students’ zone of proximal development. Thus, the learning-by-doing path (self-explanation) should elicit more robust learning than the alternative path (instructional explanation), wherein the student does less work.

As a manipulation check on the utility of the explanations in the complete examples, we hypothesize that the instructional-explanation condition should produce more robust learning than the no-explanation condition.

Dependent variables & Results

  • Near transfer, immediate: During training, examples alternated with problems, and the problems were solved using Andes. Each problem was similar to the example that preceded it, so performance on it is a measure of normal learning (near transfer, immediate testing). The log data were analyzed and assistance scores (sum of errors and help requests) were calculated. There was a main effect of Study Strategy on assistance score, reflecting higher scores for the paraphrase condition than the self-explanation condition.
  • Near transfer, retention: On the students’ regular mid-term exam, one problem was similar to the training. Since this exam occurred a week after the training, and the training took place in just under 2 hours, performance on this problem is considered a test of retention. Results on this measure were mixed. While there were no reliable main effects or interactions, the complete self-explanation group scored marginally higher than the complete paraphrase group (LSD, p = .064).
  • Near and far transfer: After training, students did their regular homework problems using Andes. Students did them whenever they wanted, but most completed them just before the exam. The homework problems were divided based on similarity to the training problems, and assistance scores were calculated. On both similar (near transfer) and dissimilar (far transfer) problems, the results are consistent with self-explanation being more effective than instructional explanation.
  • Acceleration of future learning: The training was on magnetic fields, and it was followed in the course by a unit on electric fields. Log data from the electric-field homework were analyzed as a measure of acceleration of future learning. Both assistance scores and learning curves of the key principles support the hypothesis that self-explanation is more effective than instructional explanation.
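The assistance score used in the results above is defined as the sum of errors and help requests per student. A minimal sketch of computing it from log data, assuming a simplified event format (the actual Andes log schema is richer, and the event names here are illustrative only):

```python
from collections import Counter

# Hypothetical log: (student_id, event_type) tuples standing in for
# the richer per-transaction records an intelligent tutor actually emits.
log = [
    ("s1", "error"), ("s1", "hint_request"), ("s1", "correct"),
    ("s2", "correct"), ("s2", "error"),
    ("s1", "error"),
]

def assistance_scores(events):
    """Assistance score per student: number of errors plus help requests."""
    counted = Counter(
        sid for sid, kind in events if kind in ("error", "hint_request")
    )
    # Students with only correct entries still get an explicit score of 0.
    return {sid: counted.get(sid, 0) for sid, _ in events}

scores = assistance_scores(log)
```

Lower assistance scores indicate that a student solved the problems with fewer errors and less help, which is why a lower score is read as better learning in the comparisons above.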

Explanation

This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis. The IC cluster’s hypothesis is that robust learning occurs when two conditions are met:

  • The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work. In this study, self-explanation comprises the learning-by-doing path, and instructional explanation is the path where another agent (the author of the text) has done most of the work.
  • The student takes the learning-by-doing path unless it becomes too difficult. This study tried (successfully, it appears) to control the student’s path choice. It showed that when students take the learning-by-doing path, they learned more than when they take the alternative path.

The IC cluster’s hypothesis actually predicts an aptitude-treatment interaction (ATI) here. If some students were under-prepared and thus would find the self-explanation path too difficult, then those students would learn more on the instructional-explanation path. ATI analyses have not yet been completed.

Annotated bibliography

  • Presentation to the NSF Site Visitors, June, 2006
  • Preliminary results were presented to the Intelligent Tutoring in Serious Games workshop, Aug. 2006
  • Presentation to the NSF Follow-up Site Visitors, September, 2006

References

Anzai, Y., & Simon, H. A. (1979). The theory of learning by doing. Psychological Review, 86(2), 124-140.

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Hausmann, R. G. M., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7(1), 4-14.