Does learning from worked-out examples improve tutored problem solving?

From LearnLab
Revision as of 15:07, 17 October 2006 by Renkl (talk | contribs) (Explanation)


Alexander Renkl, Vincent Aleven, & Ron Salden

Abstract

Although problem solving supported by Cognitive Tutors has been shown to be successful in fostering initial acquisition of cognitive skill, this approach does not seem to be optimal with respect to focusing the learner on the domain principles to be learned. In order to foster a deep understanding of domain principles and how they are applied in problem solving, we combine the theoretical rationales of Cognitive Tutors and example-based learning. Specifically, we address the following main hypotheses: (1) Enriching a Cognitive Tutor unit with examples whose worked-out steps are gradually faded leads to better learning; (2) individualizing the fading procedure based on the quality of the self-explanations that the learners provide further improves learning; (3) using free-form self-explanations is more useful in this context than the usual menu-based formats; (4) learning can be enhanced further by providing previously self-explained examples – including the learner’s own self-explanations – as support at problem-solving impasses. We address these research questions by preparatory lab experiments and subsequent field experiments in the Geometry LearnLab.

We have already performed two laboratory experiments on research question 1. Detailed analyses of the process data are still in progress. So far, we have found the following results with respect to learning outcomes and time-on-task (i.e., learning time). In the first experiment, we compared a Cognitive Tutor unit with worked-out examples and one without examples; both versions comprised self-explanation prompts. We found no differences in the learning outcome variables of conceptual understanding and procedural skills (transfer). However, the example-enriched tutor led to significantly shorter learning times. We also found a significant advantage on an efficiency measure relating learning time to learning outcomes. Informal observations showed that some participants (German students) were confused that the solution was already given in the example condition ("What exactly should we do?"). As a consequence, in a second experiment we informed the students more fully about the respective Cognitive Tutor environments to be studied. In addition, we collected think-aloud data (yet to be analyzed). We found significant advantages of the example condition with respect to conceptual knowledge, learning time (less time), and efficiency of learning. With respect to procedural skills, no differences were observed.

Background and Significance

The background of this research is twofold. (1) The very successful approach of Cognitive Tutors (Anderson, Corbett, Koedinger, & Pelletier, 1995; Koedinger, Anderson, Hadley, & Mark, 1997) is taken up. These computer-based tutors provide individualized support for learning by doing (i.e., solving problems) by selecting appropriate problems to be solved, by providing feedback and problem-solving hints, and by on-line assessment of the student’s learning progress. Cognitive Tutors individualize instruction by selecting problems based on a constantly updated model of the student’s present knowledge state, maintained through a Bayesian process called “knowledge tracing” (Corbett & Anderson, 1995). A restriction of learning with Cognitive Tutors is that conceptual understanding is not a major learning goal. (2) The research tradition on worked-out examples rooted in Cognitive Load Theory (Sweller, van Merrienboer, & Paas, 1998) and, more specifically, the instructional model of example-based learning by Renkl and Atkinson (in press) are taken up in order to foster skill acquisition that is grounded in deep conceptual understanding. By presenting examples instead of problems to be solved at the beginning of a learning sequence, learners have more attentional capacity available to self-explain and thus deepen their understanding of problem solutions.
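The knowledge-tracing update mentioned above can be sketched as a standard Bayesian knowledge-tracing step; the parameter names and the guess, slip, and learn probabilities below are illustrative assumptions, not values from any actual Cognitive Tutor unit.

```python
def knowledge_trace(p_known, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """Update the estimated probability that a skill is known,
    given one observed step attempt (correct or incorrect).
    All parameter values are assumed for illustration."""
    if correct:
        evidence = p_known * (1 - p_slip)                  # knew it and did not slip
        p_obs = evidence + (1 - p_known) * p_guess         # or guessed correctly
    else:
        evidence = p_known * p_slip                        # knew it but slipped
        p_obs = evidence + (1 - p_known) * (1 - p_guess)   # or did not know it
    p_posterior = evidence / p_obs                         # Bayes update on the observation
    # Each opportunity also gives a chance to learn the skill:
    return p_posterior + (1 - p_posterior) * p_learn

# A correct step raises the knowledge estimate; an incorrect one lowers it.
p = 0.3
for outcome in [True, True, False, True]:
    p = knowledge_trace(p, outcome)
```

The tutor can then use the running estimate, e.g., to stop presenting problems for a skill once the estimate crosses a mastery threshold.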

This project is significant in several respects:

(1) To date, the positive effects of examples have been shown in comparison to unsupported problem solving. We aim to show that example study is also superior to supported problem solving at the very beginning of a learning sequence.

(2) The Cognitive Tutor approach can be enhanced by ideas from research on example-based learning.

(3) The example-based learning approach can be enriched by individualizing instructional procedures such as fading.

Glossary

To be developed, but will probably include:

Learning by worked-out examples

Learning by problem solving

Self-explanation

Fading

Research question

Can the effectiveness and efficiency of Cognitive Tutors be enhanced by including learning from worked-out examples?

Independent variables

The independent variable is the following contrast:

(a) Cognitive Tutor with problems to be solved

versus

(b) Cognitive Tutor with initially worked-out examples, then partially worked-out examples, and finally problems to be solved.

Self-explanation prompts are a typical "ingredient" of example-based learning, but not of learning by problem solving; nevertheless, such prompts were included in both conditions. Thereby, any potential effects can be clearly attributed to the presence or absence of example study.
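One way to picture the fading contrast in condition (b), and the individualized fading of hypothesis 2, is as a schedule that converts worked-out steps into to-be-solved steps. The function names, the one-step-per-example schedule, and the quality threshold below are hypothetical illustrations, not the study's actual implementation.

```python
def fixed_fading(n_steps, example_index):
    """Fade one additional step per example (backward from none):
    returns, per step, True if the step is still worked out,
    False if the student must solve it. Schedule is assumed."""
    return [i >= example_index for i in range(n_steps)]

def adaptive_fading(explanation_quality, threshold=0.8):
    """Individualized variant (hypothesis 2): keep a step worked out
    until the learner's self-explanations of that step reach an
    (assumed) quality threshold."""
    return [q < threshold for q in explanation_quality]

worked = fixed_fading(3, 1)                   # second example: first step already faded
adaptive = adaptive_fading([0.9, 0.5, 0.85])  # only the poorly explained step stays worked out
```

Under the fixed schedule every learner sees the same fading sequence; under the adaptive variant, a learner who self-explains a step well stops seeing that step worked out earlier.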

Hypothesis

For these well-prepared students, self-explanation should not be too difficult. That is, the instruction should not be beyond the students’ zone of proximal development. Thus, the learning-by-doing path (self-explanation) should elicit more robust learning than the alternative path (instructional explanation), wherein the student does less work.

As a manipulation check on the utility of the explanations in the complete examples, we hypothesize that the instructional-explanation condition should produce more robust learning than the no-explanation condition.

Dependent variables & Results

  • Near transfer, immediate: During training, examples alternated with problems, and the problems were solved using Andes. Each problem was similar to the example that preceded it, so performance on it is a measure of normal learning (near transfer, immediate testing). The log data were analyzed and assistance scores (sum of errors and help requests) were calculated. There was a main effect of Study Strategy on assistance score, reflecting higher scores for the paraphrase condition than the self-explanation condition.
  • Near transfer, retention: On the students’ regular mid-term exam, one problem was similar to the training. Since this exam occurred a week after the training, and the training took place in just under 2 hours, the students’ performance on this problem is considered a test of retention. Results on this measure were mixed. While there were no reliable main effects or interactions, the complete self-explanation group was marginally higher than the complete paraphrase condition (LSD, p = .064).
  • Near and far transfer: After training, students did their regular homework problems using Andes. Students did them whenever they wanted, but most completed them just before the exam. The homework problems were divided based on similarity to the training problems, and assistance scores were calculated. On both similar (near transfer) and dissimilar (far transfer) problems, the results are consistent with self-explanation being more effective than instructional explanation.
  • Acceleration of future learning: The training was on magnetic fields, and it was followed in the course by a unit on electrical fields. Log data from the electrical field homework was analyzed as a measure of acceleration of future learning. Both assistance scores and learning curves of the key principles support the hypothesis that self-explanation is more effective than instructional explanation.

Explanation


1. The tutor provides the value and prompts the student to self-explain it by providing the justification.

1.1. The student self-explains the line → Exit, with learning

1.2. The student uses shallow strategies such as guessing → Exit, no learning

1.3. The student’s self-explanation is incorrect and the tutor gives feedback → Start

2. The student generates the step (both parts) via a shallow strategy such as guessing or copying it from a hint.

2.1. The line is correct → Exit, with little learning

2.2. The line is incorrect and the tutor gives feedback → Start

3. The student generates the value by trying to apply geometry knowledge.

3.1. The value is correct → some learning; move to path 4

3.2. The line is incorrect and the tutor gives feedback → Start

4. The value was determined by the student, and the student is to explain it by providing the justification.

4.1. The student self-explains the line → Exit, with learning

4.2. The student uses shallow strategies such as guessing → Exit, with a bit of learning (via path 3.1)

4.3. The student’s self-explanation is incorrect and the tutor gives feedback → Start

5. The student asks for and receives a hint → Start
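The interaction paths above can be formalized as a small state table mapping a student action and its outcome to the next state and an assumed learning credit. The action labels and credit strings are a hypothetical encoding of the list for illustration, not part of the tutor's implementation.

```python
# Hypothetical encoding of the interaction paths as a state table.
# Keys: (student action, outcome); values: (next state, learning credit).
PATHS = {
    ("self_explain_given_value", "correct"):   ("exit", "learning"),            # 1.1
    ("self_explain_given_value", "shallow"):   ("exit", "no learning"),         # 1.2
    ("self_explain_given_value", "incorrect"): ("start", None),                 # 1.3
    ("shallow_generate_step", "correct"):      ("exit", "little learning"),     # 2.1
    ("shallow_generate_step", "incorrect"):    ("start", None),                 # 2.2
    ("apply_geometry_knowledge", "correct"):   ("explain_own_value",
                                                "some learning"),               # 3.1
    ("apply_geometry_knowledge", "incorrect"): ("start", None),                 # 3.2
    ("explain_own_value", "correct"):          ("exit", "learning"),            # 4.1
    ("explain_own_value", "shallow"):          ("exit", "a bit of learning"),   # 4.2
    ("explain_own_value", "incorrect"):        ("start", None),                 # 4.3
    ("hint_request", "any"):                   ("start", None),                 # 5
}

def next_state(action, outcome):
    """Look up where one tutor interaction leads and what it is assumed to teach."""
    return PATHS[(action, outcome)]
```

Only the paths that end in "Exit" after a genuine attempt (1.1, 3.1 followed by 4.1) carry full learning credit; every incorrect attempt or hint request returns the learner to "Start".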

Annotated bibliography

  • Presentation to the NSF Site Visitors, June, 2006
  • Preliminary results were presented to the Intelligent Tutoring in Serious Games workshop, Aug. 2006
  • Presentation to the NSF Follow-up Site Visitors, September, 2006

References

Anzai, Y., & Simon, H. A. (1979). The theory of learning by doing. Psychological Review, 86(2), 124-140.

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Hausmann, R. G. M., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7(1), 4-14.