Does learning from worked-out examples improve tutored problem solving?

From LearnLab
Revision as of 16:57, 29 March 2007 by Ron-Salden (talk | contribs)

Alexander Renkl, Vincent Aleven, & Ron Salden

Abstract

Although problem solving supported by cognitive tutors has been shown to be successful in fostering initial acquisition of cognitive skill, this approach does not seem to be optimal with respect to focusing the learner on the domain principles to be learned. In order to foster a deep understanding of domain principles and how they are applied in problem solving, we combine the theoretical rationales of Cognitive Tutors and example-based learning. Specifically, we address the following main hypotheses:

  1. Enriching a Cognitive Tutor unit with examples whose worked-out steps are gradually faded leads to better learning.
  2. Individualizing the fading procedure based on the quality of self-explanations that the learners provide further improves learning.
  3. Using free-form self-explanations is more useful in this context as compared to the usual menu-based formats.
  4. Learning can be enhanced further by providing previously self-explained examples – including the learner’s own self-explanations – as support at problem-solving impasses.

We have already performed two laboratory experiments on the first hypothesis. Detailed analyses of the process data are still in progress; so far, we have found the following results with respect to learning outcomes and time-on-task (i.e., learning time). In the first experiment, we compared a Cognitive Tutor unit with worked-out examples to one without examples; both versions used self-explanation prompts. We found no differences in the learning outcome variables of conceptual understanding and procedural skills (transfer). However, the example-enriched tutor led to significantly shorter learning times, and to a significant advantage on an efficiency measure relating learning time to learning outcomes. Informal observations showed that participants (German students) were in part confused, in the example condition, that the solution was already given ("What exactly should we do?"). In the second lab experiment, we therefore informed the students more fully about the respective Cognitive Tutor environments to be studied. In addition, we collected think-aloud data (yet to be analyzed). We found significant advantages for the example condition with respect to conceptual knowledge, learning time (less time), and efficiency of learning. With respect to procedural skills, no differences were observed.

Background and Significance

The background of this research is twofold. (1) We take up the very successful approach of Cognitive Tutors (Anderson, Corbett, Koedinger, & Pelletier, 1995; Koedinger, Anderson, Hadley, & Mark, 1997). These computer-based tutors provide individualized support for learning by doing (i.e., solving problems): they select appropriate problems to be solved, provide feedback and problem-solving hints, and assess the student's learning progress on-line. Cognitive Tutors individualize instruction by selecting problems based on a model of the student's present knowledge state that is constantly updated through a Bayesian process called “knowledge tracing” (Corbett & Anderson, 1995). A restriction of learning with Cognitive Tutors is that conceptual understanding is not a major learning goal. (2) We also take up the research tradition on worked-out examples rooted in Cognitive Load Theory (Sweller, van Merriënboer, & Paas, 1998) and, more specifically, the instructional model of example-based learning by Renkl and Atkinson (in press), in order to foster skill acquisition that is grounded in deep conceptual understanding. By presenting examples instead of problems to be solved at the beginning of a learning sequence, learners have more attentional capacity available to self-explain and thus deepen their understanding of problem solutions.
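The knowledge-tracing update mentioned above can be sketched as a small function. This is a minimal sketch of standard Bayesian knowledge tracing as described by Corbett and Anderson (1995); the parameter names and values below are illustrative, not those of the actual Geometry Cognitive Tutor:

```python
def knowledge_trace(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.1):
    """One Bayesian knowledge-tracing update.

    p_know: prior probability that the student already knows the skill.
    correct: whether the student's step was correct.
    Returns the updated probability that the skill is known.
    """
    if correct:
        # Evidence: a correct step (knew the skill and did not slip, or guessed).
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # Evidence: an incorrect step (knew the skill but slipped, or did not know).
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # The skill may also be learned on this practice opportunity.
    return posterior + (1 - posterior) * p_transit

# A run of correct answers drives the mastery estimate upward.
p = 0.3
for outcome in [True, True, True]:
    p = knowledge_trace(p, outcome)
```

The tutor uses such per-skill estimates to decide which problems to select and when a skill counts as mastered.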

This project is significant in several respects:

(1) To date, the positive effects of examples have been shown only in comparison to unsupported problem solving. We aim to show that example study is also superior to supported problem solving at the very beginning of a learning sequence.

(2) The Cognitive Tutor approach can be enhanced by ideas from research on example-based learning.

(3) The example-based learning approach can be enriched by individualizing instructional procedures such as fading.

Glossary

To be developed, but will probably include:

Learning by worked-out examples

Learning by problem solving

Self-explanation

Fading

Research question

Can the effectiveness and efficiency of Cognitive Tutors be enhanced by including learning from worked-out examples?

Independent variables

The independent variable is the following variation:

(a) Cognitive Tutor with problems to be solved

versus

(b) Cognitive Tutor with initially worked-out examples, then partially worked-out examples, and finally problems to be solved.

Although self-explanation prompts are a typical "ingredient" of example-based learning but not of learning by problem solving, such prompts were included in both conditions. In this way, any potential effects can be clearly attributed to the presence or absence of example study.

Dependent variables

1) Conceptual knowledge / retention

2) Procedural knowledge / Transfer

3) Acceleration of future learning (in future experiments)

4) Learning time

5) Efficiency of learning (relating learning time to learning outcomes)
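The efficiency measure in (5) is not spelled out above; one common way to relate outcomes to time is to standardize both variables and take their difference, in the spirit of Paas and van Merriënboer's instructional efficiency score (with learning time substituted for mental effort). The sketch below, with illustrative data, assumes that formulation:

```python
from statistics import mean, stdev

def efficiency(outcomes, times):
    """Relate learning outcomes to learning time via z-scores.

    One common formulation: E = (z_outcome - z_time) / sqrt(2).
    High outcomes reached in little time yield high efficiency.
    """
    z_out = [(o - mean(outcomes)) / stdev(outcomes) for o in outcomes]
    z_time = [(t - mean(times)) / stdev(times) for t in times]
    return [(zo - zt) / 2 ** 0.5 for zo, zt in zip(z_out, z_time)]

# A learner with an above-average score and a below-average time is efficient.
scores = [70, 80, 90, 60]     # illustrative posttest scores
minutes = [45, 40, 30, 50]    # illustrative learning times
eff = efficiency(scores, minutes)
```

Because both variables are standardized within the sample, the scores sum to zero; only relative efficiency between learners is meaningful.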

Hypotheses & Results

The provision of examples in Cognitive Tutors should lead to better conceptual understanding and, thereby, better transfer performance. In addition, examples in Cognitive Tutors should reduce learning time.

On the whole, the present results confirm the hypotheses with respect to conceptual knowledge and learning time. The expected effects on transfer were not found.


Study 1 (lab study at Freiburg, Geometry Cognitive Tutor)

  • Summary
    • Lab Study: 8th and 9th grade geometry classes from a German high school in Freiburg
    • Domain: Circles Unit in the Geometry Cognitive Tutor
    • The students were randomly assigned to one of two conditions:
      • Problem Solving Condition: In this control condition, students solved answer steps and entered explanations on all problems.
      • Worked Example Condition: In this experimental condition, students were first presented with problems that had worked-out (i.e., filled-in) answer steps but still had to enter the explanations for these steps. As they progressed through the unit, these worked-out answer steps were faded, meaning that towards the end of the unit the students had to fill in both answer steps and explanations.

Screenshots of problem solving condition plus example condition.


  • Findings
    • No overall effect of experimental condition on students' performance on geometry answers or reasons at posttest
    • Although working in the Diagram condition improved lower-knowledge students' explanations at posttest, higher-knowledge students performed best when working in the Table condition. The result was evidenced by a significant 3-way interaction of Test Time (Pre- vs. Posttest) X Condition (Table vs. Diagram) X Prior Knowledge (Higher vs. Lower) for students' performance on geometry rules at posttest (F(1,39) = 6.2, p < .02).

Explanation

This study belongs to the interactive communication cluster because it investigates a variation in how much the system and the learner, respectively, contribute: who provides the solutions of the initial solution steps?

More specifically, this study is about changes in path choices that occur when a tutoring system includes partially worked examples. The basic idea is that when a tutor relieves a student of most of the work of generating a line by providing part of it, students are more likely to engage in deep learning to fill in the rest. However, the instruction must be engineered so that students still become autonomous problem solvers who eventually can do all the work themselves.

One analysis treats each step of the solution of a geometry problem as a learning event. A step has two parts: the derivation of the value (e.g., ∠ABD = 180° − ∠ABC; ∠ABD = 35°) and the justification (“supplementary angles applied to line DBC”). The learning event space for a single step is:

1. The tutor provides the value and prompts the student to self-explain it by providing the justification.

   1.1. The student self-explains the line → Exit, with learning

   1.2. The student uses shallow strategies such as guessing → Exit, no learning

   1.3. The student’s self-explanation is incorrect and the tutor gives feedback → Start

2. The student generates the step (both parts) via a shallow strategy such as guessing or copying it from a hint

   2.1. The line is correct → Exit, with little learning

   2.2. The line is incorrect and the tutor gives feedback → Start

3. The student generates the value by trying to apply geometry knowledge

   3.1. The value is correct → some learning and move to path 4.

   3.2. The value is incorrect and the tutor gives feedback → Start

4. The student has determined the value and must explain it by providing the justification.

   4.1. The student self-explains the line → Exit, with learning

   4.2. The student uses shallow strategies such as guessing → Exit, with a bit of learning (via path 3.1)

   4.3. The student’s self-explanation is incorrect and the tutor gives feedback → Start

5. The student asks for and receives a hint → Start

In the example-studying condition, students are first presented with completely worked out examples, then with faded examples. On the completely worked out examples, each example step displays the value and derivation, and prompts the student to provide the step’s justification. Hence, the students take path 1 or path 5 on every line. On the faded examples, students take path 1 or path 5 on each presented line and paths 2, 3, 4 or 5 on the “faded out” lines. In the control condition, the students work with the conventional tutoring system the whole time, so path 1 is never available.
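The learning event space above can be written down as a small transition table. The sketch below is illustrative (the names `EVENT_SPACE`, `available_paths`, and the exit labels are ours, not the tutor's): it records each entry path with its exits, and which paths are open on a worked-out step versus a faded or conventional step:

```python
# Entry paths of the learning event space and their exits, as described above.
# "start" marks outcomes that send the learner back to the start of the step.
EVENT_SPACE = {
    1: {"actor": "tutor provides value; student justifies",
        "exits": {"1.1": "exit, learning",
                  "1.2": "exit, no learning",
                  "1.3": "start"}},
    2: {"actor": "student generates step via shallow strategy",
        "exits": {"2.1": "exit, little learning",
                  "2.2": "start"}},
    3: {"actor": "student derives value from geometry knowledge",
        "exits": {"3.1": "some learning, continue on path 4",
                  "3.2": "start"}},
    4: {"actor": "student justifies a self-derived value",
        "exits": {"4.1": "exit, learning",
                  "4.2": "exit, a bit of learning",
                  "4.3": "start"}},
    5: {"actor": "student asks for and receives a hint",
        "exits": {"5": "start"}},
}

def available_paths(step_is_worked_out):
    """Paths a student can enter on a given solution step.

    On a worked-out (unfaded) step the tutor supplies the value, so only
    paths 1 and 5 apply; on a faded or conventional step the student must
    generate the value, so paths 2 through 5 apply.
    """
    return [1, 5] if step_is_worked_out else [2, 3, 4, 5]

GOOD_EXITS = {"1.1", "4.1"}  # the two "really good" exits
```

Enumerating the space this way makes the conditions' difference concrete: the control condition never offers a step for which `available_paths` returns `[1, 5]`.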

There are only two “really good” exits from the learning event space (1.1 and 4.1). Exiting via 4.1 means that the student has provided the whole step including the justification, whereas exiting via 1.1 means that the student has provided only the justification, so the two exits may elicit different amounts or types of learning. Path 4.2 is also an exit with learning; however, it is sub-optimal because the step was determined with only a partial understanding. On the whole, this means that the experiment is manipulating learning types; thus, it is partly a study of path effects.

However, the experiment may also affect the frequency of good versus poor exits, so it is also a study of path choices. Moreover, the students’ path choices interact with their mastery of geometry. If students are so unfamiliar with the appropriate knowledge that exit 4.1 via 3.1 is not feasible, even given the tutor’s hints, then they will probably exit via a poor path (2.1) or the suboptimal path (4.2) on faded steps, but can choose to self-explain and exit via a good path (1.1) on the unfaded steps. Since only the example-fading condition has unfaded steps, this condition should produce more good exits and more learning than the problem-solving condition. On the other hand, if the students have some familiarity with the appropriate knowledge, then problem solving could do just as well as or even better than example fading.

This path analysis also leads to the hypothesis that example-based learning needs less time. At the beginning of the unit, learners in the example condition take only paths 1 and 5, whereas learners in the problem-solving condition can take long routes through paths 2, 3, 4, and 5.

Annotated bibliography

Salden, R. J. C. M., Aleven, V., Renkl, A., & Wittwer, J. (2006, July). Does learning from examples improve tutored problem solving? In Proceedings of the 28th Annual Meeting of the Cognitive Science Society (p. 2602). Vancouver, Canada.

Presentation to the PSLC Advisory Board, Fall 2006.

Schwonke, R., Wittwer, J., Aleven, V., Salden, R. J. C. M., Krieg, C., & Renkl, A. (2007). Can tutored problem solving benefit from faded worked-out examples? Paper to be presented at the European Cognitive Science Conference 2007, May 23-27, Delphi, Greece.


References

Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4, 167-207.

Corbett, A. T., & Anderson, J. R. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction, 4, 253-278.

Koedinger, K. R., Anderson, J. R., Hadley, W. H., & Mark, M. A. (1997). Intelligent tutoring goes to school in the big city. International Journal of Artificial Intelligence in Education, 8, 30-43.

Renkl, A., & Atkinson, R. K. (in press). Cognitive skill acquisition: Ordering instructional events in example-based learning. In F. E. Ritter, J. Nerb, E. Lehtinen, & T. O’Shea (Eds.), In order to learn: How ordering effects in machine learning illuminate human learning and vice versa. Oxford, UK: Oxford University Press.

Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251-296.