Prompted self-explanation hypothesis

Brief statement of principle

When students are given a worked example or text to study, prompting them to self-explain each step of the worked example or each line of the text causes higher learning gains than having them study the material without such prompting.

Description of principle

Many empirical studies have shown that the self-explanations students produce on their own vary widely (Chi et al., 1989). Some students have a natural tendency to self-explain, while others do little more than repeat the content of the example or expository text, and the quality of the self-explanations themselves can be highly variable (Lovett, 1992; Renkl, 1997). One instructional intervention that has been shown to be effective is to prompt students to self-explain (Chi et al., 1994). Prompting can take many forms, including verbal prompts from human experimenters (Chi et al., 1994), prompts automatically generated by computer tutors (McNamara et al., 2004; Hausmann & Chi, 2002; Aleven & Koedinger, 2002), or prompts embedded in the learning materials themselves (Hausmann & VanLehn, 2007).

In the context of studying an example or reading a text, prompting for self-explanations leads to greater learning gains than naturally occurring student practices.


Operational definition

  • Self-explaining is defined as a "content-relevant articulation uttered by the student after reading a line of text" (Chi, 2000, p. 165) or after studying a step in a worked-out example. A self-explanation may contain a meta-cognitive statement and/or a self-explanation inference.
    • A meta-cognitive statement is an assessment, made by the student, of his or her own current understanding of the line of text or example step.
    • A self-explanation inference is "an identified piece of knowledge generated...that states something beyond what the sentence explicitly said" (Chi, 2000, p. 165).
  • Prompting is defined as an external cue that is intended to elicit the activity of self-explaining. Prompts are typically generated by a person, tutoring system, or a verbal reminder embedded in the learning material.
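For researchers coding transcripts, the operational definitions above can be made concrete as a small record per utterance. The sketch below (Python) is purely illustrative; the class and field names are hypothetical and do not come from any of the cited studies.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MetaCognitiveStatement:
    # The student's assessment of his or her own current understanding
    # of the line of text or example step.
    text: str
    reports_understanding: bool   # e.g., "I see why..." vs. "I don't get this"

@dataclass
class SelfExplanationInference:
    # Knowledge generated by the student that goes beyond what the
    # sentence or step explicitly said (Chi, 2000, p. 165).
    text: str

@dataclass
class SelfExplanation:
    # A content-relevant articulation uttered after reading a line of text
    # or studying an example step; may contain meta-cognitive statements
    # and/or self-explanation inferences.
    line_id: int
    statements: List[MetaCognitiveStatement] = field(default_factory=list)
    inferences: List[SelfExplanationInference] = field(default_factory=list)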

Examples

Here are the instructions to self-explain, taken from Chi et al. (1994):

"We would like you to read each sentence out loud and then explain what it means to you. That is, what
new information does each line provide for you, how does it relate to what you've already read, does it give
you a new insight into your understanding of how the circulatory system works, or does it raise a question
in your mind. Tell us whatever is going through your mind–even if it seems unimportant."

For Hausmann & VanLehn (2007), these instructions were reworded into the following prompts:

  • What new information does each step provide for you?
  • How does it relate to what you've already seen?
  • Does it give you a new insight into your understanding of how to solve the problems?
  • Does it raise a question in your mind?

These prompts were then included as text just below a worked-out example. The example was presented as a video capture of the Andes interface, with a voice-over narration describing the user-interface actions (see the table below). In this example, the student is learning how to solve the following problem:

A charged particle is in a region where there is an electric field E of magnitude 14.3 V/m at an angle of 22 degrees above the positive x-axis. If the charge on the particle is -7.9 C, find the magnitude of the force on the particle P due to the electric field E.


An example of prompting for self-explaining

    Now that all the given information has been entered, we need to apply our knowledge of physics to solve the problem.

    One way to start is to ask ourselves, “What quantity is the problem seeking?” In this case, the answer is the magnitude of the force on the particle due to the electric field.

    We know that there is an electric field. If there is an electric field, and there is a charged particle located in that region, then we can infer that there is an electric force on the particle. The direction of the electric force is opposite to that of the electric field because the charge on the particle is negative.

    We use the Force tool from the vector tool bar to draw the electric force. This brings up a dialog box. The force is on the particle, and it is due to some unspecified source. We do know, however, that the type of force is electric, so we choose “electric” from the pull-down menu. For the orientation, we need to add 180 degrees to 22 degrees to get a force that points opposite to the direction of the electric field. Therefore, we put 202 degrees. Finally, we use “Fe” to designate this as an electric force.

[ PROMPT ]

    Now that the direction of the electric force has been indicated, we can work on finding the magnitude. We must choose a principle that relates the magnitude of the electric force to the strength of the electric field and the charge on the particle. The definition of an electric field is the only equation that relates these three variables. We write this equation in the equation window.

[ PROMPT ]

Note. PROMPT = "Please begin your self-explanation."
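The arithmetic in the narration follows directly from the definition of the electric field, F = qE: the magnitude of the force is |q| times the field strength, and a negative charge feels a force opposite the field direction. A minimal check (Python; the variable names are ours, not part of the Andes system):

# Given quantities from the problem statement
field_magnitude = 14.3    # electric field strength E, in V/m (equivalently N/C)
field_angle = 22.0        # field direction, degrees above the positive x-axis
charge = -7.9             # charge on the particle, in C

# Definition of an electric field, F = q * E, gives the magnitude |F| = |q| * E
force_magnitude = abs(charge) * field_magnitude    # 7.9 * 14.3 = 112.97 N

# The charge is negative, so the force points opposite the field
force_angle = (field_angle + 180.0) % 360.0        # 22 + 180 = 202 degrees

print(round(force_magnitude, 2), force_angle)      # 112.97 202.0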

Experimental support

Laboratory experiment support

Prompting for self-explaining has been shown to increase both the amount of self-explanation and the resulting learning gains (Chi et al., 1994). Prompting for self-explaining is typically paired with a training session, which instructs students on how to produce explanations. Laboratory research has shown that both the training and prompting techniques can be effective in producing performance gains (Bielaczyc, Pirolli, & Brown, 1995). Training does not necessarily have to be done by a human tutor; instead, it can be automated with a computerized training system (McNamara et al., 2004).

In vivo experiment support

Several in vivo experiments have leveraged laboratory work for inclusion of self-explaining in the classroom. Some in vivo experiments include:

  • The effects of interaction on robust learning (Hausmann & VanLehn, 2007)
  • Deep-level questions during example studying (Craig, VanLehn, & Chi, 2006)
  • Bridging Principles and Examples through Analogy and Explanation (Nokes & VanLehn, 2007)

Theoretical rationale

Prompting for self-explaining should increase the probability that a student engages in self-explaining, which includes an increase in the amount and accuracy of meta-cognitive monitoring statements and self-explanation inferences. Prompting for self-explaining is thus an attempt to increase the likelihood of traversing the deep learning-event paths sketched below.

Start
  1. Process the line shallowly, e.g., paraphrasing it
    1. There is nothing more to learn => Exit, with learning
    2. The line is incomplete; its explanation is missing => Exit, with little learning
  2. Try to process the line deeply, e.g., self-explain it
    1. There is nothing missing from the line => Exit, with learning
    2. The line is incomplete; its explanation is missing
      1. The attempted self-explanation succeeds => Exit, with learning
      2. The attempted self-explanation fails => Exit, with perhaps less learning
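One way to read the branch structure above is as a simple stochastic model: prompting raises the probability that a student takes branch 2 (deep processing) rather than branch 1 (shallow processing), and so reaches a learning exit more often. The sketch below (Python) encodes the tree with placeholder probabilities of our own choosing; it is not a fitted model from the cited papers.

import random

def process_line(line_is_incomplete, p_deep):
    """Traverse the branch structure above for one line of text or example step.

    p_deep is the probability that the student tries to process the line
    deeply (i.e., self-explains); prompting is assumed to raise p_deep.
    All probabilities here are placeholders, not empirical estimates.
    """
    if random.random() >= p_deep:
        # Branch 1: process the line shallowly, e.g., paraphrase it
        if not line_is_incomplete:
            return "learning"            # 1.1: nothing more to learn
        return "little learning"         # 1.2: missing explanation goes unnoticed
    # Branch 2: try to process the line deeply, e.g., self-explain it
    if not line_is_incomplete:
        return "learning"                # 2.1: nothing missing from the line
    # 2.2: the line is incomplete; attempt to generate the missing explanation
    if random.random() < 0.7:            # placeholder success rate
        return "learning"                # 2.2.1: self-explanation succeeds
    return "perhaps less learning"       # 2.2.2: self-explanation fails

# Prompted students (higher p_deep) reach the "learning" exits more often:
random.seed(0)
unprompted = [process_line(True, p_deep=0.3) for _ in range(1000)]
prompted = [process_line(True, p_deep=0.8) for _ in range(1000)]
print(unprompted.count("learning"), prompted.count("learning"))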

Conditions of application

When should a prompt for self-explanation be delivered? In many of the studies described on this page, prompts for self-explanation were offered after each step of a worked-out solution. The timing of the prompt may depend on the domain. For example, in Hausmann and VanLehn (2007), the domain was physics, which requires the acquisition of procedural knowledge; the prompt to self-explain was issued after each solution step. For a more conceptual domain, such as the circulatory system, the experimenter in Chi et al. (1994) prompted the students to self-explain after reading each page of a text on the circulatory system. Each page of the text contained roughly one line (or idea). After several pages, the participants became accustomed to the procedure, and turning the page became an implicit prompt for the students to begin self-explaining (Chi, personal communication).

Caveats, limitations, open issues, or dissenting views

Examples typically precede problem solving. For example, Sweller and Cooper (1985, Experiment 2) asked students to study 2 examples in preparation for solving 8 problems. Similarly, Chi et al. (1989) asked students to read through 4 chapters of a physics text, which contained several examples; after studying each chapter, the students solved problems related to the content they had just studied. Finally, Trafton and Reiser (1993) manipulated the presentation of examples and problems using either a blocked design, in which students studied 6 examples and then solved 6 problems, or an alternating design, in which students studied one example, then solved one problem, continuing this sequence until all examples and problems were completed.

The order of solving and studying examples in Hausmann and VanLehn (2007) differed from traditional research on example studying. In their experiment, students attempted to solve a problem first and then studied an isomorphic example, alternating between solving problems and studying examples until all four problems were solved and all three examples were studied. Problems were presented first to capitalize on the strengths of impasse-driven learning (VanLehn, 1988): the problems created conditions in which an impasse might be reached while solving, and the example would then demonstrate a smooth, expert solution to the same problem.
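To make the contrast between these presentation orders explicit, here is a small sketch (Python; the function names are ours) that generates each sequence of examples (E) and problems (P):

def blocked(n_examples, n_problems):
    """Blocked order (Trafton & Reiser, 1993): all examples, then all problems."""
    return "E" * n_examples + "P" * n_problems

def alternating(n_pairs):
    """Alternating order (Trafton & Reiser, 1993): one example, then one problem."""
    return "EP" * n_pairs

def solve_first(n_problems, n_examples):
    """Solve-first order (Hausmann & VanLehn, 2007): a problem, then an isomorphic example."""
    steps = []
    for i in range(n_problems):
        steps.append("P")
        if i < n_examples:
            steps.append("E")
    return "".join(steps)

print(blocked(6, 6))       # EEEEEEPPPPPP
print(alternating(6))      # EPEPEPEPEPEP
print(solve_first(4, 3))   # PEPEPEP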

Variations (descendants)

Corrective self-explanation

Generalizations (ascendants)

Example-rule coordination principle

References

Aleven, V. A. W. M. M., & Koedinger, K. R. (2002). An effective metacognitive strategy: Learning by doing and explaining with a computer-based Cognitive Tutor. Cognitive Science, 26(2), 147-179.

Bielaczyc, K., Pirolli, P., & Brown, A. L. (1995). Training in self-explanation and self-regulation strategies: Investigating the effects of knowledge acquisition activities on problem solving. Cognition and Instruction, 13(2), 221-252.

Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.

Chi, M. T. H. (2000). Self-explaining expository texts: The dual processes of generating inferences and repairing mental models. In R. Glaser (Ed.), Advances in instructional psychology (pp. 161-238). Mahwah, NJ: Erlbaum.

Chi, M. T. H., DeLeeuw, N., Chiu, M.-H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.

Hausmann, R. G. M., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7(1), 4-14.

Hausmann, R. G. M., & VanLehn, K. (2007). Explaining self-explaining: A contrast between content and generation. In R. Luckin, K. R. Koedinger & J. Greer (Eds.), Artificial intelligence in education: Building technology rich learning contexts that work (Vol. 158, pp. 417-424). Amsterdam: IOS Press. http://learnlab.org/uploads/mypslc/publications/hausmannvanlehn2007_final.pdf

Lovett, M. C. (1992). Learning by problem solving versus by examples: The benefits of generating and receiving information. In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society (pp. 956-961). Hillsdale, NJ: Erlbaum.

McNamara, D. S., Levinstein, I. B., & Boonthum, C. (2004). iSTART: Interactive strategy training for active reading and thinking. Behavior Research Methods, Instruments, & Computers, 36, 222-233. http://www.ingentaconnect.com/content/psocpubs/brm/2004/00000036/00000002/art00007

Renkl, A. (1997). Learning from worked-out examples: A study on individual differences. Cognitive Science, 21(1), 1-29.

Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2(1), 59-89.

Trafton, J. G., & Reiser, B. J. (1993). The contributions of studying examples and solving problems to skill acquisition. In Proceedings of the Fifteenth Annual Conference of the Cognitive Science Society (pp. 1017-1022). Hillsdale, NJ: Erlbaum.

VanLehn, K. (1988). Toward a theory of impasse-driven learning. In H. Mandl & A. Lesgold (Eds.), Learning issues for intelligent tutoring systems (pp. 19-41). New York: Springer.