Self-explanation: Meta-cognitive vs. justification prompts
Contents
The Effects of Interaction on Robust Learning
Robert G.M. Hausmann, Brett van de Sande, Sophia Gershman, & Kurt VanLehn
Summary Table
| PIs | Robert G.M. Hausmann (Pitt), Brett van de Sande (Pitt), Sophia Gershman (WHRHS), & Kurt VanLehn (Pitt) |
| Other Contributors | Tim Nokes (Pitt) |
| Study Start Date | Sept. 1, 2007 |
| Study End Date | Aug. 31, 2008 |
| LearnLab Site | Watchung Hills Regional High School (WHRHS) |
| LearnLab Course | Physics |
| Number of Students | N = 75 |
| Total Participant Hours | 150 hrs. |
| DataShop | Loaded: data not collected |
Abstract
The literature on studying examples and text generally shows that students learn more when they are prompted to self-explain the text as they read it. Experimenters have typically used two types of prompts: meta-cognitive and justification. An example of a meta-cognitive prompt would be, "What did this sentence tell you that you didn't already know?" and an example of a justification prompt would be, "What reasoning or principles justify this sentence's claim?" To date, no study has included both types of prompts, and yet there are good theoretical reasons to expect them to have differential impacts on student learning. This study will directly compare them in a single experiment using high school physics students.
Background and Significance
Glossary
Research question
How is robust learning affected by self-explanation vs. jointly constructed explanations?
Independent variables
Only one independent variable, with two levels, was used:
- Explanation-construction: individually constructed explanations vs. jointly constructed explanations
Prompting for an explanation was intended to increase the probability that the individual or dyad would traverse a useful learning-event path.
Hypothesis
Dependent variables
- Near transfer, immediate: electrodynamics problems solved in Andes during the laboratory period (2 hrs.).
Results
Laboratory Experiment
Procedure
All of the participants were enrolled in a year-long, high-school physics course. The task domain, electrodynamics, was taught at the beginning of the Spring semester; by that point, all of the students were familiar with the Andes physics tutor and did not need any training in the interface. Unlike our previous lab experiment, participants did not solve a separate warm-up problem. Instead, they started the experiment with a fairly complex problem.
Participants were randomly assigned to condition. The first activity was to train the participants in their respective explanation activities. They read the instructions to the experiment, presented on a webpage, followed by the prompts used after each step of the example.
After reading the experimental instructions, participants watched an introductory video on the Andes physics tutor. Afterwards, they solved an electrodynamics problem. Once they finished, they watched a video of an isomorphic problem being solved. Note that this procedure differs slightly from those used in the past, where examples are presented before solving problems (e.g., Sweller & Cooper, 1985, Experiment 2). The videos were decomposed into steps, and students were prompted to explain each step. The cycle of explaining examples and solving problems repeated until either four problems were solved or two hours elapsed. The first problem served as a warm-up exercise, and the problems became progressively more complex.
As in our first experiment, we used normalized assistance scores. A normalized assistance score was defined as the sum of all the errors and requests for help on a problem, divided by the number of entries made in solving that problem. Thus, lower assistance scores indicate that the student derived a solution while making fewer mistakes and getting less help, demonstrating better performance and understanding.
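The score definition above can be sketched as a small computation. This is a minimal illustration, not code from the study; the function name and example counts are hypothetical.

```python
# Hypothetical sketch of the normalized assistance score described above:
# (errors + help requests) / number of entries on a problem.

def normalized_assistance(errors: int, help_requests: int, entries: int) -> float:
    """Return the normalized assistance score for one solved problem."""
    if entries <= 0:
        raise ValueError("a solved problem must have at least one entry")
    return (errors + help_requests) / entries

# Illustrative values: 3 errors and 2 help requests across 25 entries.
score = normalized_assistance(3, 2, 25)
print(score)  # 0.2
```

A lower score (here 0.2) means the student made relatively few errors and help requests per solution entry.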