=== Research question ===
Is robust learning better achieved by observing multimedia displays integrated with [[deep-level reasoning questions]], prompts for reflection, or [[self-explanation]]?
=== Independent variables ===
Revision as of 10:11, 18 April 2007
Investigating the robustness of vicarious learning: Sense making with deep-level reasoning questions
Scotty Craig, Kurt VanLehn, and Micki Chi
|Other Contributors||Robert N. Shelby (USNA), Brett van de Sande (Pitt)|
|Study Start Date||12-1-05|
|Study End Date||8-1-06|
|Number of Students||N = 17|
|Total Participant Hours||24 hrs.|
|DataShop||Target date: June 15, 2007|
Earlier work (Craig et al., 2006; Gholson & Craig, 2006) found that inserting relevant deep-level questions into observed video material both increased deep-level question asking and improved learning. These lab studies had students learn computer literacy topics by viewing videos of both monologues and dialogues; some material included deep-level questions, some included shallow questions, and some included no questions. The conditions that included deep-level questions learned more than the others. However, it is not known how this method compares with other methods for enhancing learning from observed materials (e.g., prompting for self-explanation), nor whether the effect is useful outside the lab setting.
Our in vivo experiment presented identical core content on magnetism, using example problems from the Andes tutoring system, in three formats. All three formats were presented as a video of a worked example, with each step corresponding to a knowledge component. Each knowledge component was preceded by a deep-level question (e.g., "What are the implications of having the magnetic field close to an electrified wire?"), a prompt for learners to reflect on the material (i.e., a pause in the video), or a self-explanation prompt (e.g., "Please begin your self-explanation"). Andes transfer and long-term robust learning were measured. The learners' interactions with Andes were coded for differences in completion time, within-task behavior, and completion rates on the Andes homework.
The current study varied the level of guidance provided, by presenting students with a deep-level reasoning condition, a self-explanation condition, and a reflection condition. The deep-level reasoning questions provided a step-by-step guide that scaffolded the learner during the learning process. The self-explanation condition asked students to build the links of these scaffolds themselves by self-explaining the steps. As a control for time on task, the reflection condition presented materials to the participants with a pause before each step.
A guided-learning hypothesis would predict that, since the deep-level questions provided a constant cognitive guide, the deep-level question condition would improve learning over the reflection condition, and possibly over the self-explanation condition if students could not produce the guidance themselves while generating self-explanations. Alternatively, a content-equivalency hypothesis would predict that, since all three conditions provide the same content, all three should produce learning of the material (Klahr & Nigam, 2004).
- Normal post-test, homework on Andes: After training, students did their regular homework problems using Andes. Students could do them whenever they wanted, but most completed them just before the exam. The most similar homework problems (near transfer) were analyzed.
Participants' homework performance was investigated using Andes homework scores and completion-time data. There were no differences among the three groups on Andes homework scores. However, there was a marginally significant trend on completion time favoring participants in the deep-level question condition over those in the reflection condition (t(9) = 2.14, p = .07). This difference became significant when participants in the two unguided conditions were collapsed and compared against participants in the guided condition (t(15) = 2.41, p < .05). This significant difference represented a 55% savings in time to complete the problems for participants in the deep-level questions condition. Both findings should be interpreted with caution, however, given that an average of 39 days elapsed between initial training and homework completion.
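The collapsed comparison above is a standard independent-samples t-test with pooled variance (degrees of freedom n1 + n2 - 2 = 15 for the 17 participants). As a sketch only, using hypothetical completion times (the actual data are not reported here), it can be computed as:

```python
import math

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance.
    Returns (t, degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se, na + nb - 2

# Hypothetical completion times in minutes (illustrative values only):
guided = [30, 35, 28, 33, 31, 29]                       # deep-level question condition
unguided = [65, 70, 58, 72, 66, 61, 75, 68, 63, 69, 71]  # reflection + self-explanation

t, df = pooled_t(guided, unguided)
# Proportional time savings of the guided group relative to the unguided group:
savings = 1 - (sum(guided) / len(guided)) / (sum(unguided) / len(unguided))
```

A negative t here indicates faster completion in the guided group; with these made-up values the savings comes out near the reported 55%, but the numbers are placeholders, not the study's data.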
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis. The IC cluster’s hypothesis is that robust learning occurs when two conditions are met:
- The learning event space should have paths that are mostly learning-by-doing, along with alternative paths where a second agent does most of the work. In this study, the deep-level question condition and the self-explanation condition could comprise the learning-by-doing paths, in that learners are guided to produce clearer mental models of the material. Alternatively, the participants in the reflection condition received only pauses during the presentation, so they were not guided to produce better mental models; they relied more on the video to provide the relevant links instead of actively constructing those links themselves.
- The student should take the learning-by-doing path unless it becomes too difficult. This study attempts to control the student's path choice by presenting them with deep-level questions that guide them in building better mental models. The self-explanation and reflection conditions, however, require the students to produce the learning-by-doing path themselves. In these conditions, if that production becomes too difficult, the students will not learn. This study tests whether students learn more by being encouraged to take a learning-by-doing path, via deep-level questions, than an alternative path. Since none of the students attempted more than a few self-explanations, it appears that the students in the self-explanation condition did not take the learning-by-doing path.
- Presented at LRDC Supergroup meeting July, 2006
- Presented at PSLC Roadshow - Memphis November, 2006
- Presented at LRDC Graduate student recruitment - Pittsburgh February, 2007
- VanLehn, K., Hausmann, R., & Craig, S. (2007, April). In vivo experimentation for understanding robust learning: Pros and cons. Presented at the PSLC AERA Symposium.
- VanLehn, K., Hausmann, R., & Craig, S. (2007). Presented at the PSLC EARLI Symposium.
- Chi, M. T. H., Hausmann, R. G. M., & Roy, M. (under revision). Learning from observing tutoring collaboratively: Insights about tutoring effectiveness from vicarious learning. Cognitive Science.
- Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.
- Chi, M. T. H., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.
- Craig, S. D., Driscoll, D., & Gholson, B. (2004). Constructing knowledge from dialog in an intelligent tutoring system: Interactive learning, vicarious learning, and pedagogical agents. Journal of Educational Multimedia and Hypermedia, 13, 163-183. 
- Craig, S. D., Sullins, J., Witherspoon, A., & Gholson, B. (2006). The deep-level reasoning questions effect: The role of dialog and deep-level reasoning questions during vicarious learning. Cognition and Instruction, 24(4), 565-591.
- Gholson, B. & Craig, S. D. (2006). Promoting constructive activities that support vicarious learning during computer-based instruction. Educational Psychology Review, 18, 119-139. 
- Klahr, D. & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15, 661-667.
This project shares features with the following research projects:
Use of questions during learning
- Reflective Dialogues (Katz)
- Post-practice reflection (Katz)
- FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)
Self-explanations during learning
- The Effects of Interaction on Robust Learning (Hausmann & Chi)
- A comparison of self-explanation to instructional explanation (Hausmann & VanLehn)
Learning by Observing
--Scotty 12:12, 19 September 2006 (EDT)