== Investigating the robustness of vicarious learning: Sense making with deep-level reasoning questions  ==
 
''Scotty Craig, Kurt VanLehn, and Micki Chi''


=== Summary Table ===
{| border="1" cellspacing="0" cellpadding="5" style="text-align: left;"
| '''PI''' || Scotty Craig
|-
| '''Other Contributors''' || Robert N. Shelby (USNA), Brett van de Sande (Pitt)
|-
| '''Study Start Date''' || 12-1-05
|-
| '''Study End Date''' || 8-1-06
|-
| '''LearnLab Site''' || USNA
|-
| '''LearnLab Course''' || Physics
|-
| '''Number of Students''' || ''N'' = 17
|-
| '''Total Participant Hours''' || 24 hrs.
|-
| '''DataShop''' || Target date: June 15, 2007
|}

<br>
  
 
=== Abstract ===
Earlier work (Craig et al., 2006; Gholson & Craig, in press) found that inserting relevant [[deep-level question]]s into observed video material both increased deep-level question asking and improved learning. These lab studies had students learn topics in computer literacy by viewing videos of both monologues and dialogues; some material included [[deep-level question]]s, some included shallow questions, and some included no questions. Participants in the conditions that included [[deep-level question]]s learned more than the others. However, it is not known how this method compares to other methods for enhancing learning from observed materials (e.g. prompting for [[self-explanation]]). It is also not known if this effect is useful for learning outside the lab setting.
Our [[in vivo experiment]] presented identical core content on magnetism, drawn from example problems in the Andes tutoring system, in three formats. All three formats were presented as a video of a [[worked examples|worked example]] with each step corresponding to a [[knowledge component]]. Each of the [[knowledge components]] was preceded by a [[deep-level question]] (e.g. What are the implications of having the magnetic field close to an electrified wire?), a prompt for learners to reflect on the material (i.e. a pause in the video), or a [[self-explanation]] prompt (e.g. Please begin your self-explanation). [[Andes]] transfer and long-term [[robust learning]] were measured. The learners’ interaction with Andes was coded for differences in completion time, within-task behavior, and completion rates of the Andes homework.
Participants’ homework performance was investigated by looking at Andes homework scores and completion-time data. There were no differences on Andes homework scores among the three groups. However, there was a difference in the amount of time needed to complete the homework: a significant 55% savings in completion time for participants in the deep-level questions condition. Both findings are difficult to interpret given that an average of 39 days elapsed between initial training and homework completion.
  
 
=== Glossary ===
See [[:Category:Craig questions|Craig deep-level questions Glossary]]
  
 
=== Research question ===
Is robust learning better achieved by observing multimedia displays integrated with [[deep-level question]]s, prompts for [[reflection questions|reflection]], or [[self-explanation]]?
  
 
=== Independent variables ===
The current study varied the level of guidance by presenting students with a deep-level reasoning condition, a self-explanation condition, and a reflection condition. The deep-level reasoning questions provided a step-by-step guide that scaffolded the learner during the learning process. The self-explanation condition asked students to build the links of these scaffolds themselves by self-explaining the steps. As a control for time on task, the reflection condition presented the materials with a pause before each step.

'''Examples for each condition'''

{| border="1" cellspacing="0" cellpadding="0" style="text-align: left;"
| ''Deep-level question'' || ''Self-explanation'' || ''Reflection''
|-
| What effect does a straight current-carrying wire have on magnetic field lines? || Please begin your self-explanation || Pause for 10 seconds
|-
| ''Corresponding example text''
|-
| Magnetic field lines near a straight current-carrying wire take the form of concentric circles with the wire at their center
|}

<br>
  
 
=== Hypothesis ===
A guided learning hypothesis would predict that, since the deep-level questions provided a constant cognitive guide, the deep-level question condition would improve learning over the reflection condition, and possibly over the self-explanation condition if students could not generate equivalent guidance themselves while producing self-explanations. Alternatively, a content equivalency hypothesis would predict that, since all three conditions provide the same content, they should all produce equivalent learning of the material (Klahr & Nigam, 2004).
  
 
=== Dependent variables ===
* ''[[Long-term retention]], homework on Andes'':  After training, students did their regular homework problems using Andes. Students could do them whenever they wanted, but most students normally completed them just before the exam (''M'' = 39 days after training). The more similar homework problems (near transfer) were analyzed.

=== Results ===
Participants’ homework performance was investigated by looking at Andes homework scores and completion-time data. There were no differences on Andes homework scores among the three groups. However, there was a marginally significant trend on the completion-time data favoring participants in the deep-level question condition over those in the reflection condition (''t''(9) = 2.14, ''p'' = .07). This difference became significant when participants in the two unguided conditions were collapsed and compared against participants in the guided condition (''t''(15) = 2.41, ''p'' < .05). This significant difference represented a 55% savings in time to complete the problem for participants in the deep-level questions condition.
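
The collapsed comparison above is a standard independent-samples ''t''-test with pooled variance: with ''N'' = 17 students across two groups, the degrees of freedom are ''n''<sub>1</sub> + ''n''<sub>2</sub> − 2 = 15. A minimal sketch in Python, using entirely hypothetical completion times (the 6/11 split between guided and unguided groups and the times themselves are illustrative assumptions, not values reported in this study):

```python
from math import sqrt

def pooled_t(sample_a, sample_b):
    """Independent-samples t statistic with pooled variance.

    Returns (t, df) for the difference mean(sample_a) - mean(sample_b).
    """
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    ss_a = sum((x - mean_a) ** 2 for x in sample_a)  # sum of squared deviations
    ss_b = sum((x - mean_b) ** 2 for x in sample_b)
    df = n_a + n_b - 2
    pooled_var = (ss_a + ss_b) / df
    se = sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return (mean_a - mean_b) / se, df

# Hypothetical completion times in minutes; the real data are not in this report.
unguided = [22, 25, 20, 24, 23, 21, 26, 22, 24, 25, 23]  # reflection + self-explanation, n = 11
guided = [10, 12, 11, 9, 10, 11]                          # deep-level questions, n = 6

t, df = pooled_t(unguided, guided)
print(f"t({df}) = {t:.2f}")  # df = 6 + 11 - 2 = 15, matching the report's t(15)
```

The resulting ''t'' is compared against the critical value for 15 degrees of freedom, exactly as in the reported ''t''(15) = 2.41 comparison.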
  
 
=== Explanation ===
 
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s hypothesis is that robust learning occurs when two conditions are met:
* The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work.  In this study, the deep-level question condition and the self-explanation condition could comprise the learning-by-doing paths in that learners are guided to produce clearer mental models of the material. The participants in the reflection condition, by contrast, only received pauses during the presentation and thus were not guided to produce better mental models; they relied more on the video to provide the relevant links for them instead of actively constructing those links.
* The student should take the learning-by-doing path unless it becomes too difficult.  This study attempts to control the student’s path choice by presenting deep-level questions that guide students in building better mental models. The self-explanation and reflection conditions, however, require the students to construct the learning-by-doing path themselves. In these conditions, if that construction becomes too difficult, the students will not learn. This study tests whether students learn more when encouraged to take a learning-by-doing path, via deep-level questions, than when taking an alternative path.  Since none of the students attempted more than a few self-explanations, it appears that the students in the self-explanation condition did not take the learning-by-doing path.
  
  
 
=== Annotated bibliography ===
*  Presented at LRDC Supergroup meeting, July 2006
*  Presented at PSLC Roadshow, Memphis, November 2006
*  Presented at LRDC Graduate student recruitment, Pittsburgh, February 2007
*  VanLehn, K., Hausmann, R., & Craig, S. (2007, April). PSLC AERA Symposium: In vivo experimentation for understanding robust learning: Pros and cons.
*  VanLehn, K., Hausmann, R., & Craig, S. (2007). PSLC EARLI Symposium.
*  Craig, S. D., VanLehn, K., & Chi, M. T. H. (2008). Promoting learning by observing deep-level reasoning questions on quantitative physics problem solving with Andes. In K. McFerrin, R. Weber, R. Carlsen, & D. A. Willis (Eds.), The proceedings of the 19th International Conference for the Society for Information Technology & Teacher Education (pp. 1065-1068). Chesapeake, VA: AACE.
  
=== References ===
* Chi, M. T. H., Roy, M., & Hausmann, R. G. M. (in press). Learning from observing tutoring collaboratively: Insights about tutoring effectiveness from vicarious learning. ''Cognitive Science.''  
 
 
* Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. ''Cognitive Science, 13'', 145-182.
 
* Chi, M. T. H., de Leeuw, N., Chiu, M., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. ''Cognitive Science, 18'', 439-477.
 
* Craig, S. D., Driscoll, D., & Gholson, B. (2004). Constructing knowledge from dialog in an intelligent tutoring system: Interactive learning, vicarious learning, and pedagogical agents. ''Journal of Educational Multimedia and Hypermedia, 13'', 163-183. [http://andes3.lrdc.pitt.edu/~scraig/publications/Craigetal2004VL.pdf]
* Craig, S. D., Sullins, J., Witherspoon, A., & Gholson, B. (2006). The deep-level-reasoning-question effect: The role of dialogue and deep-level-reasoning questions during vicarious learning. ''Cognition and Instruction, 24''(4), 565-591.
* Gholson, B. & Craig, S. D. (2006). Promoting constructive activities that support vicarious learning during computer-based instruction. ''Educational Psychology Review, 18'', 119-139. [http://andes3.lrdc.pitt.edu/~scraig/publications/Gholson&Craig2006.pdf]
 
* Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. ''Psychological Science, 15'', 661-667.

=== Connections ===
This project shares features with the following research projects:

Use of Questions during learning
* [[Reflective_Dialogues_%28Katz%29 | Reflective Dialogues (Katz)]]
* [[Post-practice reflection (Katz) | Post-practice reflection (Katz)]]
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]

Self explanations during learning
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann & Chi)]]
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann & VanLehn)]]

Learning by Observing
* [[Craig_observing | Learning from Problem Solving while Observing Worked Examples (Craig, Gadgil, & Chi)]]
 
--[[User:Scraig@pitt.edu|Scotty]] 12:12, 19 September 2006 (EDT)
