Scaffolding Problem Solving with Embedded Examples to Promote Deep Learning
Michael Ringenberg and Kurt VanLehn
Summary Table
PIs | Kurt VanLehn, Donald Treacy, Michael Ringenberg
Study Start Date | 18 February 2005
Study End Date | 4 April 2005
LearnLab Site | USNA
LearnLab Course | General Physics II
Number of Students | N = 46
Total Participant Hours | 20 minutes beyond required coursework
DataShop | No; Andes data still incompatible
Abstract
This in vivo experiment, conducted in the Physics LearnLab, compared the relative utility of an intelligent tutoring system that used hint sequences with a version that used completely justified examples for learning college-level physics. To test which strategy produced better gains in competence, two versions of Andes were used: one offered participants hint sequences in response to their help requests, and the other offered completely justified examples. We found that providing examples was at least as effective as hint sequences and was more efficient in terms of the number of problems it took to reach the same level of mastery.
Background and Significance
When students use a tutoring system with hint sequences, they sometimes engage in help abuse on virtually every step (citation needed). When this happens, the tutoring system ends up telling them each step, so they are essentially generating a worked-out example. For some students this may not be harmful, as examples can be effective instructional material (citation needed).
Glossary
See Ringenberg Examples-as-Help Glossary
Research question
Will robust learning ensue if students are presented with relevant, completely justified examples instead of hint sequences whenever they ask for help?
Independent variables
Participants worked on assigned homework problems covering inductors using Andes at home. When they requested help on a step, they got either:
- a relevant, completely justified example (the Examples condition), or
- the normal Andes hint sequence (the Hints condition).
When they clicked the "Done" button, the example or hint disappeared and they returned to problem-solving mode. Thus, students in the Examples condition could not easily copy steps from the example into the problem they were solving.
Hypothesis
Providing completely justified examples instead of hint sequences when students ask for help on steps will promote the learning of knowledge components and help students generalize those knowledge components appropriately.
Dependent variables & Results
- Near transfer, retention
  - Performance on problems involving inductors on the normal mid-term exam that were similar to the training problems: There was no significant difference in performance between the two conditions. Both conditions did better than a baseline of participants who solved no homework problems.
- Transfer task, deep structure assessment
  - Problem matching task: No significant difference in performance between the two conditions; however, participants in the Examples condition solved fewer training problems. Both conditions did better than a baseline of participants who solved no homework problems.
- Homework
  - Number of problems completed: Participants in the Examples condition solved significantly fewer problems than participants in the Hints condition.
  - Time on task: Participants in the Examples condition spent less time solving problems than those in the Hints condition. Participants in both conditions spent about the same amount of time per problem.
Explanation
Annotated bibliography
- Ringenberg, M. A., & VanLehn, K. (2006). Scaffolding problem solving with annotated, worked-out examples to promote deep learning. Paper presented at ITS 2006, Taiwan. Winner of the Best Paper First Authored by a Student Award.