
Policy World: Combining an intelligent tutor with an educational game

Summary Table

Study 1

PIs: Matt Easterday, Vincent Aleven
Other Contributors: Richard Scheines, Sharon Carver
Study Start Date:
Study End Date: August 2010
LearnLab Site: NA
LearnLab Course: NA
Number of Students: 80
Total Participant Hours:
DataShop:

Abstract

In the educational game Policy World, students search for and analyze scientific evidence that they then use to debate a computer opponent on topics like school-choice. Unfortunately, commonly-used game mechanics that impose stiff penalties to build anticipation and motivation (e.g., restarting a level after “dying”) conflict with traditional cognitive tutoring mechanics such as immediate, direct assistance. In this study I compare the effects of immediate vs. delayed cognitive assistance on learning and motivation in an educational game. The laboratory study compares 80 college undergraduates randomly assigned to two groups. Students in the traditional-tutoring group receive immediate, step-level cognitive assistance after making an error. Students in the cognitive-game group receive only situational game feedback. However, after failing a level, these students are allowed to replay the level and are then given immediate, step-level cognitive assistance. Outcome measures include both learning (e.g., whether the student picks a policy position based on evidence and can cite evidence to support that position) and motivation (e.g., the student’s time on task and self-reported attitude toward the game). Log data will be analyzed for major sources of errors in: searching for evidence, comprehending and evaluating causal claims, creating diagrammatic representations of causal claims, synthesizing multiple claims, and choosing recommendations. The study will determine whether we can increase motivation in traditional tutors by adding game mechanics like fantasy and opposition, or whether we must fundamentally alter how we provide assistance in the context of an educational game.

Background & Significance

In the educational game Policy World, students search for and analyze scientific evidence that they then use to debate a computer opponent on topics like school-choice. Unfortunately, commonly-used game mechanics that impose stiff penalties to build anticipation and motivation (e.g., restarting a level after “dying”) conflict with traditional cognitive tutoring mechanics such as immediate, direct assistance.

Glossary

Hypotheses

Providing immediate cognitive assistance will lead to more efficient learning (in terms of time) but decrease motivation when compared to more game-like feedback.

However, we may be able to get the best of both worlds by using an intelligent-novice approach, where students receive immediate cognitive assistance only AFTER they have had the chance to play through a game level (and die).

Independent Variables

Students in the traditional-tutoring group receive immediate, step-level cognitive assistance after making an error. Students in the cognitive-game group receive only situational game feedback. However, after failing a level, these students are allowed to replay the level and are then given immediate, step-level cognitive assistance.
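
As an illustration only, the two conditions can be thought of as two assistance policies. The following Python is a minimal sketch under assumed names (LevelState, traditional_tutoring_feedback, cognitive_game_feedback); none of it is taken from the actual Policy World implementation.

 # Minimal sketch of the two feedback conditions; all names are hypothetical,
 # not drawn from the actual Policy World code.
 from dataclasses import dataclass, field

 @dataclass
 class LevelState:
     failed: bool = False                      # has the student already failed this level?
     errors: list = field(default_factory=list)

 def traditional_tutoring_feedback(step_correct: bool, state: LevelState) -> str:
     """Traditional-tutoring condition: immediate, step-level assistance on every error."""
     if step_correct:
         return "no assistance needed"
     return "immediate step-level hint"

 def cognitive_game_feedback(step_correct: bool, state: LevelState) -> str:
     """Cognitive-game condition: situational game feedback on the first attempt;
     step-level assistance is given only when replaying a failed level."""
     if step_correct:
         return "no assistance needed"
     if state.failed:
         # Replay of a failed level: now behave like the traditional tutor.
         return "immediate step-level hint"
     # First attempt: the game responds in-fiction (e.g., the debate opponent
     # gains ground), but no cognitive assistance is offered.
     state.errors.append("error recorded")
     return "situational game feedback only"

In this sketch, an error on a first attempt in the cognitive-game condition produces only situational game feedback, whereas the same error during the replay of a failed level (state.failed is True) produces the same step-level hint as the traditional-tutoring condition.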

Dependent Variables

Outcome measures include both learning (e.g., whether the student picks a policy position based on evidence and can cite evidence to support that position) and motivation (e.g., the student’s time on task and self-reported attitude toward the game).

Log data will be analyzed for major sources of errors in: searching for evidence, comprehending and evaluating causal claims, creating diagrammatic representations of causal claims, synthesizing multiple claims, and choosing recommendations.
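
Purely as an illustrative sketch of that analysis, incorrect transactions could be tallied by category roughly as follows in Python; the CSV column names ("step_type", "outcome") and the export layout are assumptions for illustration, not the actual DataShop schema.

 # Hypothetical sketch: tally incorrect transactions per skill category from a log export.
 # Column names and the export format are assumed, not the real DataShop schema.
 import csv
 from collections import Counter

 ERROR_CATEGORIES = {
     "searching for evidence",
     "comprehending and evaluating causal claims",
     "creating diagrammatic representations of causal claims",
     "synthesizing multiple claims",
     "choosing recommendations",
 }

 def count_errors_by_category(log_path: str) -> Counter:
     """Count incorrect steps per category in a CSV log export."""
     counts = Counter()
     with open(log_path, newline="") as f:
         for row in csv.DictReader(f):
             if row.get("outcome") == "INCORRECT" and row.get("step_type") in ERROR_CATEGORIES:
                 counts[row["step_type"]] += 1
     return counts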

Planned Experiments

March 2010: laboratory study with 80 college undergraduates randomly assigned to two groups.

Explanation

Further Information

Connections

Annotated Bibliography

References

Future Plans