Chi - Induction of Adaptive Pedagogical Tutorial Tactics

== Project Overview ==
 
This project will address goal 1 of the CMDM thrust, and in particular will use DataShop datasets (90 in 5 years) to produce better cognitive models and verify those models with in vivo experiments. Cognitive models drive the many instructional decisions that automated tutors currently make, whether that is how to organize instructional messages, sequence topics and problems in a curriculum, adapt pacing, or select materials and tasks appropriate to student needs. Cognitive models also appear critical to accurate assessment of self-regulated learning skills and motivational states.

This project will also address goal 3 of the CMDM thrust, and in particular will investigate the application of a general data-driven methodology, Reinforcement Learning (RL), to derive adaptive pedagogical tutorial tactics directly from pre-existing interaction data. More specifically, this project is designed to: 1) help computer tutors employ effective, adaptive pedagogical tutorial tactics; 2) test the viability of using RL, especially POMDPs, to induce pedagogical tactics; 3) show that pedagogical tutorial tactics are a potential source of learning power through which computer tutors can improve students' learning; and 4) explore the underlying causes of the effectiveness of the induced pedagogical tactics.

Multiple algorithms have been developed for automated discovery of the attributes or factors that make up a cognitive model (or "Q-matrix"), including various Q-matrix discovery algorithms such as Rule Spaces, Knowledge Spaces, Learning Factors Analysis (LFA), and Bayesian exponential-family PCA. This project will create an infrastructure for automatically applying such algorithms to data sets in the DataShop, discovering better cognitive models, and evaluating whether such models improve tutors.
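To make the evaluation step concrete, the following is a minimal sketch, not the project's actual implementation, of scoring a candidate cognitive model (a Q-matrix mapping steps to knowledge components) against log data in the spirit of LFA's Additive Factors Model: each student step is predicted from a student proficiency term plus a difficulty and a learning-rate term for every knowledge component assigned to the step. The record layout, field names, and the use of scikit-learn logistic regression are illustrative assumptions.

<pre>
# Minimal Additive-Factors-style scoring of a candidate KC model (Q-matrix).
# Illustrative only: record layout and field names are assumptions, not DataShop's schema.
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def afm_features(steps, kc_of_step):
    """steps: time-ordered dicts with 'student', 'step', 'correct' (0/1)."""
    opportunity = {}                                 # (student, kc) -> prior practice count
    rows, y = [], []
    for s in steps:
        feats = {'stu=' + s['student']: 1.0}         # student proficiency intercept
        for kc in kc_of_step[s['step']]:             # KCs the candidate model assigns
            n = opportunity.get((s['student'], kc), 0)
            feats['kc=' + kc] = 1.0                  # KC difficulty
            feats['learn=' + kc] = float(n)          # KC learning rate x opportunities
            opportunity[(s['student'], kc)] = n + 1
        rows.append(feats)
        y.append(s['correct'])
    return rows, np.array(y)

def score_model(steps, kc_of_step):
    rows, y = afm_features(steps, kc_of_step)
    X = DictVectorizer().fit_transform(rows)
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    loglik = float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))
    bic = -2 * loglik + X.shape[1] * np.log(len(y))  # penalize model size
    return loglik, bic                               # lower BIC suggests a better KC model
</pre>

Competing Q-matrices for the same data set can then be compared by which yields the lower BIC (or the better cross-validated likelihood).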
In designing e-learning environments that effectively support student learning, one faces many decisions about how the system should interact with the student at any given point. For any form of e-learning environment, the system's behavior can be viewed as a sequential decision process wherein, at each discrete step, the system is responsible for selecting the next action to take. Each of these system decisions affects the user's subsequent actions and performance. Pedagogical strategies are defined as policies for deciding the next system action when multiple actions are available. It is often unclear how to make each of these decisions effectively, because its impact on learning often cannot be observed immediately, and the effectiveness of one decision also depends on the effectiveness of subsequent decisions. Ideally, an effective learning environment should craft and adapt its actions to the user's needs. However, there is no well-established theory on how to make these system decisions effectively. Typically, system designers craft pedagogical strategies by hand and have to make many nontrivial design choices. It is also often difficult to evaluate these hand-coded rules, as their performance depends on a number of factors, such as the content difficulty, the student's incoming competence, the system's usability, and so on.
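As an illustration of this sequential-decision framing, the following is a minimal sketch of inducing a tutorial policy from pre-existing interaction logs with a simple certainty-equivalent RL approach: estimate transition probabilities and expected rewards for each (state, tutor action) pair from the logs, then run value iteration on the resulting MDP and act greedily. The state and action encodings, the reward definition (e.g., a normalized learning gain credited at the end of a session), and all names below are illustrative assumptions; the project also considers POMDP methods, which this fully observable sketch does not cover.

<pre>
# Sketch: induce a pedagogical policy from logged (state, action, reward, next_state)
# tuples via certainty-equivalent value iteration. All names are illustrative.
from collections import defaultdict

def induce_policy(transitions, gamma=0.9, iters=200):
    """transitions: (state, action, reward, next_state) tuples pooled from
    pre-existing tutoring logs; next_state is None at the end of a session."""
    counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> next-state counts
    reward_sum = defaultdict(float)
    visits = defaultdict(int)
    for s, a, r, s2 in transitions:
        counts[(s, a)][s2] += 1
        reward_sum[(s, a)] += r
        visits[(s, a)] += 1

    states = {s for (s, _a) in visits}
    V = defaultdict(float)

    def q_value(s, a):                               # estimated return of taking a in s
        n = visits[(s, a)]
        exp_next = sum(c / n * (V[s2] if s2 is not None else 0.0)
                       for s2, c in counts[(s, a)].items())
        return reward_sum[(s, a)] / n + gamma * exp_next

    for _ in range(iters):                           # value iteration on the estimated MDP
        for s in states:
            V[s] = max(q_value(s, a) for (s0, a) in visits if s0 == s)

    # Greedy policy: in each state, pick the tutorial action with the highest value.
    return {s: max(((s0, a) for (s0, a) in visits if s0 == s),
                   key=lambda sa: q_value(*sa))[1]
            for s in states}
</pre>

In practice the hard design questions are exactly the ones named above: which state features to include, which decision points count as actions, and how to define the reward from learning outcomes.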
One form of genuinely interactive e-learning environment that lies at the center of our interest is the Intelligent Tutoring System (ITS). Existing ITSs typically employ hand-coded pedagogical rules that seek to implement existing cognitive or instructional theories, and these theories may or may not have been well evaluated. For example, in both the CTAT \cite{Anderson1995,koedingerintelligent1997} and Andes \cite{AndesJAIED2005} systems, help is provided upon request because it is assumed that students know when they need help and will only process help when they desire it. Research on gaming, however, has raised some doubts about this by showing that students sometimes exploit these mechanisms for shallow gains, thus voiding the value of the help \cite{DBLP:conf/chi/BakerCKW04,DBLP:conf/its/BakerCK04}.
 
== Planned accomplishments for PSLC Year 6 ==
 


# Develop code and human-computer interfaces for applying, comparing, and interpreting cognitive model discovery algorithms across multiple data sets in DataShop. We will document processes for how the algorithms, like LFA, combine automation and human input to discover or improve cognitive models of specific learning domains.
# Demonstrate the use of the model discovery infrastructure (#1) for at least two discovery algorithms applied to at least 4 DataShop data sets. We will target at least one math (Geometry area and/or Algebra equation solving), one science (Physics kinematics), and one language (English articles) domain.
# For at least one of these data sets, work with associated researchers to perform a “close the loop” experiment whereby we test whether a better cognitive model leads to better or more efficient student learning.
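At its core, the infrastructure named in item 1 above could be a small harness that applies each discovery algorithm to each exported data set and tabulates a comparison statistic, for example the BIC from the scoring sketch in the Project Overview. The following is a hedged sketch under that assumption; the function and data set names are placeholders, not DataShop's actual interfaces.

<pre>
# Hypothetical comparison harness: apply candidate KC-model discovery algorithms
# to several data sets and tabulate a fit statistic for each combination.
def compare_models(datasets, algorithms, score_model):
    """datasets: name -> step-level records; algorithms: name -> function that
    maps a data set to a candidate Q-matrix (step -> list of KCs)."""
    results = {}
    for ds_name, steps in datasets.items():
        for alg_name, discover in algorithms.items():
            kc_of_step = discover(steps)                  # candidate cognitive model
            loglik, bic = score_model(steps, kc_of_step)  # e.g., the AFM-style scorer above
            results[(ds_name, alg_name)] = bic
    for (ds, alg), bic in sorted(results.items()):
        print(f"{ds:<30} {alg:<25} BIC={bic:,.1f}")
    return results
</pre>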

== Integrated Research Results and High Profile Publication ==

Establishing that cognitive models of academic domain knowledge in math, science, and language can be discovered from data would be an important scientific achievement. The achievement will be greater to the extent that the discovered models involve deep or integrative knowledge components that are not directly apparent in surface task structure (e.g., model discovery in the Geometry area domain isolated a problem decomposition skill). The statistical model structure of competing discovery algorithms promises to shed new light on the nature or extent of regularities or laws of learning, such as the power or exponential shape of learning curves, whether the complexity of task behavior is due to human or domain characteristics (the "ant on the beach" question), and whether there are systematic individual differences in student learning rates. We expect that integrative results of this project can be published in high-profile general journals (e.g., Science or Nature) or in more specialized technical journals (e.g., Machine Learning or JMLR) or psychological journals (e.g., Cognitive Science or Learning Science).
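For reference, the two learning-curve shapes at issue are commonly written as follows, where <math>E_t</math> is the expected error rate at the <math>t</math>-th practice opportunity for a knowledge component and <math>a</math>, <math>b</math>, <math>c</math> are free parameters (the exact parameterization varies across the literature):

<math>E_t = a\,t^{-b} \quad \text{(power law)}, \qquad E_t = a\,e^{-c\,t} \quad \text{(exponential)}</math>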

== Year 6 Project Deliverables ==

* Develop code and human-computer interfaces for applying, comparing, and interpreting cognitive model discovery algorithms across multiple data sets in DataShop.
* Demonstrate the use of the model discovery infrastructure for at least two discovery algorithms applied to at least 4 DataShop data sets.
* For at least one of these data sets, work with associated researchers to perform a “close the loop” experiment whereby we test whether a better cognitive model leads to better or more efficient student learning.

== 6th Month Milestone ==

By March 2010 we will 1) be able to run the LFA algorithm on PSLC data sets obtained through the DataShop web services, 2) have run model discovery using at least one algorithm on at least two data sets, and 3) have designed and, ideally, run the close-the-loop experiment.
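As a usage illustration only, a tab-delimited student-step export could be adapted to the scoring sketch in the Project Overview roughly as follows; the file name, column names, and value encodings are assumptions and would need to be matched to the real export format rather than taken from this sketch.

<pre>
# Hypothetical glue: read a tab-delimited student-step export and score a KC model.
# Column names and value encodings below are assumptions, not the actual export schema.
import csv

def load_steps(path, kc_column='KC (Default)'):
    steps, kc_of_step = [], {}
    with open(path, newline='') as f:
        for row in csv.DictReader(f, delimiter='\t'):
            step = row['Step Name']
            kc_of_step.setdefault(step, [row[kc_column]] if row[kc_column] else [])
            steps.append({'student': row['Anon Student Id'],
                          'step': step,
                          'correct': 1 if row['First Attempt'] == 'correct' else 0})
    return steps, kc_of_step

# steps, kcs = load_steps('example_step_export.txt')   # hypothetical file name
# print(score_model(steps, kcs))                       # scorer sketched in the overview
</pre>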