Interaction plateau
==Brief statement of principle==
[[Step-based instruction]] is just as effective as [[natural tutoring]], and more effective than [[low-interaction instruction]].
==Description of principle==
A plateau appears when learning gains are graphed on the y-axis and degree of interactivity on the x-axis: learning gains increase as interactivity increases from [[low-interaction instruction]] to [[step-based instruction]], but the curve is flat from [[step-based instruction]] to [[natural tutoring]].
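As an illustration, the claimed shape can be sketched as follows. The plotted values are schematic placeholders chosen only to show the shape of the curve; they are not data from any study.

<pre>
# Schematic sketch of the interaction plateau (Python / matplotlib).
# The y-values are illustrative placeholders, not empirical results.
import matplotlib.pyplot as plt

conditions = ["low-interaction\ninstruction",
              "step-based\ninstruction",
              "natural\ntutoring"]
gains = [0.2, 1.0, 1.0]   # hypothetical learning gains showing the plateau

plt.plot(conditions, gains, marker="o")
plt.xlabel("Degree of interactivity")
plt.ylabel("Learning gain (schematic)")
plt.title("Interaction plateau (schematic)")
plt.show()
</pre>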
===Operational definition===
The steps of a task are defined by convention or by the instruction.  Step-based instruction ensures that students attend to the correct steps and that they are encouraged to derive them.  For instance, a tutoring system might provide a form to fill in, where each blank in the form is a step, and then provide immediate feedback and hints on each blank in order to ensure that the student derives a correct step for the blank.
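As a concrete illustration, here is a minimal sketch of such a feedback-and-hint loop, handling one blank (step) at a time. The prompts, answers, and hint texts are hypothetical; no particular tutoring system is being reproduced.

<pre>
# Minimal sketch of a step-based tutoring loop: each blank in the form
# is one step, and the tutor gives immediate feedback and escalating
# hints on that blank until the student derives the correct step.
# All prompts, answers, and hints below are hypothetical.

steps = [
    {"prompt": "Net force on the block (N)?", "answer": "12",
     "hints": ["Sum the horizontal forces.",
               "F_net = F_applied - F_friction."]},
    {"prompt": "Acceleration of the block (m/s^2)?", "answer": "4",
     "hints": ["Apply Newton's second law.",
               "a = F_net / m, with m = 3 kg."]},
]

def tutor(steps):
    for step in steps:
        hints_given = 0
        while True:
            entry = input(step["prompt"] + " ").strip()
            if entry == step["answer"]:
                print("Correct.")                    # immediate feedback on the step
                break
            if hints_given < len(step["hints"]):
                print("Hint:", step["hints"][hints_given])   # escalating hints
                hints_given += 1
            else:                                    # bottom-out: show the step
                print("The correct entry is", step["answer"])
                break

tutor(steps)
</pre>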
On the other hand, natural tutoring is more interactive.  The prototype is face-to-face human tutoring, although some natural language computer tutoring systems count as natural tutors as well.  The key attribute is that they can interact with the student at any grain size.  For instance, if a human tutor is helping a student fill in the blanks in the aforementioned form, and the student appears confused by one blank, the tutor might elicit a directed line of reasoning (Evens & Michael, 2006), in which each inference in a long series is elicited from the student and eventually leads to filling in the blank correctly.  The interaction plateau makes the counter-intuitive claim that such natural tutoring is no more effective than step-based instruction.

The interaction plateau also claims that low-interaction instruction is less effective than step-based instruction.  Low-interaction instruction is subclassified into

* read-only instruction, such as reading a textbook or watching a video, and
* low-interaction problem solving, such as doing problems with either no feedback at all or feedback on the answer only.
===Examples===
Suppose one wanted to help students learn while doing their physics homework.  Which of the following would be more effective?
* When a student wants help, the student clicks an "I need a tutor" button and gets audio-only tutoring from a human tutor who can see the student's screen.  The human tutor helps the student finish the problem, and may stay on the line to help with further homework problems.
* The student uses [http://www.andes.pitt.edu Andes], a step-based homework helper (VanLehn et al., 2005).
* The student solves the homework problem on paper and enters the answer into [http://www.webassign.net/ WebAssign], which indicates whether the answer is correct.  If it is incorrect, WebAssign may give a hint; the student can then resubmit the answer or rework the solution and enter a new one.
The interaction plateau predicts that the first two treatments will be equally effective, and that both will be more effective than the third, ceteris paribus.
==Experimental support==
===In vivo experiment support===
Reif and Scott (1999) found an interaction plateau when they compared human tutoring, a computer tutor, and low-interaction problem solving.  All students in their experiment were in the same physics class; the experiment varied only the way that the students did their homework.  One group of 15 students did their physics homework problems individually in a six-person room where “two tutors were kept quite busy providing individual help” (ibid., p. 826).  Another 15 students did their homework on a computer tutor that had them either solve a problem or study a solution.  When solving a problem, students got immediate feedback and hints on each step.  When studying a solution, they were shown steps and asked to determine which one(s) were incorrect, which forced them to derive the steps.  Thus, this computer tutor counts as step-based instruction.  The remaining 15 students merely did their homework as usual, relying on the textbook, their friends, and the course TAs for help.  The human tutors and the computer tutor produced learning gains that were not reliably different, yet both gains were reliably larger than those from the low-interaction instruction of normal homework (d=1.31 for human tutoring; d=1.01 for step-based computer tutoring).
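For reference, the effect sizes reported here are presumably Cohen's ''d'': the difference between the treatment and comparison means expressed in pooled standard deviation units, so d=1.31 means the human-tutored group's mean gain was 1.31 pooled standard deviations above that of the normal-homework group:

<math>d = \frac{\bar{x}_T - \bar{x}_C}{s_p}, \qquad s_p = \sqrt{\frac{(n_T - 1)s_T^2 + (n_C - 1)s_C^2}{n_T + n_C - 2}}</math>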
Although these results are consistent with the interaction plateau, there is a potential confound.  The human tutors and the computer tutor taught an effective problem-solving method (Heller & Reif, 1984) which may or may not have been covered in the textbook and lectures.  If not, then the poor learning gains of the untutored students may be due to their lack of access to content (the problem-solving strategy) that was available to the tutored students.  This potential confound affects only the steep part of the plateau, not the level part.
===Laboratory experiment support===
In a series of experiments, VanLehn et al. (2007) taught students to reason out answers to conceptual physics questions such as: “As the earth orbits the sun, the sun exerts a gravitational force on it.  Does the earth also exert a force on the sun? Why or why not?”  In all conditions of the experiments, students first studied a short textbook, then solved several training problems.  For each problem, the students wrote a short essay as their answer, were tutored on its flaws, and then read a correct, well-written essay.  Students were expected to apply a certain set of concepts in their essays; these comprised the correct steps.  The treatments differed in how they tutored students when the essay lacked a step or contained an incorrect step.  There were four experimental treatments: (1) human tutors who communicated with students via a text-based interface; (2) Why2-Atlas and (3) Why2-AutoTutor, both natural language computer tutors designed to approximate human tutoring; and (4) a simple step-based computer tutor that “tutored” a missing or incorrect step by merely displaying text that explained what the correct step was.  A control condition had students merely read passages from a textbook without answering conceptual questions.

The first three treatments all count as natural tutoring, so according to the interaction plateau, they should produce the same learning gains as the simple step-based tutor.  All four experimental conditions should score higher than the control condition, which is classified as read-only studying of text.  Figure ## shows the post-test scores, adjusted for pretest scores in an ANCOVA.  The four experimental conditions were not reliably different, and all were higher than the read-only studying condition by approximately d=1.0.  Thus, the results of experiments 1 and 2 support the interaction plateau.
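A toy sketch of the simple step-based treatment's logic appears below, using naive keyword matching to decide whether an expected concept (step) appears in the essay.  The real Why2 systems analyzed essays with far more sophisticated natural language processing; the concept names, keywords, and explanation texts here are hypothetical.

<pre>
# Toy sketch of the step-based treatment in the Why2 experiments: for
# each expected concept (step) missing from the student's essay, simply
# display text explaining the correct step.  Keyword matching stands in
# for the real systems' natural-language analysis; all names, keywords,
# and texts below are hypothetical.

EXPECTED = {
    "newtons_third_law": (
        ["third law", "equal and opposite"],
        "By Newton's third law, the earth exerts an equal and opposite "
        "force on the sun."),
    "forces_in_pairs": (
        ["force on the sun"],
        "Gravitational forces come in pairs: each body attracts the other."),
}

def remediate(essay):
    essay = essay.lower()
    for _, (keywords, explanation) in EXPECTED.items():
        if not any(k in essay for k in keywords):
            print(explanation)   # "tutoring" = displaying the correct step

remediate("The sun pulls on the earth, so the earth orbits it.")
</pre>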
VanLehn et al. (2007) were surprised that the four experimental conditions tied, so they ran several more experiments.  The experiments used different assessment methods (e.g., far transfer; retention), different students (those who had not yet taken college physics vs. those who had), and different numbers of training problems.  The interaction plateau was observed in all experiments except one.  In that experiment, students who had not taken college physics were trained with materials designed for students who had, and human tutoring was more effective than the simple step-based computer tutor.  This makes sense: if the materials are too far above the students’ current level of competence, reading does not suffice for comprehension, yet a human tutor can help “translate” the content into novice terms.  The last two experiments used a completely overhauled set of materials designed especially for students who had not taken college physics, and again found an interaction plateau.
In a series of experiments, Evens and Michael (2006) tutored medical students in cardiovascular physiology.  All students were first taught the basics of the baroreceptor reflex, which regulates human blood pressure.  They were then given a training problem wherein an artificial pacemaker malfunctions; students had to fill out a spreadsheet whose rows denoted physiological variables (e.g., heart rate; the blood volume per stroke of the heart) and whose columns denoted time periods.  Each cell was filled with a +, -, or 0 to indicate that the variable was increasing, decreasing, or constant.  Each such entry was a step.  The authors first developed a step-based tutoring system, CIRCSIM, that presented a short text passage for each incorrectly entered step.  They then developed a sophisticated natural language tutoring system, CIRCSIM-Tutor, which replaced the text passages with human-like typed dialogue intended to remedy not just the step but also the concepts behind it.  They also ran a read-only studying condition with an experimenter-written text, and they included conditions with expert human tutors interacting with students in typed text.  Figure ### summarizes the results from several experiments that used the same assessments and training problems but different treatments.  The treatments that count as natural tutoring (the expert human tutors and CIRCSIM-Tutor) tied with each other and with the step-based computer tutor (CIRCSIM).  The only conditions whose learning gains were significantly different were the read-only text-studying treatments.  This pattern is consistent with the interaction plateau.
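A minimal sketch of the prediction-table data structure and the step-based remediation loop follows.  The variable names, period labels, answer key, and remediation texts are hypothetical stand-ins; CIRCSIM's actual content is not reproduced here.

<pre>
# Sketch of a CIRCSIM-style prediction table: rows are physiological
# variables, columns are time periods, and each cell holds '+', '-', or
# '0' (increasing, decreasing, or constant).  Each cell entry is one
# step.  All names, labels, and texts below are hypothetical.

ANSWER_KEY = {
    ("Heart rate", "DR"): "0",      # DR = direct response (hypothetical label)
    ("Heart rate", "RR"): "+",      # RR = reflex response
    ("Stroke volume", "DR"): "-",
}

REMEDIATION = {
    ("Heart rate", "RR"): "The reflex raises sympathetic activity, "
                          "so heart rate rises.",
}

def grade(student_table):
    """Compare each cell to the key and return step-level feedback,
    as a step-based tutor in the style of CIRCSIM would display it."""
    feedback = []
    for cell, correct in ANSWER_KEY.items():
        if student_table.get(cell) != correct:
            text = REMEDIATION.get(cell, "The correct entry is '%s'." % correct)
            feedback.append((cell, text))
    return feedback

student = {("Heart rate", "DR"): "0",
           ("Heart rate", "RR"): "-",
           ("Stroke volume", "DR"): "-"}

for (variable, period), text in grade(student):
    print("%s at %s: %s" % (variable, period, text))
</pre>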
So far, all the step-based instruction has been delivered by a computer tutoring system.  However, this is not the only way to get students to derive each correct step.  Chi, Roy, and Hausmann (in press) gave students a video of a problem being solved by a human tutor and a tutee working at a whiteboard.  Students had to solve the same problem as the one being solved in the video, and they could do so any way they wished.  Other studies (VanLehn, 1998; VanLehn, Jones, & Chi, 1992) suggest that students use two main strategies when solving problems with access to an isomorphic solved problem: they either copy each step from the example, or they generate each step and check it against the example’s step.  The checking strategy counts as step-based instruction, but the copying strategy does not, since it is a fairly syntactic process.  The Chi et al. study did not control which strategy students used, but it did compare pairs of students working together on the problems (with or without a video) to students working alone (with or without a video).  Presumably, pairs are much less likely than solo students to use the copying strategy.  Thus, the pairs+video treatment (where the checking strategy was probably common) can be counted as step-based instruction, whereas the individuals+video treatment (where the copying strategy was probably common) can be set aside, as copying hardly counts as problem solving at all.  Besides these two conditions, the other conditions in the experiment were (3) human tutoring, (4) pairs solving problems with the aid of a textbook but no video, and (5) individual students with a textbook but no video.  The latter two treatments count as low-interaction problem solving, as the students had no way to tell whether the steps they generated were correct.  The results, shown in Figure ###, are consistent with the interaction plateau: human tutoring and the step-based instruction condition (pairs+video) produced the same learning gains, and these gains were reliably larger than those of the other treatments.
To summarize, all four studies displayed an interaction plateau.  Granted, none of the studies was designed to test the interaction plateau hypothesis, so classifying their conditions into low-interaction instruction, step-based instruction, and natural tutoring may seem a bit forced.  But even ignoring the names of the classes, when the treatment conditions are ordered from least interactive to most interactive, all four studies produced plateaus.
==Theoretical rationale==
Good instructors design steps that are sufficiently close together that, by the end of their homework, most students can derive every step.  Students may struggle to bridge the steps when first learning a new topic, but with step-based instruction most students eventually can do the hidden reasoning needed to bridge correctly from each step to the next.  Natural tutoring therefore provides no added value.  However, low-interaction problem solving harms learning by making it too difficult to generate correct steps, and read-only studying invites an illusion of knowing (Glenberg, Wilkinson, & Epstein, 1982) and hence less learning.
==Conditions of application==
The interaction plateau applies only to learning to solve complex, multi-step tasks.  Single-step tasks or domains without well-defined tasks are excluded.

The interaction plateau applies only when all students are taught the same content using the same tasks.
==Caveats, limitations, open issues, or dissenting views==
Although studies of non-expert tutors showed only modest learning gains (Cohen, Kulik, & Kulik, 1982), Bloom's (1984) expert tutors elicited very large (2-sigma) learning gains, larger than the gains usually found with step-based instruction.  However, Corbett (2001) has argued that current computer tutors, when allowed to use mastery learning, also achieve a 2-sigma learning gain.  Moreover, Bloom's tutors used a higher threshold for mastery than his comparison treatments, which could account for some of their benefit.  Nonetheless, the superior effectiveness of expert human tutors is so widely believed that there is likely to be at least some truth to it.
==Variations (descendants)==
==Generalizations (ascendants)==
==References==
Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. ''Educational Researcher'', 13(6), 4-16.

Chi, M. T. H., Roy, M., & Hausmann, R. G. M. (in press). Observing tutorial dialogues collaboratively: Insights about human tutoring effectiveness from vicarious learning. ''Cognitive Science''.

Cohen, P. A., Kulik, J. A., & Kulik, C.-L. C. (1982). Educational outcomes of tutoring: A meta-analysis of findings. ''American Educational Research Journal'', 19(2), 237-248.

Corbett, A. (2001). Cognitive computer tutors: Solving the two-sigma problem. In ''User Modeling: Proceedings of the Eighth International Conference'' (pp. 137-147).

Evens, M., & Michael, J. (2006). ''One-on-one Tutoring by Humans and Machines''. Mahwah, NJ: Erlbaum.

Glenberg, A. M., Wilkinson, A. C., & Epstein, W. (1982). The illusion of knowing: Failure in the self-assessment of comprehension. ''Memory & Cognition'', 10(6), 597-602.

Heller, J. I., & Reif, F. (1984). Prescribing effective human problem-solving processes: Problem descriptions in physics. ''Cognition and Instruction'', 1(2), 177-216.

Reif, F., & Scott, L. A. (1999). Teaching scientific thinking skills: Students and computers coaching each other. ''American Journal of Physics'', 67(9), 819-831.

VanLehn, K. (1998). Analogy events: How examples are used during problem solving. ''Cognitive Science'', 22(3), 347-388.

VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. P. (2007). When are tutorial dialogues more effective than reading? ''Cognitive Science'', 31(1), 3-62.

VanLehn, K., Jones, R. M., & Chi, M. T. H. (1992). A model of the self-explanation effect. ''The Journal of the Learning Sciences'', 2(1), 1-59.

VanLehn, K., Lynch, C., Schultz, K., Shapiro, J. A., Shelby, R. H., Taylor, L., et al. (2005). The Andes physics tutoring system: Lessons learned. ''International Journal of Artificial Intelligence in Education'', 15(3), 147-204.
 
[[Category:Glossary]]
[[Category:Instructional Principle]]
[[Category:PSLC General]]
